Alexis C. Madrigal: The Camera Knows Too Much

“The cameras know too much. All cameras capture information about the world—in the past, it was recorded by chemicals interacting with photons, and by definition, a photograph was one exposure, short or long, of a sensor to light. Now, under the hood, phone cameras pull information from multiple image inputs into one picture output, along with drawing on neural networks trained to understand the scenes they’re being pointed at. Using this other information as well as an individual exposure, the computer synthesizes the final image, ever more automatically and invisibly. […] Deepfakes are one way of melting reality; another is changing the simple phone photograph from a decent approximation of the reality we see with our eyes to something much different. It is ubiquitous and low temperature, but no less effective.” Alexis C. Madrigal, “No, You Don’t Really Look Like That. A guide to the new reality-melting technology in your phone’s camera”, in The Atlantic, December 18, 2018
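Madrigal’s point, that a phone “photograph” is synthesized from multiple image inputs rather than a single exposure, can be illustrated with a minimal sketch. The frame values below are hypothetical, and real pipelines use far more sophisticated alignment and tone mapping than a plain average:

```python
import numpy as np

# Three simulated "exposures" of the same 2x2 grayscale scene
# (hypothetical values; a phone captures many frames in quick succession).
frames = [
    np.array([[80, 190], [30, 240]], dtype=np.float64),   # normal frame
    np.array([[50, 160], [10, 210]], dtype=np.float64),   # darker frame
    np.array([[140, 250], [70, 255]], dtype=np.float64),  # brighter frame
]

# A crude merge: average the frames, so each output pixel is computed
# from several inputs rather than recorded by a single exposure.
merged = np.mean(frames, axis=0)
print(merged)  # no single input frame contains these exact values
```

The merged pixel values (e.g. 90 and 200 in the top row) appear in none of the input frames, which is the sense in which the final picture is a synthesis rather than a recording.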

Trevor Paglen: Apes

“Neural networks cannot invent their own classes; they’re only able to relate images they ingest to images that they’ve been trained on. And their training sets reveal the historical, geographical, racial, and socio-economic positions of their trainers. […] engineers at Google decided to deactivate the “gorilla” class after it became clear that its algorithms trained on predominantly white faces and tended to classify African Americans as apes.” Trevor Paglen, “Invisible Images (Your Pictures Are Looking at You)”, in The New Inquiry, December 8, 2016
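Paglen’s observation that neural networks “cannot invent their own classes” follows from how classifiers are built: the output layer is fixed at training time, so every input is forced into one of the trained categories. A minimal sketch (label set and logit values are hypothetical, not from any real system):

```python
# Hypothetical label set fixed at training time; the network can only
# ever answer with one of these, however unfamiliar the input is.
CLASSES = ["cat", "dog", "car", "tree"]

def classify(logits):
    """Map raw network outputs (one score per class) to a trained label."""
    return CLASSES[logits.index(max(logits))]

# Even a nonsense or out-of-distribution input is forced into the
# nearest trained category; there is no "none of the above".
print(classify([0.1, 0.2, 2.5, 0.3]))  # -> "car"
```

Removing a problematic label, as the quote describes Google doing with “gorilla”, amounts to deleting an entry from this fixed list rather than teaching the network anything new.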

Trevor Paglen: When you put an image on Facebook

“[…] something completely different happens when you share a picture on Facebook than when you bore your neighbors with projected slide shows. When you put an image on Facebook or other social media, you’re feeding an array of immensely powerful artificial intelligence systems information about how to identify people and how to recognize places and objects, habits and preferences, race, class, and gender identifications, economic statuses, and much more.

Regardless of whether a human subject actually sees any of the 2 billion photographs uploaded daily to Facebook-controlled platforms, the photographs on social media are scrutinized by neural networks with a degree of attention that would make even the most steadfast art historian blush.” Trevor Paglen, “Invisible Images (Your Pictures Are Looking at You)”, in The New Inquiry, December 8, 2016

Trevor Paglen: Digital Images Don’t Need Human Eyes

“What’s truly revolutionary about the advent of digital images is the fact that they are fundamentally machine-readable: they can only be seen by humans in special circumstances and for short periods of time. A photograph shot on a phone creates a machine-readable file that does not reflect light in such a way as to be perceptible to a human eye. A secondary application, like a software-based photo viewer paired with a liquid crystal display and backlight may create something that a human can look at, but the image only appears to human eyes temporarily before reverting back to its immaterial machine form when the phone is put away or the display is turned off. However, the image doesn’t need to be turned into human-readable form in order for a machine to do something with it. […] The fact that digital images are fundamentally machine-readable regardless of a human subject has enormous implications. It allows for the automation of vision on an enormous scale and, along with it, the exercise of power on dramatically larger and smaller scales than have ever been possible.” Trevor Paglen, “Invisible Images (Your Pictures Are Looking at You)”, in The New Inquiry, December 8, 2016
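Paglen’s claim that a digital image “doesn’t need to be turned into human-readable form in order for a machine to do something with it” can be made concrete: a photo is just an array of numbers that software can analyze and act on without any display step. A minimal sketch using synthetic pixel data (the brightness threshold is an arbitrary assumption):

```python
import numpy as np

# A stand-in for a decoded photo: an array of numbers that is never
# rendered to a screen (synthetic data for illustration).
rng = np.random.default_rng(0)
photo = rng.integers(0, 256, size=(480, 640, 3), dtype=np.uint8)

# A machine can act on the file directly, e.g. flag dark images for
# reprocessing, while the image stays in its "immaterial machine form".
brightness = photo.mean()
is_dark = brightness < 64  # hypothetical threshold
print(brightness, is_dark)
```

The decision here is taken entirely on numeric data; rendering for a human eye, via a display and backlight, would be an optional extra step, which is Paglen’s point.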

Trevor Paglen: Invisible Images

“over the last decade or so, something dramatic has happened. Visual culture has changed form. It has become detached from human eyes and has largely become invisible. Human visual culture has become a special case of vision, an exception to the rule. The overwhelming majority of images are now made by machines for other machines, with humans rarely in the loop. The advent of machine-to-machine seeing has been barely noticed at large, and poorly understood by those of us who’ve begun to notice the tectonic shift invisibly taking place before our very eyes.” Trevor Paglen, “Invisible Images (Your Pictures Are Looking at You)”, in The New Inquiry, December 8, 2016

Nicholas Mirzoeff: Computational Images

“All the “images,” whether moving or still, that appear in the new archives are variants of digital information. Technically, they are not images at all, but rendered results of computation. […] A modern camera still makes a shutter sound when you press the button, but the mirror that used to move, making that noise, is no longer there. The digital camera references the analog film camera without being the same. In many cases, what we can “see” in the image, we could never see with our own eyes. What we see in the photograph is a computation, itself created by “tiling” different images that were further processed to generate color and contrast. It is a way to see the world enabled by machines.” Nicholas Mirzoeff, How to See the World. An Introduction to Images, from Self-portraits to Selfies, Maps to Movies, and More, Basic Books, New York, 2016
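Mirzoeff’s remark that what we see in a photograph is “a computation, itself created by ‘tiling’ different images” can be illustrated with a toy demosaic. Camera sensors record one color per site in a mosaic pattern (here an RGGB Bayer tile, with made-up values); the full-color pixel is computed from neighbors, not captured:

```python
import numpy as np

# A tiny Bayer mosaic tile (RGGB): each sensor site records only one
# color channel (illustrative values).
raw = np.array([
    [120.0, 60.0],   # R  G
    [ 70.0, 40.0],   # G  B
])

# A crude demosaic for the tile: take red from the R site, blue from
# the B site, and average the two green sites.
r = raw[0, 0]
g = (raw[0, 1] + raw[1, 0]) / 2
b = raw[1, 1]
print([r, g, b])  # a full-color "pixel" no sensor element ever measured
```

The green value 65.0 was never measured by any sensor element; like the color and contrast processing Mirzoeff describes, it is the rendered result of computation.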