Rosemary Lee, Machine Learning and Notions of the Image, PhD thesis, University of Copenhagen, 2020.
Lev Manovich, “Computer vision, human senses, and language of art”, in AI & Society, November 22, 2020.
Karras et al. (Nvidia), This Person Does Not Exist, 2019.
“The cameras know too much. All cameras capture information about the world—in the past, it was recorded by chemicals interacting with photons, and by definition, a photograph was one exposure, short or long, of a sensor to light. Now, under the hood, phone cameras pull information from multiple image inputs into one picture output, along with drawing on neural networks trained to understand the scenes they’re being pointed at. Using this other information as well as an individual exposure, the computer synthesizes the final image, ever more automatically and invisibly. […] Deepfakes are one way of melting reality; another is changing the simple phone photograph from a decent approximation of the reality we see with our eyes to something much different. It is ubiquitous and low temperature, but no less effective.” Alexis C. Madrigal, “No, You Don’t Really Look Like That. A Guide to the New Reality-Melting Technology in Your Phone’s Camera”, in The Atlantic, December 18, 2018.
“Artists who fetishize the medium, whatever that medium, they’re just generally not good artists. A good artist, a real artist, will reflect on the implications of a technological revolution like AI and they’ll use it to show certain implications on our subjectivity […] Art has only a little bit to do with creating innovative forms or imagining new patterns. Art is rather a kind of empirical philosophy. It’s like doing philosophy through practical means.” Naomi Rea, “Super-Curator Carolyn Christov-Bakargiev Talks About Hito Steyerl’s Latest Work and Why AI Is Actually ‘Artificial Stupidity’”, in Artnet News, November 26, 2018.
“The role that I’m playing is more of a curator than anything else. The first thing I do is curate the data that the network is going to study, and once the network is trained I also curate the output data, deciding which outputs I am going to keep and whether I want to tweak the training sets so that the network learns something different. So all of the work that I do is really through curation.” Seth Thompson, “The Artist, The Curator, and the Neural Net: A Conversation with Robbie Barrat”, in Paprika!, November 8, 2018.
“Ideology’s ultimate trick has always been to present itself as objective truth, to present historical conditions as eternal, and to present political formations as natural. Because image operations function on an invisible plane and are not dependent on a human seeing-subject (and are therefore not as obviously ideological as giant paintings of Napoleon) they are harder to recognize for what they are: immensely powerful levers of social regulation that serve specific race and class interests while presenting themselves as objective.
The invisible world of images isn’t simply an alternative taxonomy of visuality. It is an active, cunning, exercise of power, one ideally suited to molecular police and market operations–one designed to insert its tendrils into ever-smaller slices of everyday life. […] Machine-machine systems are extraordinarily intimate instruments of power that operate through an aesthetics and ideology of objectivity, but the categories they employ are designed to reify the forms of power that those systems are set up to serve.” Trevor Paglen, “Invisible Images (Your Pictures Are Looking at You)”, in The New Inquiry, December 8, 2016.
“Neural networks cannot invent their own classes; they’re only able to relate images they ingest to images that they’ve been trained on. And their training sets reveal the historical, geographical, racial, and socio-economic positions of their trainers. […] engineers at Google decided to deactivate the “gorilla” class after it became clear that its algorithms trained on predominantly white faces and tended to classify African Americans as apes.” Trevor Paglen, “Invisible Images (Your Pictures Are Looking at You)”, in The New Inquiry, December 8, 2016.
“[…] something completely different happens when you share a picture on Facebook than when you bore your neighbors with projected slide shows. When you put an image on Facebook or other social media, you’re feeding an array of immensely powerful artificial intelligence systems information about how to identify people and how to recognize places and objects, habits and preferences, race, class, and gender identifications, economic statuses, and much more.
Regardless of whether a human subject actually sees any of the 2 billion photographs uploaded daily to Facebook-controlled platforms, the photographs on social media are scrutinized by neural networks with a degree of attention that would make even the most steadfast art historian blush.” Trevor Paglen, “Invisible Images (Your Pictures Are Looking at You)”, in The New Inquiry, December 8, 2016.