Today words and images are far more likely to be read or viewed by machines than by human beings, as enormous quantities of data are filtered, correlated, aggregated and sorted by algorithms designed for digital archives, search engine portals, social network sites and systems policing intellectual property, national security, public safety, civic propriety, medical normality, economic productivity and gender conformity. The use of these algorithms as a substitute for human interpreters raises many anxieties, particularly among those who worry about omnipresent surveillance or about a total abdication of human oversight in the very near future. Biometrics – particularly facial recognition and fingerprint authentication – can dictate conditions of access and participation, and users’ future mobility may also be determined by the trajectories of self-driving vehicles equipped with machine vision technologies.

Critics like Jill Walker Rettberg have noted that the last great technological change in visual culture, during the Early Modern period, brought the sciences and the humanities closely together. Renaissance humanist thinkers considered the philosophical, aesthetic and cultural ramifications of techniques for representing linear perspective and anatomical proportion, as well as of optical devices like the camera obscura and the telescope, and they also considered how the “self” was constituted by these technologies. Yet the machine vision revolution has remained relatively unexamined in the humanities, even in the digital humanities. This talk considers how we see ourselves in the black box of machine vision and what this means for our possible futures as subjects, citizens and feminists.

The lecture took place in the framework of the Community College program The Black Box Issues, accompanying the exhibition “Hysterical Mining” (29/5 – 6/10/2019).

More information: Kunsthalle Wien
