
Lila Lee-Morrison presenting her research on machinic facial recognition. Photo: Malthe Stigaard

Inspired by philosopher Yuk Hui, the Studium Generale of the Gerrit Rietveld Academie spent three days ‘thinking about divergences within technological development’.

When I took a seat in the already darkened Teijin Auditorium, up the first staircase on the left in the Stedelijk Museum, I didn’t quite know what to expect from the day ahead of me. What I did know was the overall theme of Rietveld’s 23/24 Studium Generale, ‘Technodiversity’, and the title of the conference I was attending: ‘Technodiversity – Beyond Datafication and Digital Colonialism.’ I was going to sit in that chair for the next six hours – plus an hour’s lunch break in the middle – and I was both slightly concerned and thrilled at the prospect.

Curator Karen Archey, who was invited by the organisers Jorinde Seydel and Jort van der Laan to host the first of the two conference days, shared my concern about the many hours we were going to spend sitting, and in her introductory remarks she kindly encouraged the audience to take breaks if needed, recognising that we have bodies that get tired, ache, and wear out with time. Poignantly, she titled her contribution ‘Soft Machines: Technology and Material Impact.’ The scope of the conference was to explore ways of resisting the commodification of our bodies for data purposes; Archey’s take on this focused on the human body in relation to control technologies.

‘Technodiversity’ is a term conceptualised by philosopher Yuk Hui; it entails an understanding of technologies as closely interconnected with other social and biological forms of knowledge, and calls for ‘thinking about divergences within technological development.’ The day kicked off very far from this place, with a screening of the short film Eye/Machine by German director Harun Farocki. The visual essay addresses what Farocki calls ‘operational images’: images primarily created for machine eyes – i.e. military instruments – devoid of any social intent, ghostly expressions of machinic communication. Strikingly, considering it was made in 2000, the film ends on an almost prophetic note, defining operational images as ‘an ad for automated intelligence.’ Fast forward 24 years, and the first panellist of the day was Lila Lee-Morrison, a postdoctoral researcher at Lund University in Sweden whose PhD dissertation focused on facial recognition technologies and machinic vision of the human face. Her talk drew a clear connection between Farocki’s dystopian account of machinic vision and the use we are making of it today. Specifically, she pointed out how personal identities are becoming dependent on statistical models, the human features of a face levelled into averages and strings of code to facilitate the automated gaze. If this recalls late 19th-century explorations of eugenics, Lee-Morrison tells us, it is not a coincidence. What started – in part – as a visual attempt to bring clarity to a growing global population now lies at the conceptual base of biometric recognition, a tool whose main political uses are border control and governmental surveillance – a process that often extracts information from bodies that have not consented.

The programme of the day indicated it was time for a lunch break. When we walked back into the auditorium, the room was even darker than before: a single figure stood behind the microphone, and a purple laser beam, moving in time to electronic sounds, was tracing what looked like children’s drawings on the wall. Kate Cooper’s lecture performance ‘Ground Truth’ brought us into a different space; a soft, echoey, dark womb chequered with the artist’s voice reading her reflections on AI, language formation, and child learning. What happens if we juxtapose a child’s early development with an artificial intelligence in training? What questions emerge? Touching on current news, legal quibbles – should works made without human involvement be eligible for copyright? – and lived experience, we learned that the drawings were made by the artist’s child. Cooper delineates divergences and similarities between two very different kinds of learners.

When the lights went up again, we could see two low couches at the centre of the stage. They hosted a conversation between Karen Archey and Ramon Amaro, senior researcher in Digital Culture at Het Nieuwe Instituut and author of the book The Black Technical Object: On Machine Learning and the Aspiration of Black Being (2023). It is difficult to summarise in a few words what the conversation touched upon. Informed by his incredibly varied background, Amaro illustrated how algorithms are a reflection of human bias, with particular attention to racism. He made the point that if one breaks down the mathematics behind the algorithms employed in machine learning – mathematics elaborated in the 16th and 17th centuries – one notices that those calculations came into the world already carrying racial and societal bias. There is therefore no way that the language machines speak, understand, and express themselves in is not biased as well. The algorithm is an echo of us, but a distorted echo, a partial shadow. ‘Saying that the algorithm is giving invisibility to Black people is giving recognition and power to the algorithm. No. (Invisibility) comes from preconditioned racist perceptions.’ On a closing note, Amaro brought the discourse back to the human. He encouraged us to look at algorithms from a fugitive perspective, and to exercise proximity with one another: to make that leap, to empathise, and to arrive at being in alliance and solidarity.

I stretched my back while the last panellist took the stage. Minahasa artist Natasha Tontey briefly introduced the two films we were about to watch, Garden Amidst the Flame (2022) and Of Other Tomorrows Never Known (2023). Both took us into a dark, ironic, over-saturated universe populated by fictional characters inspired by Minahasa culture – a culture shared by communities in Northern Sulawesi, Indonesia. Through the lens of their rituals and their approach to community, Tontey merges ancient principles of trans-generational dialogue, understanding, and cooperation with the current discourse on technologies and artificial intelligence. Through performance and storytelling she creates a queer landscape where spirits and machines are – naturally, one is tempted to say – part of the same narration.

A conversation between the four panellists and Karen Archey wrapped up the day. The audience would meet again the next day, a Thursday, curated by artist and filmmaker Zach Blas. I wasn’t going to be there, but I briefly came back on Friday night for a performance programme hosted by dragtivist and art educator Taka Taka. The same auditorium became the space for multiple conversations between the host and people from their community, old friends or recent acquaintances. Bodies too commonly perceived as subversive and unruly stepped onto the stage to share some of their lived experiences – something that made me think of Fournier’s autotheory, the constant effort to bring your personal experience in, to make it heard and visible, to let it become part of the story.

Some words from Timothy Morton’s All Art is Ecological (2021) now come to mind: ‘How I interpret data will depend on what I think I want to find. How I see myself depends on the kind of person I am. How I interpret things is entangled with prefabricated concepts about what interpreting means.’ Once we realise this, we can ask ourselves: how do we break this pattern? Imagination can be a way. Through imagining, we are able to formulate possibilities. We can picture the not-yet, and remain open to a certain degree of ambiguity and flexibility. The computational, algorithmic language to which we are ceding ever more agency in representing and shaping us does not allow for these qualities; algorithms shun ambiguity and flexibility. By practising looking and listening from ‘embodied and embedded, relational and affective positions’ – I thank Rosi Braidotti for these words – we can adopt ambiguity and flexibility as tools, bring them in, and learn with them.

Gerrit Rietveld Academie’s 23/24 Studium Generale ‘Technodiversity’ took place from March 20 until March 24 at Stedelijk Museum Amsterdam

Beatrice Cera

is an Italian art worker, curator, and designer. In her work, she embraces collective practice as a process for fostering political responsibility.
