Artnet News | ‘No Need to Be Afraid of These New Tools’: Lynn Hershman Leeson, Sarah Meyohas, and Cai Guo-Qiang on Our A.I. Futures

December 22, 2023

BY MIN CHEN

 

This is part of a series on how artists are approaching the age of A.I. Read part one here and part two here.

 

A.I. has arrived for artists—what now? When algorithms can be relied on to generate ideas and visuals with just a click, artists face a future where creativity could mean hybridity, where humans and technology work ever more in concert. Such a prospect has sparked fears about the coming of machine intelligence and, worse still, that dreaded singularity.

 

But as the following artists tell it, A.I. tools are nothing to be afraid of. In their words, it’s up to individuals to code, program, and exercise control over the machines—and up to artists to take them even further. In the final part of our series on how artists are approaching A.I., Sarah Meyohas, Lynn Hershman Leeson, and Cai Guo-Qiang expand on a brave new world where the tool isn’t just one of us, but part of us.

 

 (Excerpt from the full article below)

 

Sarah Meyohas

 

It’s only been a few years since Meyohas compiled a dataset of 100,000 rose petal images to train an algorithm to generate yet more petals. The technology has accelerated so much since then, she said, that humanity itself is possibly “becoming an obsolete technological form.” For Meyohas, what artists do in the face of this shift is probably the one prompt that can’t be fed into A.I.

 

As an artist who has long been creating with and alongside A.I., what do you make of its recent developments?
In late 2019, I heard through a friend that OpenAI was working on text-to-image models, and I was fascinated that this was a possibility. I asked to be put in touch with one of the engineers and started sending prompts in early 2020. I was sending really evocative poetry that was visual but abstracted, and the images I got back were so bad that they were just unusable. It was just a bunch of noise, and there was nothing poetic that you could read into them.

 

But today, it’s become like a search bar. Search is the dominant way of creating images now—but you’re not producing images, you’re searching for them. The closest analogue is stock photography, though the main difference is that stock images already exist; [generative] A.I. synthesizes the possibility of an image out of a latent space. The basis of stock photography was its link to language, because images were catalogued on index cards. It was only by indexing these images with categorizable keywords that you could turn them into monetizable assets.

 

It’s been really interesting to me to see this continuation. With A.I., the image itself is worthless; what has worth are the statistical patterns that lead from people’s prompts to results. It’s a paradigm shift.

 

How do you see the relationship between A.I. and human artists evolving in the future?
In a world where you can make images by searching for them in a latent space, as an artist, what do you make? And that’s the prompt for everybody, because our visual culture is already inundated with images and is only going to get more so.

 

I think that in the future, the onus is on creators to build their own models. These models are going to be available—the private models and the open-source ones right behind them—so your data and what you’re training on are going to be so important, both for artists and for the world.