These days I’m mostly thinking about machines that learn, machines that think; perception, cognition, bias, prejudice, social and political polarization, etc. The current rise of big-data-driven, so-called ‘AI’ acts as a rather apt mechanism through which to reflect on all of this.
I generally try to avoid using the term ‘AI’ - unless I’m specifically referring to the academic field - as it’s very open to misinterpretation and needlessly contentious disagreement over terminology. Once, after a panel, I had a member of the audience approach me and rather angrily explain to me that AlphaGo (DeepMind’s software which beat the world champion Go player) could not be considered ‘AI’ because it had no ‘sense of self,’ which is okay, I guess. But it’s also why instead I say these days I work with machine learning, a term that’s easier to define – a system which is able to improve its performance on a particular task as it gains experience. More specifically, I work with deep learning, a form of machine learning which is able to operate on vast amounts of ‘raw,’ high-dimensional data to learn hierarchies of representations. I also think of it as the process of extracting meaningful information from big data. A more encompassing term which can refer to what we usually mean by ‘AI’ these days is ‘data-driven methods or systems,’ and specifically ‘big-data-driven methods or systems.’
R: So what you’re interested in is not the technology itself, but the effect on society? If, let’s say, pigeon catching was the latest tech revolution, would you be working on that instead?
M: If it impacted our world in such a massive way as the current big-data-driven systems do, I probably would. For example, I’m also very interested in the blockchain, but I do not feel it is as urgent a topic. Maybe it will be in a few years… (especially with the energy consumption!).
R: AI-generated art surely feels like a hot topic right now with the recent market hype around the Obvious sale at Christie’s [an AI generated painting that fetched $432,000 in October, 2018]. What do you make of it?
M: First, I’d like to set the context for this discussion by bringing to attention the fact that the art market is a place where, with the right branding, you can sell a pickled shark for $8 million. The art market is ultimately the purest expression of the free, open market. The price of an object is determined by how much somebody is willing to pay for it, which is not necessarily related to its cultural value.
I decided not to talk about this before the auction because I feel the negative press and pushback from other folks in the field created too much controversy and fueled the hype. Articles came out daily with opinions from experts, and I’m sure all of this hype inflated the price [the painting was initially estimated at $8,000–10,000].
There’s a spectrum of approaches to the practicalities of making work in this field with generative deep neural networks:
Train on your own data with your own (or heavily modified) algorithms
Train on your own data with off-the-shelf (or lightly modified) algorithms (e.g. Anna Ridler, Helena Sarin)
Curate your own data and use your own (or heavily modified) algorithms (e.g. Mario Klingemann, Georgia Ward Dyer)
Curate your own data and use off-the-shelf (or lightly modified) algorithms
Use existing datasets and train with heavily modified algorithms
Use existing datasets and train with off-the-shelf (or lightly modified) algorithms (this is what Obvious has done)
Use pre-trained models and algorithms (e.g., most DeepDream work, the recent BigGAN, etc.)
Personally, I think it is possible to make interesting work at every point on this spectrum (and I have tried every single one!). But as you get towards the latter end of the spectrum, you’ll need to work harder to give it a unique spin and make it your own. And I think a very valid approach is to conceptually frame the work in a unique way, even if using existing datasets, or even pre-trained models.
Robbie [Barrat], a young artist, was very upset that Obvious stole his code (which was open source with a fully permissive license at the time). It’s true that they used his code, especially to download the data. But it’s important to remember that the code which actually trains and generates the images is from [ML developer/researcher] Soumith Chintala, which Robbie had forked [copied] from. And the data is already online and open (in fact, I had also trained the exact same models on the exact same data, and I know others did, too). What actually shapes the output and defines what the resulting images look like is the data - which is already out there and available to download - and the algorithm - which, in this case, is a Generative Adversarial Network (GAN) implemented by Chintala. Anybody who puts that same data through that same algorithm (whether it’s Chintala’s code, or other implementations, even in other programming languages) will get the exact same (or incredibly similar) results.
I’ve seen some comments suggesting that the Obvious work was intentionally commenting on this issue of authorship, perhaps in a lineage of appropriation art, similar to Richard Prince’s Instagram Art, etc. But I don’t think that is the case, judging by Obvious’ interviews and press release. Instead, Obvious seems to be going down the ‘can a machine make art?’ angle, which is a very interesting question. Lady Ada Lovelace was already writing about this in 1843, and there have been countless debates, writings, musings, and works on this since then. So personally, I would look for a little bit more than just a random sample from a GAN as a contribution to that discussion. Like I mentioned, what somebody is willing to pay for an artifact is not necessarily related to its cultural value. If a student were to make this work, I would try to be very positive and encouraging, and say, “Great work on figuring out how to download the code and to get it to run. Now start exploring and see where you go.”
On a side note, I’m not a huge fan of the label ‘AI art,’ partly because I’m not a fan of the term ‘AI,’ but beyond that, because the term ‘AI art’ is somehow infused with the idea that only the art being made with these very recent algorithms is ‘AI art,’ whatever that means. I definitely do not consider myself an ‘AI artist.’ If anything, I’m a computational artist, since computation is the common medium in all of my work. People make art by writing software, and have done so for 60 or so years (I’m thinking John Whitney, Vera Molnar, etc.), or even more specifically, Harold Cohen was making ‘AI art’ 50 years ago. In a tiny corner of the computational art world, Generative Adversarial Networks (GANs) are quite popular today, because they’re relatively easy to use and, for very little effort, produce interesting results. Ten to fifteen years ago, I remember Delaunay triangulation being very popular, because again, for relatively little effort, you could produce very interesting and aesthetically pleasing results (and I’m guilty of this, too). And in the ‘80s and ‘90s, we saw computational artists using Genetic Algorithms (GA), e.g., William Latham, Stephen Todd, Karl Sims, Scott Draves, etc. (On a side note, GA is a subfield of AI, so technically they are all AI artists, too.) Computational art will continue, it will grow, and the tool palette available to computational artists will expand. And it’s fantastic that new algorithms like GANs attract the attention of new artists and lure them in. But I will just avoid the term ‘AI art’ and call them computational artists or software artists or generative artists or algorithmic artists.
R: That’s it for market sentiment, then. Let’s focus on your practice again. What projects are you currently working on?
M: There are a few angles that I’m pursuing, all very research-oriented. First is a theme that I’ve been investigating for a while now, which is looking at how emerging technologies – in this case, deep learning – can augment our ability to creatively express ourselves, particularly in a realtime, interactive manner with continuous control, analogous to playing a musical instrument like a piano. How can I create computational systems, now using deep learning, that give people meaningful control and enable them to feel like they are able to creatively and even emotionally express themselves?
From a more conceptual angle, I’m interested in using machines that learn as a way to reflect on how we make sense of the world. Artificial neural networks [systems of hardware and/or software very loosely inspired by, but really nothing like, the operation of neurons in biological brains] are incredibly biased and problematic. They’re complicated, but they can be very predictable, as well. Just like us. I don’t mean artificial neural networks are like our brain; I mean I just like using them as a mirror to ourselves. We can only understand the world through the lens of everything that we’ve seen or heard or read before. We are constantly trying to make sense of everything that we experience based on our past experiences. We see things not as they are, but as we are. And that’s what I’m interested in exploring and exposing. Some of my work tries to combine both of these (and other) themes. My Learning to See series, for example, tries to do this: it is both a system for realtime expression, a potential new form of filmmaking and digital puppetry, and ultimately a demonstration of this extreme bias. One who has only ever seen thousands of images of the ocean will see the ocean everywhere they look.