Memo Akten, Learning to see: We are made of star dust (#2), 2017
“A deep neural network making predictions on live camera input, trying to make sense of what it sees, in context of what it’s seen before. It can see only what it already knows, just like us. Trained on images from the Hubble telescope. (not 'style transfer'!)”
“If we do not use technology to see things differently, we are wasting it.”
- Memo Akten
I met Memo Akten just before he caught the train to London, where he is currently developing some exciting projects and pursuing a PhD in machine learning.
R: Memo, I first contacted you for an article I was writing on artificial intelligence and the art market (you can read it here). The timing was too tight, though, so I’m glad we’re meeting today to discuss your art practice more broadly! But let’s start with AI anyway: as an artist who has long been active in this field, I’m curious to hear your analysis of the field as it stands today. Can you briefly explain who you are and what you do?
M: Broadly speaking, I work with emerging technologies as both a medium and a subject matter, looking at their impact on us as individuals, as extensions of our mind and body, and their impact on society, culture, tradition, ritual, etc.
These days I’m mostly thinking about machines that learn, machines that think; perception, cognition, bias, prejudice, social and political polarization, etc. The current rise of big-data-driven, so-called ‘AI’ acts as a rather apt mechanism through which to reflect on all of this.
I generally try to avoid using the term ‘AI’ - unless I’m specifically referring to the academic field - as it’s very open to misinterpretation and invites unnecessary, heated disagreements over terminology. Once, after a panel, a member of the audience approached me and rather angrily explained that AlphaGo (DeepMind’s software which beat the world champion Go player) could not be considered ‘AI’ because it had no ‘sense of self,’ which is okay, I guess. But it’s also why these days I say I work with machine learning instead, a term that’s easier to define – a system which is able to improve its performance on a particular task as it gains experience. More specifically, I work with deep learning, a form of machine learning which is able to operate on vast amounts of ‘raw,’ high-dimensional data to learn hierarchies of representations. I also think of it as the process of extracting meaningful information from big data. A more encompassing term for what we usually mean by ‘AI’ these days is ‘data-driven methods or systems,’ and specifically ‘big-data-driven methods or systems.’
R: So what you’re interested in is not the technology itself, but the effect on society? If, let’s say, pigeon catching was the latest tech revolution, would you be working on that instead?
M: If it impacted our world in such a massive way as the current big-data-driven systems do, I probably would. For example, I’m also very interested in the blockchain, but I do not feel it is as urgent a topic. Maybe it will be in a few years… (especially with the energy consumption!).
R: AI-generated art surely feels like a hot topic right now with the recent market hype around the Obvious sale at Christie’s [an AI-generated painting that fetched $432,000 in October 2018]. What do you make of it?
M: First, I’d like to set the context for this discussion by bringing to attention the fact that the art market is a place where, with the right branding, you can sell a pickled shark for $8 million. The art market is ultimately the purest expression of the free, open market. The price of an object is determined by how much somebody is willing to pay for it, which is not necessarily related to its cultural value.
I decided not to talk about this before the auction because I feel the negative press and pushback from other folks in the field created too much controversy and fueled the hype. Articles came out daily with opinions from experts, and I’m sure all of this hype inflated the price [the painting was initially estimated at $8-10 thousand].
There’s a spectrum of approaches to the practicalities of making work in this field with generative deep neural networks:
Train on your own data with your own (or heavily modified) algorithms
Curate your own data and use off-the-shelf (or lightly modified) algorithms
Use existing datasets and train with heavily modified algorithms
Use existing datasets and train with off-the-shelf (or lightly modified) algorithms (this is what Obvious has done)
Use pre-trained models and algorithms (e.g., most DeepDream work, the recent BigGAN, etc.)
Personally, I think it is possible to make interesting work around each of these poles (and I have tried every single one!). But as you get towards the end of the spectrum, you’ll need to work harder to give it a unique spin and make it your own. And I think a very valid approach is to conceptually frame the work in a unique way, even if using existing datasets, or even pre-trained models.
Robbie [Barrat], a young artist, was very upset that Obvious stole his code (which was open source with a fully permissive license at the time). It’s true that they used his code, especially to download the data. But it’s important to remember that the code which actually trains and generates the images is from [ML developer/researcher] Soumith Chintala, which Robbie had forked [copied] from. And the data is already online and open (in fact, I had also trained the exact same models on the exact same data, and I know others did, too). What actually shapes the output and defines what the resulting images look like is the data - which is already out there and available to download - and the algorithm - which, in this case, is a Generative Adversarial Network (GAN) implemented by Chintala. Anybody who puts that same data through that same algorithm (whether it’s Chintala’s code, or other implementations, even in other programming languages) will get the exact same (or incredibly similar) results.
I’ve seen some comments suggesting that the Obvious work was intentionally commenting on this issue of authorship, perhaps in a lineage of appropriation art, similar to Richard Prince’s Instagram Art, etc. But I don’t think that is the case, judging by Obvious’ interviews and press release. Instead, Obvious seems to be going down the ‘can a machine make art?’ angle, which is a very interesting question. Lady Ada Lovelace was already writing about this in 1843, and there have been countless debates, writings, musings, and works on this since then. So personally, I would look for a little bit more than just a random sample from a GAN as a contribution to that discussion. Like I mentioned, what somebody is willing to pay for an artifact is not necessarily related to its cultural value. If a student were to make this work, I would try to be very positive and encouraging, and say, “Great work on figuring out how to download the code and to get it to run. Now start exploring and see where you go.”
On a side note, I’m not a huge fan of the label ‘AI art,’ because I’m not a fan of the term ‘AI,’ but beyond that, because the term ‘AI art’ is somehow infused with the idea that only the art being made with these very recent algorithms is ‘AI art’, whatever that means. I definitely do not consider myself an ‘AI artist.’ If anything, I’m a computational artist, since computation is the common medium in all of my work. People make art by writing software, and have done so for 60 or so years (I’m thinking John Whitney, Vera Molnar, etc.), or even more specifically, Harold Cohen was making ‘AI art’ 50 years ago. In a tiny corner of the computational art world, Generative Adversarial Networks (GANs) are quite popular today, because they’re relatively easy to use and, for very little effort, produce interesting results. Ten to fifteen years ago, I remember Delaunay triangulation being very popular because, again, for relatively little effort you could produce very interesting and aesthetically pleasing results (and I’m guilty of this, too). And in the ‘80s and ‘90s, we saw computational artists using Genetic Algorithms (GA), e.g., William Latham, Stephen Todd, Karl Sims, Scott Draves, etc. (On a side note, GA is a subfield of AI, so technically they are all AI artists, too.) Computational art will continue, it will grow, and the tool palette available to computational artists will expand. It’s fantastic that new algorithms like GANs attract the attention of new artists and lure them in. But I will just avoid the term ‘AI art’ and call them computational artists or software artists or generative artists or algorithmic artists.
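[Ed.: the low-poly Delaunay look Akten mentions really is only a few lines of code today, which is partly why it spread so widely. A minimal sketch, assuming NumPy and SciPy are installed:]

```python
import numpy as np
from scipy.spatial import Delaunay

# Scatter random 2D points and triangulate them. Drawing the resulting
# triangles (e.g., filled with colors sampled from a photograph) gives the
# 'low-poly' aesthetic that was ubiquitous in computational art for a while.
rng = np.random.default_rng(7)
points = rng.random((30, 2))
tri = Delaunay(points)

# Each row of tri.simplices holds the indices of one triangle's three
# vertices; indexing the point array with it yields the triangle coordinates.
triangles = points[tri.simplices]  # shape: (num_triangles, 3, 2)
```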
R: That’s it for market sentiment, then. Let’s focus on your practice again. What projects are you currently working on?
M: There are a few angles that I’m pursuing, all very research-oriented. First is a theme that I’ve been investigating for a while now, which is looking at how emerging technologies – in this case, deep learning – can augment our ability to creatively express ourselves, particularly in a realtime, interactive manner with continuous control - analogous to playing a musical instrument, like a piano. How can I create computational systems, now using deep learning, that give people meaningful control and enable them to feel like they are able to creatively and even emotionally express themselves?
From a more conceptual angle, I’m interested in using machines that learn as a way to reflect on how we make sense of the world. Artificial neural networks [systems of hardware and/or software very loosely inspired by, but really nothing like, the operation of neurons in biological brains] are incredibly biased and problematic. They’re complicated, but can be very predictable, as well. Just like us. I don’t mean artificial neural networks are like our brain. I mean I just like using them as a mirror to ourselves. We can only understand the world through the lens of everything that we’ve seen or heard or read before. We are constantly trying to make sense of everything that we experience based on our past experiences. We see things not as they are, but as we are. And that’s what I’m interested in exploring and exposing. Some of my work tries to combine both of these (and other) themes. E.g., my Learning to See series tries to do this: it is both a system for realtime expression - a potential new form of filmmaking and digital puppetry - and, ultimately, a demonstration of this extreme bias. One who has only ever seen thousands of images of the ocean will see the ocean everywhere they look.
As a more distilled version of this perspective, in 2017 I made a Virtual Reality (VR) piece, FIGHT!. It doesn’t use neural networks or anything like that, actually. It uses the technology of VR, but is about as opposite to VR as is possible, I think. In the headset, your eyes are presented with monocularly dissimilar (i.e., very different) images. Your brain is unable to integrate the images into a single cohesive 3D percept, so instead the two rival images fight for attention in your conscious awareness - a phenomenon known as binocular rivalry. In your mind’s eye, you will not see the two images blended; instead, the rival images flicker back and forth as they alternate in dominance. In your conscious experience, your mind will conjure up animated swipes and swirly transitions – which aren’t really there. And this experience is unique and different for everybody, as it depends on your physiology. Everybody is presented with the exact same images, but everybody “sees” something different in their mind. And it’s impossible for me to know or see or ‘empathize’ with what you see. And of course, this is actually always the case, not just in this VR experience, but in daily life, in everything that we experience. We just forget that and assume that everybody experiences the world in the same way we do.
While I’m interested in these themes from a perceptual point of view, the underlying motivation with these kinds of subjective experiences is to expose and investigate cognitive bias and polarization. I come from Turkey, which is currently torn in two over our current president. In the UK, where I’ve been living for 20 years, the Remain/Brexit campaign has also radically split society. There seems to be a trend where people in one camp attribute the other camp’s political views to them being ‘stupid.’ E.g. I’m very much for remaining in the EU, but it disturbs me when I see other ‘remainers’ believe that the only possible explanation that somebody might have to have voted to leave the EU is because they’re either stupid or racist (or both). I can’t see the world in such simple black-and-white terms. I’m sure many (or at least some) leavers have a line of reasoning which may be more intricate than just being ‘stupid’ or ‘racist,’ even if I don’t agree with it. And if we refuse to acknowledge that, we can’t have a discussion, we’ll never be able to reconcile our differences. We’ll be driven further apart, and ultimately things will only get worse.
R: Can you tell a bit more about the PhD you’re currently doing at Goldsmiths, University of London? Is it purely technical?
M: My idea going into the PhD was very ambitious. I wanted to weave together art, neuroscience, physics, information theory, control theory, systems theory, perception, philosophy, anthropology, politics, religion, etc., but that turned out to be a bit too much, at least for a first PhD. Now it has narrowed down to something more technical. And like I mentioned before, for the past few decades I have been trying to create systems that enhance the human experience, particularly of creative expression. What I’m interested in are realtime, interactive, closed feedback loops with continuous control.
This is also how we sense the world. E.g., our eyes are constantly scanning, receiving signals, moving, receiving signals, moving. And the brain integrates all of that information, and that’s how we perceive and understand the world. This is also how we embody our tools and instruments, through action-perception loops. This is how we can embody something like a bicycle or a car, or from a creative self-expression point of view, it’s how we embody something like a piano: we hit a key, hear a note, feel it and respond to it. Eventually, we get to a stage where we don’t think about what we’re playing, we just feel it, it becomes an extension of the body, and the act of playing becomes an emotional act in itself. I don’t feel a tool like Photoshop has that level of immediacy or emotional engagement, once you click on the menu dropdown, etc…
I am looking to use deep learning in that context, to achieve meaningful, expressive continuous control. The way generative deep learning mostly works right now is, for example, you run training code on a big set of images, then you run the generation code, and it generates images. It’s like a black box where you can only press one button: ‘generate something.’ Of course, there are some levels of control you could have. You can control the training data you feed it, you can pick an image and tell the code to create similar images. And in recent years, there have been more ways of controlling the algorithm. But very few of these methods are immediate, realtime closed feedback loops with continuous control. This is both a computational challenge and a system design challenge, as current systems are simply not built with this in mind (though it is a growing field, so that’s very exciting).
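[Ed.: an illustrative sketch of the difference between the one-button black box and continuous control. The “generator” here is a hypothetical stand-in - a fixed random linear map, not a real trained network - but the interface, latent vector in, image out, is the same:]

```python
import numpy as np

# Hypothetical stand-in for a trained generator: decodes a 16-dim latent
# vector z into an 'image' (here just an 8x8 array) via a fixed random
# linear map. A real generative network is far more complex, but the
# interface is the same.
rng = np.random.default_rng(0)
W = rng.standard_normal((64, 16))

def generate(z):
    """Map a 16-dim latent vector to an 8x8 'image'."""
    return (W @ z).reshape(8, 8)

# The usual black-box usage: one button, 'generate something'.
image = generate(rng.standard_normal(16))

# Continuous control: a slider or sensor value t in [0, 1] interpolates
# smoothly between two latent vectors, so small input changes yield small,
# predictable output changes - the realtime feedback-loop quality described
# in the interview.
z_a = rng.standard_normal(16)
z_b = rng.standard_normal(16)

def generate_controlled(t):
    z = (1 - t) * z_a + t * z_b  # linear interpolation in latent space
    return generate(z)

frame = generate_controlled(0.25)
```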
R: We’ve talked a lot about machine learning, how about we flip that on its head: can machines teach us something?
M: Yes, definitely! We can look at today through an anthropological timescale: what’s happening in 2018 is not disconnected from what happened 100 or 10,000 years ago. When Galileo took a lens and made a telescope to look at the stars, he literally allowed us to look at the world in a whole new light. We cannot be the same after that. Well, that would have worked better if the Church hadn’t stepped in. If we do not use technology to see things differently, we are wasting it.
Take word embeddings, for example [a set of techniques that maps words and phrases to vectors of real numbers]. There’s a well-known model trained on three billion words of Google News. The program does not know anything to begin with, it doesn’t know what a verb is, it has no idea of grammar, but it eventually creates semantic associations. So it learned about gender, for example, and you can run mathematical operations on words like king – man + woman => queen. It’s learnt about the prejudices and biases encoded in three billion words of news, a reflection of society. Who knows what else is in that model. I wrote a few Twitter bots to explore that space, actually: @wordofmath and @wordofmathbias.
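[Ed.: the king − man + woman arithmetic works by nearest-neighbour search in the embedding space. A toy sketch with hand-made 3-dimensional vectors - purely illustrative, not the real 300-dimensional word2vec model, whose vectors are learned from data rather than designed:]

```python
import numpy as np

# Toy embeddings whose axes loosely encode (royalty, maleness, person-ness).
# Real embeddings are learned from billions of words of text.
vecs = {
    "king":   np.array([0.9, 0.9, 0.8]),
    "queen":  np.array([0.9, 0.1, 0.8]),
    "man":    np.array([0.1, 0.9, 0.9]),
    "woman":  np.array([0.1, 0.1, 0.9]),
    "castle": np.array([0.7, 0.5, 0.2]),
    "apple":  np.array([0.0, 0.5, 0.1]),
}

def cosine(a, b):
    """Cosine similarity between two vectors."""
    return (a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))

# king - man + woman ...
target = vecs["king"] - vecs["man"] + vecs["woman"]

# ... is closest (excluding the query words, as word2vec tools do) to 'queen'.
query = {"king", "man", "woman"}
best = max((w for w in vecs if w not in query),
           key=lambda w: cosine(vecs[w], target))
print(best)  # queen
```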
But even Google autocomplete is a really powerful way of looking at what our collective consciousness is thinking or feeling. I wrote a poem about this in 2014. It’s a collaboration with Google (the search engine, not people working at Google), the keeper of our collective consciousness. And actually it’s more a collection of prayers.
A very powerful project in this realm I really like is by Hayden Anyasi. He was disturbed by the way newspapers selected images to accompany news stories, so he created an installation that takes a picture of your face and then creates a news story about you, based on the data it was trained on: a large dataset of newspaper articles. So if you’re an attractive young white woman, the story generated might be about winning some contest or something. If you’re a young black man, the story is more likely to be about crime. Some people might think that this just reflects reality, but unfortunately, that expectation is exactly the problem, as there are situations where images have been selected to accompany stories not because they are related to the story, but simply because that’s what the expectation was. In Hayden’s own words: “A young man's face was used as the lead image in a story about a spate of crimes. Despite being cleared of any involvement, his picture was later used again anyway. Did his face meet the standard expectations of what a criminal should look like?” It’s easy to dismiss these things when you’re not affected, but when you see it like this, this kind of art punches you.
R: Speaking of scary things, there’s a lot of anxiety around technology these days. Would you say you’re a techno-optimist?
M: I’m definitely not very optimistic. I’m not worried about the singularity or the ‘intelligence explosion’ or robots taking over. To me, that seems more like a marketing trick that’s good for business, to sell books, and to get funding from people who are so rich that the only thing which scares them are so-called ‘existential risks’ which will affect all of humanity, even people as rich and powerful as themselves. On a related note though, autonomous weapons are indeed a major genuine concern, and algorithmic decision-making systems are already in use and proving to be hugely problematic. I do believe algorithms could have the potential to be less prejudiced and fairer than humans on average, but they have to be thoroughly regulated, open source, open data, and preferably developed by non-profit organizations who are doing it only because they believe they can develop fairer systems which will be beneficial to everybody. And by ‘they’ I am referring to not just computer scientists, but a diverse team of experts across many disciplines, backgrounds, and life experiences who collectively have a much greater chance of thinking about and foreseeing the wider impact of these systems once deployed. Closed-source, closed-data systems developed by for-profit companies which are not well regulated are an absolute recipe for disaster.
But I worry more about “unknown unknowns” that can come out of nowhere and have a huge impact. Here’s a dystopia for you: what if, in the future, the link between genotype and phenotype [how a particular trait is coded in our DNA and expressed through environmental conditions] was mastered (it is something that is being heavily researched right now)? And imagine that combined with CRISPR (or its successor), there was a service which allowed you to boost your baby’s IQ to 300+. And imagine that this service was incredibly expensive, something which only a select few could afford. What kind of world would that be? I don’t necessarily believe that this exact scenario will happen, but I’m sure we will face similar situations.
On the other hand, if we are ever to cure Alzheimer’s or leukemia, it will undoubtedly be with the help of similar data-driven methods. Even the recent discovery of gravitational waves produced by colliding neutron stars is a massive undertaking in data analysis and extracting information (the detection of a tiny blip of signal) in a massive sea of background noise. Machine learning encompasses the act of extracting meaningful information from data, and so any breakthrough in machine learning will impact any field which is data-driven. And in this day and age, everything is data-driven: physics, chemistry, biology, genetics, neuroscience, psychology, economics, and even politics. So it’s impossible to predict the unknown unknowns. Who knows, maybe someday we’ll be able to photosynthesize!
But I do have a streak of optimism. However, what I'm optimistic about is not technology, but us, and a potential shift in values. If we look at the overall evolutionary arc of human morals going back thousands of years, it seems there is a trend towards expanding our circle of compassion to be more inclusive. We used to live in small tribes, and neighboring tribes would be at war. We've now expanded those tribes to the size of countries. This is still far from perfect, especially with the current rise of nationalism, but the overarching long-term trend is a positive one, if it carries on in the same direction (and that is a big open ‘if’). We've now legally recognized that half of the population - women - are the equal of men and deserve the same rights, whether it be for voting, working, healthcare, education, etc. It’s quite shocking that this has only happened so recently, in the last hundred years or so. And so the effects have unfortunately not yet fully permeated into our culture and day-to-day lives, but I think it's inevitable that it will happen. Likewise, we’ve abolished slavery, we legally recognize all humans to be equal. Again, unfortunately, this has happened shockingly recently, so we are absolutely nowhere near being at a level where the day-to-day practice is satisfactory. But again, hopefully, the overall long-term trend is moving in a desirable direction. And this last century has even seen massive efforts to include non-human animals in our circle of compassion, whether it be vegetarianism or veganism or animal rights, in general.
So while I’m not overly optimistic, the only glimmer of hope I can see for the future is not any particular technology saving us, but a gradual shift in values towards prioritizing the well-being of all living things, as opposed to just a select few at the massive expense of others. The big open question, apart from whether this will happen at all, is how soon it will happen. And how much damage we will have inflicted before we realize what we’ve done.
R: Thanks a lot for our chat! To wrap up, do you have any reading recommendations to dig deeper into machine learning art?
M: A few years ago I collated a list of resources which I had used to get up to speed.
At the time, there weren’t many introductory or beginner-friendly materials. It was more academic books and full-on online university courses. But in the past few years, as deep learning became really popular, loads of new ‘beginner-friendly’ materials came online. So this is probably quite out of date, but for those willing to invest time, I’m sure a lot of this will help build a strong foundation.
But since collating that list, a fantastic resource that is now available is Gene Kogan’s Machine Learning for Artists. It’s full of amazingly useful, beginner-friendly info. And another resource which I have not personally used, but I’ve heard very good things about, is Fast.ai.