Artificial intelligence (AI) is revolutionising approaches to some of life’s biggest challenges, from cancer research to driverless cars. It has the potential to turn vast tracts of data into meaningful insights, improving our day-to-day lives.
At first glance, machines appear to be the logical answer to a huge range of questions, with no vested interests, no preconceptions, no egos or opinions. AI can take the sum of human learning and reflect it back to us in new, revealing ways. But what does this way of looking at the world mean when applied to art? Can a machine go beyond the prejudices of its creator, if the data set it works from is biased, or will it simply reflect that back to us?
Computing for Creativity
From the earliest cave paintings, humanity has been attempting to communicate ideas through images. Each brushstroke reflects both a moment in time and the feelings of the painter. Can machines with no emotion ever paint anything meaningful? Is it possible for a machine to have depth?
Professor Jonathan Hare, an expert in machine learning, has been experimenting with getting machines to communicate with each other in innovative ways. Traditionally, this has been done with token sequences. Tokens are basic units of text or code that the AI uses to process and generate language. In effect, tokens have the same function as words in a human language. Now Jonathan has been inspired by art to take a new approach.
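To make the idea of token sequences concrete, here is a minimal sketch of how text can be mapped to integer token IDs, the units a language model actually exchanges. The whitespace splitter and toy vocabulary are illustrative assumptions only; real systems use learned subword tokenisers that are far more sophisticated.

```python
# Toy tokeniser: maps each word to an integer ID, turning text into
# the kind of token sequence that machines traditionally exchange.

def build_vocab(corpus):
    """Assign an integer ID to each unique word in the corpus."""
    words = sorted({w for text in corpus for w in text.lower().split()})
    return {w: i for i, w in enumerate(words)}

def tokenise(text, vocab):
    """Convert text into the token-ID sequence a model would process."""
    return [vocab[w] for w in text.lower().split()]

corpus = ["machines can sketch", "machines can communicate"]
vocab = build_vocab(corpus)
print(vocab)                              # word -> ID mapping
print(tokenise("machines can sketch", vocab))
```

The token IDs carry no meaning a person can read directly, which is precisely the interpretability gap that sketching, as a more self-explaining channel, aims to close.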
Jonathan and his team want to make the communication between machines easier for humans to interpret. “In this project, we propose a direct and potentially self-explaining means of transmitting knowledge: sketching.” Humans communicated through pictures long before writing was developed, leading the team to wonder whether sketching might be a more natural form of communication.
“We called our study Perceptions because we looked at how a machine perceives a photograph at different layers within its neural network, which is a mathematical computer system modelled on the way nerve cells fire in the human brain.”
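The idea that one input looks different at different layers of a network can be shown with a hypothetical, deliberately tiny example. The two-layer feed-forward network below uses fixed, arbitrary weights purely for illustration; the Perceptions project worked with far larger networks over photographs.

```python
import math

# Toy two-layer network: the same input produces a different internal
# representation at each layer, loosely analogous to how the
# Perceptions sketches vary with the layer being visualised.

def layer(inputs, weights):
    """One dense layer with a tanh non-linearity."""
    return [math.tanh(sum(w * x for w, x in zip(row, inputs)))
            for row in weights]

W1 = [[0.5, -0.2], [0.1, 0.9]]   # first-layer weights (arbitrary)
W2 = [[1.0, -1.0], [0.3, 0.3]]   # second-layer weights (arbitrary)

pixel_pair = [0.8, 0.4]          # stand-in for two image features
h1 = layer(pixel_pair, W1)       # the first layer's 'perception'
h2 = layer(h1, W2)               # the second layer's 'perception'
print(h1)
print(h2)
```

Each layer re-describes the input in its own terms, so a sketch drawn from an early layer and one drawn from a deep layer can look very different even for the same photograph.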
The outcome is a series of sketches, based on travel photography, which were displayed in the gallery of the NeurIPS Machine Learning for Creativity and Design workshop. They show that computers, like humans, perceive images differently depending on which parts of their neural networks are engaged to explore them.
Jonathan’s work is fascinating. It suggests computers are not one-dimensional data crunchers. If you feed the same data in, you may get different results out. One computer may see things differently to another. And that, it could be argued, is the very heart of what makes art work.
AI and Inequity
Some hope that artificial intelligence may lead us to a new, brighter future, not shaded by humanity’s biases and preconceptions. The question remains: can a machine that learns about the world from data created by humanity, with all its vested interests, ever see the world in a truly fair way? Researchers at Southampton have been investigating the opportunities and limitations that AI represents for historically underrepresented groups.
Programming machines to ‘see’ is one thing. The content we show them is another. Winchester School of Art has been collaborating with Tate Britain to explore the ways AI could help open up art to marginalised groups. What they found has led to questions about the lack of diverse materials available to artificial intelligence.
During a series of public workshops, members of the LGBTQIA+ community were given the chance to experiment with AI imaging software. By inputting text commands, users could ask the software to take existing artworks and modify them.
“I am interested in the kind of affordances that technology might give, enabling marginalised groups to connect, express themselves and engage,” explained Winchester School of Art’s Professor Ed D’Souza. He sees AI as potentially emancipatory.
However, there are limits to the freedom AI offers when the material it draws from is also limited. AI makes use of huge amounts of data to interpret and respond to human inputs. In essence, the computer is like a child learning from the world, but it can only learn from the world it is shown.
“The Tate Collective Producers, a very diverse group of young people, found it hard to describe some of their cultural experiences through the AI software.” Ed recalled a specific example of a Producer attempting to get the AI to bring in aspects of the Notting Hill Carnival, but it “lacked the understanding of the language she was using and was unable to visualise the diversity of people for her.”
Artificial intelligences will only ever be as representative as the data they have access to. Only by acknowledging that AIs are not created outside of the world’s biases but nestled deeply within it can we work to make the next generation of computers a benefit to society as a whole, rather than an echo chamber for a select few.
Building in Fairness
Right now, machines are just another tool. They can take our ideas and construct them for us on a canvas or a screen. But as Jonathan’s work has shown us, they have the capacity for complexity. They can learn to communicate ideas at different levels. If this is to happen in a meaningful way, we need to work hard to build machines on a platform of equity and fairness.
That job, at least in part, falls on Responsible AI UK (RAIUK), a national consortium headed up by Southampton researchers. Professor Gopal Ramchurn will lead the programme, backed by a £31 million government grant, as it seeks to create a safe and trustworthy future for artificial intelligence.
“All eyes are on the University of Southampton – it’s up to us to lead the UK on future AI development and, one thing is for sure, RAIUK will be at the forefront.”
Read more about the University’s work in AI in the latest issue of Re:action.