A happy bowl of soup made of wool. An astronaut riding a horse in space. Teddy bears shopping for groceries in ancient Egypt. These are not the products of the imagination of an artist on magic mushrooms but the creations of DALL-E, a text-to-image neural network; without that knowledge, however, no one could tell the difference. What does this mean for the world of art?
Is it just another tool in the hands of artists, similar to the moment when photographs appeared next to canvas paintings? Or will it go beyond that and render artists obsolete? How can art use new technology and still preserve its function as a critical insight into the hidden workings of society? Will art remain independent enough to critique AI and the world it creates?
Many questions with no immediate, unequivocal answers. Even defining AI art as the practice of employing code, data, and machine learning systems to generate artworks is a bit simplistic. Ted Loos of the New York Times believes that AI is not only a tool for artists, who are employing machine intelligence in fascinating ways, but also frequently a subject to be examined - sometimes in the same piece. Before we explore the critical and darker side of AI, let’s look at some examples of what he means by technology’s latest revolutionary tool: algorithms that can learn, predict, and connect data in unexpected ways.
Artificial intelligence in art encompasses a wide range of tools and areas. It means not only OpenAI’s latest creation for generating images from text, but also various tools for digital artists, such as smart cropping features that automatically recognize the subject of a photo, or automatic image tagging that helps people find stock photos faster. Auto-coloring tools designed for comics and animation could likewise speed up and automate the process of drawing and animating stories.
Some studios have gone a step further and have already begun investing in their own auto-coloring research, including OLM, the production studio behind the Pokémon anime. Similar AI colorization tools have been around for a few years, including PaintsChainer, a browser-based tool from the Japanese AI startup Preferred Networks, and Style2Paints, a web-based bot created by a team of research students from the Chinese University of Hong Kong and Soochow University.
The next level of employing artificial intelligence in art is involving algorithms and software in the creation of new artistic meanings, or letting these systems produce results of their own - based on thousands of images or sound clips. In recent years, a wide range of AI-generated artworks has flooded the online space - from Lady Gaga’s Poker Face in the style of Mozart, through artistic images imitating Van Gogh’s Starry Night, to animated works generated from real-time speech.
However, many artists believe that algorithms are unable to ‘create’ as humans do; they can only ‘generate’ new images, pieces of music, photographs, movies, or other artistic works based on already existing data points. That is not art but rather the ability to connect data in new and unexpected ways. Some could say that comes close to being creative; however, an algorithm can never reflect on our world and society the way an artist does, its choices are random, and it can never search for meaning behind phenomena.
It could come close to being creative, though, and that is why the art world was so shocked when in 2018, an algorithm-generated portrait, “Edmond de Belamy, from La Famille de Belamy,” went under the hammer at Christie’s and sold for $432,500. Created by French collective Obvious, it was the first-ever AI portrait to go on auction, and its sale spawned headlines variously celebrating and worrying about its implications for art-making.
However, many who fear that AI could carve out a significant space in the art world tend to forget that art is not just about aesthetics; it is about meaning, too. That is where AI does not stand a chance, and that is why some of the most valuable artistic activities are those that critique algorithms, software, and the digital world, and show humans the negative sides, failures, and biases of artificial intelligence.
For example, in 2020, the German performance artist Simon Weckert tricked Google Maps with 99 smartphones, showing how fallible algorithms are. The British artist James Bridle found a brilliant way to “outflank” self-driving cars in another art project. His idea builds on the fact that the software controlling these vehicles first learns the highway code, including the meaning of lines painted on the road: where a continuous line runs alongside a dashed one, a driver on the dashed side may cross, but a driver on the continuous side may not. So suppose a self-driving car meets a circle painted on the road with two lines, where the outer line is dashed and the inner line is continuous.
In this case, the vehicle’s software senses that it may cross the dashed line and enter the circle, but there it gets trapped: the inner line is continuous, which it cannot cross, so it cannot leave. The car will not move out of the circle unless a human arrives to lend a helping hand. Similar “tricks” have already been used by IT specialists and hackers to expose the weaknesses of self-driving cars: in February 2020, for example, the camera systems of several Tesla cars were successfully manipulated with tiny, simple stickers, causing the cars to speed up dangerously.
Another project, ImageNet Roulette, launched by researcher Kate Crawford and artist Trevor Paglen, aimed to draw attention to the potential harm caused by facial recognition and other algorithms that categorize people. In reaction to their work, ImageNet, one of the biggest visual databases in the world, created by researchers at Stanford and Princeton Universities, removed around 600,000 images and reviewed 438 categories identified as “offensive” regardless of context. The images were removed as part of a review of whether ImageNet shows any bias. During the examination, it turned out that the software reproduced the power relationships of gender and race hidden in the data - moreover, it made them visible and exaggerated them.
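The exaggeration effect described above - a skew in the data becoming an even stronger skew in a model’s output - can be illustrated with a toy sketch. The labels and proportions below are invented for illustration and are not drawn from the ImageNet study itself:

```python
from collections import Counter

# Invented toy data: 70% of the training examples for one category
# carry the demographic attribute "A", 30% carry "B".
training_labels = ["A"] * 70 + ["B"] * 30

# A naive model that always predicts the attribute it saw most often.
counts = Counter(training_labels)

def predict():
    return counts.most_common(1)[0][0]

# The 70/30 skew in the data becomes a 100/0 skew in the predictions:
# the bias hidden in the data is not just reproduced but exaggerated.
predictions = [predict() for _ in range(100)]
print(Counter(predictions))  # prints Counter({'A': 100})
```

A real classifier is far less crude, but the mechanism is the same: optimizing for accuracy on skewed data rewards leaning toward the majority pattern, which is how imbalances hidden in the data surface as amplified stereotypes.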
This is not the only example of bias found in algorithms, and this is how the development of artificial intelligence has become the latest battlefield for social justice. It makes overseeing AI, exploring its impact on marginalized social groups, and alleviating its negative consequences essential.
Countless media channels proclaim that AI-powered software performs better than humans, whether in medicine or education, that it “works” almost without errors, and that it increases efficiency while decreasing costs. This techno-optimistic rhetoric tends to overlook the fact that algorithms are created by humans, using data that tells human stories and is collected by humans. Thus every touchpoint of the creative and operational process of an algorithm is suffused with social preconceptions as well as ingrained prejudices and biases.
Talented artists and imaginative artworks can uncover exactly these hidden workings of celebrated algorithms, pointing out their weaknesses, exposing social justice issues - and going against the tide of fear about an omnipotent AI. Let me assure you: we are very far from that, and if we continue to have artists like those of today, we will never get there.
Cover photo: OpenAI.com