Similarity works perfectly well for recommendation engines. But when you ask algorithms to create art without a pure measure of quality, that’s where things start to get interesting. Can an algorithm be creative if its only sense of art is what happened in the past?
Good artists borrow; great artists steal – Pablo Picasso
In October 1997, an audience arrived at the University of Oregon to be treated to a rather unusual concert. A lone piano sat on the stage at the front. Then the pianist Winifred Kerner took her place at the keys, poised to play three short separate pieces.
One was a lesser-known keyboard composition penned by the master of the baroque, Johann Sebastian Bach. A second was composed in the style of Bach by Steve Larson, a professor of music at the university. And a third was composed by an algorithm, deliberately designed to imitate the style of Bach.
After hearing the three performances, the audience were asked to guess which was which. To Steve Larson’s dismay, the majority voted that his was the piece that had been composed by the computer. And to collective gasps of delighted horror, the audience were told the music they’d voted as genuine Bach was nothing more than the work of a machine.
Larson wasn’t happy. In an interview with the New York Times soon after the experiment, he said: ‘My admiration for [Bach’s] music is deep and cosmic. That people could be duped by a computer program was very disconcerting.’
He wasn’t alone in his discomfort. David Cope, the man who created the remarkable algorithm behind the computer composition, had seen this reaction before. ‘I [first] played what I called the “game” with individuals,’ he told me. ‘And when they got it wrong they got angry. They were mad enough at me for just bringing up the whole concept. Because creativity is considered a human endeavour.’19
This had certainly been the opinion of Douglas Hofstadter, the cognitive scientist and author who had organized the concert in the first place. A few years earlier, in his 1979 Pulitzer Prize-winning book Gödel, Escher, Bach, Hofstadter had taken a firm stance on the matter:
Music is a language of emotions, and until programs have emotions as complex as ours, there is no way a program will write anything beautiful … To think that we might be able to command a pre-programmed ‘music box’ to bring forth pieces which Bach might have written is a grotesque and shameful mis-estimation of the depth of the human spirit.20
But after hearing the output of Cope’s algorithm – the so-called ‘Experiments in Musical Intelligence’ (EMI) – Hofstadter conceded that perhaps things weren’t quite so straightforward: ‘I find myself baffled and troubled by EMI,’ he confessed in the days following the University of Oregon experiment. ‘The only comfort I could take at this point comes from realizing that EMI doesn’t generate style on its own. It depends on mimicking prior composers. But that is still not all that much comfort. To my absolute devastation [perhaps] music is much less than I ever thought it was.’21
So which is it? Is aesthetic excellence the sole preserve of human endeavour? Or can an algorithm create art? And if an audience couldn’t distinguish EMI’s music from that of a great master, had this machine demonstrated the capacity for true creativity?
Let’s try and tackle those questions in turn, starting with the last one. To form an educated opinion, it’s worth pausing briefly to understand how the algorithm works.fn3 Something David Cope was generous enough to explain to me.
The first step in building the algorithm was to translate Bach’s music into something that can be understood by a machine: ‘You have to place into a database five representations of a single note: the on time, the duration, pitch, loudness and instrument.’ For each note in Bach’s back catalogue, Cope had to painstakingly enter these five numbers into a computer by hand. There were 371 Bach chorales alone, many harmonies, tens of thousands of notes, five numbers per note. It required a monumental effort from Cope: ‘For months, all I was doing every day was typing in numbers. But I’m a person who is nothing but obsessive.’
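To make that representation concrete, here is a minimal sketch in Python of the kind of five-number note record Cope describes. The field names and example values are my own illustration, not Cope’s actual database layout:

```python
from dataclasses import dataclass

@dataclass
class Note:
    """One note, encoded as the five values Cope describes.

    The field names are illustrative; Cope's own database layout
    is not documented here.
    """
    on_time: float    # when the note starts, in beats from the beginning
    duration: float   # how long it sounds, in beats
    pitch: int        # e.g. a MIDI-style pitch number (60 = middle C)
    loudness: int     # dynamic level, e.g. 0-127
    instrument: int   # which voice or instrument plays the note

# A fragment of a chorale is then just a list of such records:
fragment = [
    Note(on_time=0.0, duration=1.0, pitch=60, loudness=80, instrument=1),
    Note(on_time=0.0, duration=1.0, pitch=64, loudness=75, instrument=2),
    Note(on_time=1.0, duration=1.0, pitch=62, loudness=80, instrument=1),
]
```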
From there, Cope’s analysis took each beat in Bach’s music and examined what happened next. For every note that is played in a Bach chorale, Cope made a record of the next note. He stored everything together in a kind of dictionary – a bank in which the algorithm could look up a single chord and find an exhaustive list of all the different places Bach’s quill had sent the music next.
In that sense, EMI has some similarities to the predictive text algorithms you’ll find on your smartphone. Based on the sentences you’ve written in the past, the phone keeps a dictionary of the words you’re likely to want to type next and brings them up as suggestions as you’re writing.fn4
The final step was to let the machine loose. Cope would seed the system with an initial chord and instruct the algorithm to look it up in the dictionary to decide what to play next, by selecting the new chord at random from the list. Then the algorithm repeats the process – looking up each subsequent chord in the dictionary to choose the next notes to play. The result was an entirely original composition that sounds just like Bach himself.fn5
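In spirit, this dictionary-and-lookup step resembles a simple Markov chain over chords. The sketch below is an illustration under that assumption rather than Cope’s own code: it records which chord follows each chord in a small toy corpus, then generates a new sequence by repeatedly picking a successor at random:

```python
import random
from collections import defaultdict

def build_dictionary(pieces):
    """Map each chord to the list of chords that follow it in the corpus.

    `pieces` is a list of chord sequences; a chord is represented here
    as a tuple of pitches, e.g. (60, 64, 67). Repeats are kept on purpose,
    so common continuations are chosen more often.
    """
    successors = defaultdict(list)
    for piece in pieces:
        for current, nxt in zip(piece, piece[1:]):
            successors[current].append(nxt)
    return successors

def generate(successors, seed_chord, length=32):
    """Random-walk through the dictionary, starting from a seed chord."""
    sequence = [seed_chord]
    chord = seed_chord
    for _ in range(length - 1):
        options = successors.get(chord)
        if not options:          # dead end: no recorded continuation
            break
        chord = random.choice(options)
        sequence.append(chord)
    return sequence

# Toy corpus of two 'pieces', each a list of chords (tuples of pitches):
corpus = [
    [(60, 64, 67), (62, 65, 69), (60, 64, 67), (59, 62, 67)],
    [(60, 64, 67), (59, 62, 67), (60, 64, 67)],
]
dictionary = build_dictionary(corpus)
print(generate(dictionary, seed_chord=(60, 64, 67), length=8))
```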
Or maybe it is Bach himself. That’s Cope’s view, anyway. ‘Bach created all of the chords. It’s like taking Parmesan cheese and putting it through the grater, and then trying to put it back together again. It would still turn out to be Parmesan cheese.’
Regardless of who deserves the ultimate credit, there’s one thing that is in no doubt. However beautiful EMI’s music may sound, it is based on a pure recombination of existing work. It’s mimicking the patterns found in Bach’s music, rather than actually composing any music itself.
More recently, other algorithms have been created that make aesthetically pleasing music that is a step on from pure recombination. One particularly successful approach has been genetic algorithms – another type of machine learning, which tries to exploit the way natural selection works. After all, if peacocks are anything to go by, evolution knows a thing or two about creating beauty.
The idea is simple. Within these algorithms, notes are treated like the DNA of music. It all starts with an initial population of ‘songs’ – each a random jumble of notes stitched together. Over many generations, the algorithm breeds from the songs, finding and rewarding ‘beautiful’ features within the music to breed ‘better’ and better compositions as time goes on. I say ‘beautiful’ and ‘better’, but – of course – as we already know, there’s no way to decide what either of those words means definitively. The algorithm can create poems and paintings as well as music, but – still – all it has to go on is a measure of similarity to whatever has gone before.
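As a rough illustration of the evolutionary idea, here is a toy genetic algorithm in Python. The fitness function stands in for that measure of similarity to whatever has gone before; the reference melody, population size and mutation rate are all assumptions made purely for the sake of the example:

```python
import random

REFERENCE = [60, 62, 64, 65, 67, 69, 71, 72]   # a 'past' melody to imitate (a C major scale)
PITCH_RANGE = range(48, 84)

def random_song(length=8):
    return [random.choice(PITCH_RANGE) for _ in range(length)]

def fitness(song):
    """Reward similarity to the reference melody: closer pitches score higher."""
    return -sum(abs(a - b) for a, b in zip(song, REFERENCE))

def crossover(parent_a, parent_b):
    """Splice two parent songs together at a random cut point."""
    cut = random.randrange(1, len(parent_a))
    return parent_a[:cut] + parent_b[cut:]

def mutate(song, rate=0.1):
    """Occasionally replace a note with a random one."""
    return [random.choice(PITCH_RANGE) if random.random() < rate else note
            for note in song]

def evolve(generations=200, population_size=50):
    population = [random_song() for _ in range(population_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        parents = population[: population_size // 2]        # keep the fittest half
        children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                    for _ in range(population_size - len(parents))]
        population = parents + children
    return max(population, key=fitness)

print(evolve())
```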
And sometimes that’s all you need. If you’re looking for a background track for your website or your YouTube video that sounds generically like a folk song, you don’t care that it’s similar to all the best folk songs of the past. Really, you just want something that avoids copyright infringement without the hassle of having to compose it yourself. And if that’s what you’re after, there are a number of companies that can help. British startups Jukedeck and AI Music are already offering this kind of service, using algorithms that are capable of creating music. Some of that music will be useful. Some of it will be (sort of) original. Some of it will be beautiful, even. The algorithms are undoubtedly great imitators, just not very good innovators.
That’s not to do these algorithms a disservice. Most human-made music isn’t particularly innovative either. If you ask Armand Leroi, the evolutionary biologist who studied the cultural evolution of pop music, we’re a bit too misty-eyed about the inventive capacities of humans. Even the stand-out successes in the charts, he says, could be generated by a machine. Here’s his take on Pharrell Williams’ ‘Happy’, for example (something tells me he’s not a fan):
‘Happy, happy, happy, I’m so happy.’ I mean, really! It’s got about, like, five words in the lyrics. It’s about as robotic a song as you could possibly get, which panders to just the most base human desire for uplifting summer happy music. The most moronic and reductive song possible. And if that’s the level – well, it’s not too hard.
Leroi doesn’t think much of the lyrical prowess of Adele either: ‘If you were to analyse any of the songs you would find no sentiment in there that couldn’t be created by a sad song generator.’
You may not agree (I’m not sure I do), but there is certainly an argument that much of human creativity – like the products of the ‘composing’ algorithms – is just a novel combination of pre-existing ideas. As Mark Twain says:
There is no such thing as a new idea. It is impossible. We simply take a lot of old ideas and put them into a sort of mental kaleidoscope. We give them a turn and they make new and curious combinations. We keep on turning and making new combinations indefinitely; but they are the same old pieces of colored glass that have been in use through all the ages.22
Cope, meanwhile, has a very simple definition for creativity, which easily encapsulates what the algorithms can do: ‘Creativity is just finding an association between two things which ordinarily would not seem related.’
Perhaps. But I can’t help feeling that if EMI and algorithms like it are exhibiting creativity, then it’s a rather feeble form. Their music might be beautiful, but it is not profound. And try as I might, I can’t quite shake the feeling that seeing the output of these machines as art leaves us with a rather culturally impoverished view of the world. It’s cultural comfort food, maybe. But not art with a capital A.
In researching this chapter, I’ve come to realize that the source of my discomfort about algorithms making art lies in a different question. The real issue is not whether machines can be creative. They can. It is about what counts as art in the first place.
I’m a mathematician. I can trade in facts about false positives and absolute truths about accuracy and statistics with complete confidence. But in the artistic sphere I’d prefer to defer to Leo Tolstoy. Like him, I think that true art is about human connection; about communicating emotion. As he put it: ‘Art is not a handicraft, it is the transmission of feeling the artist has experienced.’23 If you agree with Tolstoy’s argument then there’s a reason why machines can’t produce true art. A reason expressed beautifully by Douglas Hofstadter, years before he encountered EMI:
A ‘program’ which could produce music … would have to wander around the world on its own, fighting its way through the maze of life and feeling every moment of it. It would have to understand the joy and loneliness of a chilly night wind, the longing for a cherished hand, the inaccessibility of a distant town, the heartbreak and regeneration after a human death. It would have to have known resignation and world-weariness, grief and despair, determination and victory, piety and awe. It would have had to commingle such opposites as hope and fear, anguish and jubilation, serenity and suspense. Part and parcel of it would have to be a sense of grace, humour, rhythm, a sense of the unexpected – and of course an exquisite awareness of the magic of fresh creation. Therein, and only therein, lie the sources of meaning in music.24
I might well be wrong here. Perhaps if algorithmic art takes on the appearance of being a genuine human creation – as EMI did – we’ll still value it, and bring our own meaning to it. After all, the long history of manufactured pop music seems to hint that humans can form an emotional reaction to something that has no more than the semblance of an authentic connection. And perhaps once these algorithmic artworks become more commonplace and we become aware that the art didn’t come from a human, we won’t be bothered by the one-way connection. After all, people form emotional relationships with objects that don’t love them back – like treasured childhood teddy bears or pet spiders.
But for me, true art can’t be created by accident. There are boundaries to the reach of algorithms. Limits to what can be quantified. Among all of the staggeringly impressive, mind-boggling things that data and statistics can tell me, how it feels to be human isn’t one of them.
Conclusion
RAHINAH IBRAHIM WAS an architect with four children, a husband who lived overseas, a job volunteering at a local hospital and a PhD at Stanford to complete. As if her life wasn’t busy enough, she had also just undergone an emergency hysterectomy and – although she was pretty much back on her feet by now – was still struggling with standing unaided for any length of time without medication. None the less, when the 38th annual International Conference on System Sciences rolled around in January 2005 she booked her flights to Hawaii and organized herself to present her latest paper to her academic peers.1
When Ibrahim arrived at San Francisco airport with her daughter, first thing on the morning of 2 January 2005, she approached the counter, handed over her documents and asked the staff if they could help her source some wheelchair assistance. They did not oblige. Her name flashed up on the computer screen as belonging to the federal no-fly list – a database set up after 9/11 to prevent suspected terrorists from travelling.
Ibrahim’s teenage daughter, left alone and distraught by the desk, called a family friend saying they’d marched her mother away in handcuffs. Ibrahim, meanwhile, was put into the back of a police car and taken to the station. They searched beneath her hijab, refused her medication and locked her in a cell. Two hours later a Homeland Security agent arrived with release papers and told her she had been taken off the list. Ibrahim made it to her conference in Hawaii and then flew on to her native Malaysia to visit family.
Ibrahim had been put on the no-fly list when an FBI agent ticked the wrong box on a form. It might be that the mistake was down to a mix-up between Jemaah Islamiyah, a terrorist organization notorious for the Bali bombings of 2002, and Jemaah Islam, a professional Malaysian organization for people who study abroad. Ibrahim was a member of the latter, but had never had any connection with the former. It was a simple mistake, but one with dramatic consequences. As soon as the error had made its way into the automated system, it had taken on an aura of authority that made it all but immune to appeal. The encounter at San Francisco wasn’t the end of the story.
On the return leg of her journey two months later, while flying home to the United States from Malaysia, Ibrahim was again stopped at the airport. This time, the resolution did not come so quickly. Her visa had been revoked on the grounds of suspected connections to terrorism. Although she was the mother of an American citizen, had her home in San Francisco and held a role at one of the country’s most prestigious universities, Ibrahim was not allowed to return to the United States. In the end, it would take almost a decade of fighting to win the case to clear her name. Almost a decade during which she was forbidden to set foot on American soil. And all because of one human error, and a machine with an omnipotent authority.
Human plus machine
There’s no doubting the profound positive impact that automation has had on all of our lives. The algorithms we’ve built to date boast a bewilderingly impressive list of accomplishments. They can help us diagnose breast cancer, catch serial killers and avoid plane crashes; give each of us free and easy access to the full wealth of human knowledge at our fingertips; and connect people across the globe instantly in a way that our ancestors could only have dreamed of. But in our urge to automate, in our hurry to solve many of the world’s issues, we seem to have swapped one problem for another. The algorithms – useful and impressive as they are – have left us with a tangle of complications to unpick.
Everywhere you look – in the judicial system, in healthcare, in policing, even online shopping – there are problems with privacy, bias, error, accountability and transparency that aren’t going to go away easily. Just by virtue of some algorithms existing, we face issues of fairness that cut to the core of who we are as humans, what we want our society to look like, and how far we can cope with the impending authority of dispassionate technology.
But maybe that’s precisely the point. Perhaps thinking of algorithms as some kind of authority is exactly where we’re going wrong.
For one thing, our reluctance to question the power of an algorithm has opened the door to people who wish to exploit us. In researching this book, I have come across all manner of snake-oil salesmen willing to trade on myths and profit from our gullibility. Despite the weight of scientific evidence to the contrary, there are people selling algorithms to police forces and governments that claim to ‘predict’ whether someone is a terrorist or a paedophile based on the characteristics of their face alone. Others insist their algorithm can suggest changes to a single line in a screenplay that will make a movie more profitable at the box office.fn1 Others boldly state – without even a hint of sarcasm – that their algorithm is capable of finding your one true love.fn2
But even the algorithms that live up to their claims often misuse their authority. This book is packed full of stories of the harm that algorithms can do. The ‘budget tool’ used to arbitrarily cut financial assistance to disabled residents of Idaho. The recidivism algorithms that, thanks to historical data, are more likely to suggest a higher risk score for black defendants. The kidney injury detection system that forces millions of people to give up their most personal and private data without their consent or knowledge. The supermarket algorithm that robs a teenage girl of the chance to tell her father that she’s fallen pregnant. The Strategic Subject List that was intended to help victims of gun crime, but was used by police as a hit list. Examples of unfairness are everywhere.
And yet, pointing out the flaws in the algorithms risks implying that there is a perfect alternative we’re aiming for. I’ve thought long and hard and I’ve struggled to find a single example of a perfectly fair algorithm. Even the ones that look good on the surface – like autopilot in planes or neural networks that diagnose cancer – have problems deep down. As you’ll have read in the ‘Cars’ chapter, autopilot can put those who trained under automation at a serious disadvantage behind the wheel or the joystick. There are even concerns that the apparently miraculous tumour-finding algorithms we looked at in the ‘Medicine’ chapter don’t work as well on all ethnic groups. But examples of perfectly fair, just systems aren’t exactly abundant when algorithms aren’t involved either. Wherever you look, in whatever sphere you examine, if you delve deep enough into any system at all, you’ll find some kind of bias.