The Formula: How Algorithms Solve All Our Problems... and Create More

by Luke Dormehl


  Hume was ahead of his time in various ways. In recent years, a number of organizations around the world have been investigating what is referred to as “Emotional Optimization.” Emotional Optimization relates to the discovery that certain parts of the brain correspond to different emotions. By asking test subjects to wear electroencephalography (EEG) brain caps, neuroscientists can measure the electrical activity that results from ionic current flows within the neurons of the brain. These readings can then be used to uncover the positive and negative reactions experienced by a person as they listen to a piece of music or watch a scene from a film. Through the addition of machine-learning tools, the possibility of discovering which low-level features in art prompt particular emotional responses becomes a reality.

  Looking to the future, the potential of such work is clear. The addition of a feedback loop, for instance, would allow users not simply to have their EEG response to particular works read, but also to dictate the mood they wanted to achieve. Instead of having playlists to match our mood, a person would have the option of entering their desired emotion into a computer, with a customized playlist then generated to provoke that specific response. This may have particular application in the therapeutic world to help treat those suffering from stress or forms of depression. Runners, meanwhile, could have their pulse rates measured by the headphones they’re wearing, with music selected according to whether heart rate rises or falls. Translated to literature, electronic novels could monitor the electrical activity of neurons in the brain while they are being read, leading to algorithms rewriting sections to match the reactions elicited. In the same way that a stand-up comic or live musician subtly alters their performance to fit a particular audience, so too will media increasingly resemble its consumer. The medium might stay the same, but the message will change depending on who is listening.
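
  To make the idea concrete, the hypothetical sketch below shows what such a feedback loop might look like in code. The track names, the single-number "arousal" scale, and the stubbed sensor reading are all invented for illustration; a real system would draw on EEG or heart-rate data and a far richer model of emotional response.

```python
# A minimal sketch of the mood-targeting feedback loop described above.
# All names and the mood scale are hypothetical; a real system would read
# EEG or heart-rate data instead of the stubbed sensor below.
import random

TRACKS = {
    "calm_piano": -0.4,      # predicted shift on an arbitrary -1..+1 arousal scale
    "ambient_drone": -0.2,
    "upbeat_pop": +0.5,
    "drum_and_bass": +0.8,
}

def read_listener_arousal():
    """Stand-in for an EEG or pulse reading, returned on the same -1..+1 scale."""
    return random.uniform(-1.0, 1.0)

def next_track(target, current):
    """Pick the track whose predicted shift moves the listener closest to the target."""
    return min(TRACKS, key=lambda t: abs((current + TRACKS[t]) - target))

def playlist_session(target_arousal, steps=5):
    for _ in range(steps):
        current = read_listener_arousal()
        track = next_track(target_arousal, current)
        print(f"measured {current:+.2f} -> playing {track}")

playlist_session(target_arousal=-0.5)   # listener asks to be brought "down"
```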

  In an article published in the New Statesman, journalist Alexandra Coughlan refers to this idea as “aural pill-popping,” in which Emotional Optimization will mean that there will be “one [music] track to bring us up [and] another to bring us down.”45 This comment demonstrates a belief in functional form—the idea that, as I described earlier in this chapter, it is desirable that art be “made useful” in some way. Coughlan’s suggestion of “aural pill-popping” raises a number of questions—not least whether the value of art is simply as a creative substitute for mind-altering drugs.

  We might feel calm looking at Mark Rothko’s Untitled (Green on Blue) painting, for example, but does this relegate it to the artistic equivalent of Valium? In his book To Save Everything, Click Here, Belarusian technology scholar Evgeny Morozov takes this utilitarian idea to task. Suppose, Morozov says, that Google (selecting one company that has made clear its ambitions to quantify everything) knows that we are not at our happiest after receiving a sad phone call from an ex-girlfriend. If art equals pleasure—and the quickest way to achieve pleasure is to look at a great painting—then Google knows that what we need more than anything for a quick pick-me-up is to see a painting by Impressionist painter Renoir:

  Well, Google doesn’t exactly “know” it; it knows only that you are missing 124 units of “art” and that, according to Google’s own measurement system, Renoir’s paintings happen to average in the 120s. You see the picture and—boom!—your mood stays intact.46

  Morozov continues his line of inquiry by asking the pertinent questions that arise with such a proposition. Would keeping our mood levels stabilized by looking at the paintings of Renoir turn us into a world of art lovers? Would it expand our horizons? Or would such attempts to consume art in the manner of self-help literature only serve to demean artistic endeavors? Still more problems not touched on by Morozov surface with efforts to quantify art as unitary measures of pleasure, in the manner of Sergei Eisenstein’s “attractions.” If we accept that Renoir’s work gives us a happiness boost of, say, 122, while Pablo Picasso’s score languishes at a mere 98, why bother with Picasso’s work at all?

  Similarly, let’s imagine for a moment that the complexity of Beethoven’s 7th Symphony turns out to produce measurably greater neurological highs than Justin Bieber’s hit song “Baby,” thereby giving us the ability to draw a mathematical distinction between the fields of “high” and “low” art. Should this prove to be the case, could we receive the same dosage of artistic nourishment—albeit in a less efficient time frame—by watching multiple episodes of Friends (assuming the sitcom is classified as “low” art) as we could from reading Leo Tolstoy’s War and Peace (supposing that it is classified as “high” art)? Ultimately, presuming that War and Peace is superior to Friends, or that Beethoven is superior to Justin Bieber, simply because they top up our artistic needs at a greater rate of knots, is essentially the same argument as suggesting that James Patterson is a greater novelist than J. M. Coetzee on the basis that data gathered by Kindle shows that Patterson’s Kill Alex Cross can be read in a single afternoon, while Coetzee’s Life & Times of Michael K takes several days, or even weeks. It may look mathematically rigorous, but something doesn’t quite add up.

  The Dehumanization of Art

  All of this brings us ever closer to the inevitable question of whether algorithms will ever be able to generate their own art. Perhaps unsurprisingly, this is a line of inquiry that provokes heated comments on both sides. “It’s only a matter of when it happens—not if,” says Lior Shamir, who built the automated art critic I described earlier. Much as Epagogix’s movie prediction system spots places in a script where the potential yield falls short of where it should be and then makes recommendations accordingly, so Shamir is convinced that in the long term his creation will be able to spot the features great works of art have in common and generate entirely new works from them.

  While this might seem a new concept, it is not. In 1787, Mozart anonymously published what is referred to in German as Musikalisches Würfelspiel (“musical dice game”). His idea was simple: to enable readers to compose German waltzes, “without the least knowledge of music . . . by throwing a certain number with two dice.” Mozart provided 176 bars of music, arranged in 16 columns, with 11 bars to each column. To select the first musical bar, readers would throw two dice and then choose the corresponding bar from the available options. The technique was repeated for the second column, then the third, and so on. The total number of possible compositions was an astonishing 46 quadrillion (11 to the power of 16), with each generated work sounding Mozartian in style.47
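
  The selection procedure is simple enough to sketch in a few lines of code. In the hypothetical example below, Mozart's actual 176 bars are replaced with placeholder labels; only the dice-and-table logic follows the published rules.

```python
# A sketch of the Musikalisches Würfelspiel selection procedure described above.
# The actual 176 bars of Mozart's score are replaced with placeholder labels.
import random

NUM_COLUMNS = 16       # one column per bar of the finished waltz
BARS_PER_COLUMN = 11   # one option for each possible two-dice total (2..12)

# table[column][dice_total - 2] -> a placeholder bar identifier
table = [[f"bar_{col * BARS_PER_COLUMN + row + 1}" for row in range(BARS_PER_COLUMN)]
         for col in range(NUM_COLUMNS)]

def roll_two_dice():
    return random.randint(1, 6) + random.randint(1, 6)

def compose_waltz():
    """Assemble a 16-bar waltz by rolling two dice once per column."""
    return [table[col][roll_two_dice() - 2] for col in range(NUM_COLUMNS)]

print(compose_waltz())
print(f"possible selections: {BARS_PER_COLUMN ** NUM_COLUMNS:,}")   # 11**16, roughly 4.6 x 10**16
```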

  A similar concept—albeit in a different medium—underpins the current work of Celestino Soddu, a contemporary Italian architect and designer who uses what are referred to as “genetic algorithms” to generate endless variations on individual themes. A genetic algorithm replicates evolution inside a computer, adopting the idea that living organisms are the consummate problem solvers and using this to optimize specific solutions. By inputting what he considers to be the “rules” that define, say, a chair or a Baroque cathedral, Soddu is able to use his algorithm to conceptualize what a particular object might look like were it a living entity undergoing thousands of years of natural selection. Because there is (realistically speaking) no limit to the number of results the genetic algorithm can generate, Soddu’s “idea-products” mean that a trendy advertising agency could conceivably fill its offices with hundreds of chairs, each one subtly different, while a company building its new corporate headquarters might generate thousands of separate designs before deciding on one to go ahead with.
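
  For readers curious about the mechanics, the sketch below shows a bare-bones genetic algorithm of the general kind described above. The "chair" genome, the fitness rule, and every parameter are invented for illustration; this is not Soddu's actual system, just the basic evolutionary loop of selection, crossover, and mutation.

```python
# A minimal, generic genetic-algorithm sketch. The "chair" genome, fitness
# rule, and parameters are all invented for illustration.
import random

GENOME_LENGTH = 4          # e.g. seat height, seat depth, back height, leg splay
POPULATION_SIZE = 30
GENERATIONS = 50

def random_genome():
    return [random.uniform(0.0, 1.0) for _ in range(GENOME_LENGTH)]

def fitness(genome):
    """Toy rule: reward designs whose proportions sit near an arbitrary target profile."""
    target = [0.45, 0.5, 0.9, 0.2]
    return -sum((g - t) ** 2 for g, t in zip(genome, target))

def crossover(a, b):
    cut = random.randrange(1, GENOME_LENGTH)
    return a[:cut] + b[cut:]

def mutate(genome, rate=0.1):
    return [g + random.gauss(0, 0.05) if random.random() < rate else g for g in genome]

population = [random_genome() for _ in range(POPULATION_SIZE)]
for _ in range(GENERATIONS):
    population.sort(key=fitness, reverse=True)
    parents = population[: POPULATION_SIZE // 2]   # keep the fitter half
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(POPULATION_SIZE - len(parents))]
    population = parents + children

print("best design:", [round(g, 2) for g in max(population, key=fitness)])
```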

  There are, however, still problems with the concept of creating art by algorithm. Theodor Adorno and Max Horkheimer noted in the 1940s how formulaic art does not offer new experiences, but rather remixed versions of what came before. Instead of the joy of being exposed to something new, Adorno saw mass culture’s reward coming in the form of the smart audience member who “can guess what is coming and feel flattered when it does come.”48 This prescient comment is backed up by algorithms that predict the future by establishing what has worked in the past. An artwork in this sense might achieve a quantifiable perfection, but it will only ever be perfection measured against what has already occurred.

  For instance, Nick Meaney acknowledges that Epagogix would have been unable to predict the huge success of a film like Avatar. The reason: there had been no $2 billion films before to measure it against. This doesn’t mean that Epagogix wouldn’t have realized it had a hit on its hands, of course. “Would we have said that it would earn what it did in the United States? Probably not,” Meaney says. “It would have been flagged up as being off the scale, but because it was off the scale there was nothing to measure it against. The next Avatar, on the other hand? Now there’s something to measure it against.”

  The issue becomes more pressing when it relates to the generating of new art, rather than the measurement of existing works. Because Lior Shamir’s automated art critic algorithm measures works based on 4,024 different numerical descriptors, there is a chance that it might be able to quantify what would comprise the best illustration of, say, pop art and generate an artwork that conforms to all of these criteria. But these criteria are themselves based upon human creativity. Would it be possible for algorithms themselves to move art forward in a meaningful way, rather than simply aping the style of previous works? “At first, no,” Shamir says. “Ultimately, I would be very careful in saying there are things that machines can not do.”

  A better question might be whether we would accept such works if they did appear, knowing that a machine rather than a human artist had created them. For those who see creativity as a profoundly human activity (a relatively new idea, as it happens), the question goes beyond technical ability and touches on something close to the essence of humanity.

  In 2012, the London Symphony Orchestra took to the stage to perform compositions written entirely by a music-generating algorithm called Iamus.49 Iamus is the project of professor and entrepreneur Francisco Vico, whose code has since been used to compose more than one billion songs across a wide range of genres. In the aftermath of Iamus’s concert, a staff writer for the Columbia Spectator named David Ecker put pen to paper (or rather finger to keyboard) to write a polemic taking aim at the new technology. “I use computers for damn near everything, [but] there’s something about this computer that I find deeply troubling,” Ecker wrote.

  I’m not a purist by any stretch. I hate overt music categorization, and I hate most debates about “real” versus “fake” art, but that’s not what this is about. This is about the very essence of humanity. Computers can compete and win at Jeopardy!, beat chess masters, and connect us with people on the other side of the world. When it comes to emotion, however, they lack much of the necessary equipment. We live every day under the pretense that what we do carries a certain weight, partly due to the knowledge of our own mortality, and this always comes through in truly great music. Iamus has neither mortality nor the urgency that comes with it. It can create sounds—some of which may be pleasing—but it can never achieve the emotional complexity and creative innovation of a musician or a composer. One could say that Iamus could be an ideal tool for creating meaningless top-40 tracks, but for me, this too would be troubling. Even the most transient and superficial of pop tracks take root in the human experience, and I believe that even those are worth protecting from Iamus.50

  Perhaps there is still hope for those who dream of an algorithm creating art. However, as Iamus’s Francisco Vico points out: “I received one comment from a woman who admitted that Iamus was a milestone in technology. But she also said that she had to stop listening to it, because it was making her feel things. In some senses we see this as creepy, and I can fully understand that. We are not ready for it. Part of us still thinks that computers are Terminators that want to kill us, or else simple tools that are supposed to help us with processing information. The idea that they can be artists, too, is something unexpected. It’s something new.”

  CONCLUSION

  Predicting the Future

  In 1954, a 34-year-old American psychology professor named Paul E. Meehl published a groundbreaking book with a somewhat unwieldy title. Clinical vs. Statistical Prediction: A Theoretical Analysis and a Review of the Evidence presented 20 case studies in which predictions made by statistical algorithms were compared with clinical predictions made by trained experts.1

  A sample study asked trained counselors to predict the end-of-year grades of first-year students. The counselors were allowed three-quarters of an hour to interview each student, along with access to their previous grades, multiple aptitude tests, and a personal statement that ran four pages in length. The algorithm, meanwhile, required only high school grades and a single aptitude test. In 11 out of 14 cases, the algorithm proved more accurate at predicting students’ finishing grades than did the counselors.
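
  "Statistical prediction" in Meehl's sense means nothing more exotic than a fixed formula fit to past records. The sketch below, with invented numbers, shows the general idea: a linear rule estimated from previous students' outcomes, applied mechanically to a new student using only a high-school GPA and a single aptitude score.

```python
# A sketch of "statistical prediction" in Meehl's sense: a fixed formula fit to
# past records, here using only two inputs. The numbers are invented; only the
# method is the point.
import numpy as np

# past students: (high-school GPA, aptitude score) -> end-of-year GPA
history_X = np.array([[3.1, 52], [2.4, 40], [3.8, 71], [2.9, 60], [3.5, 65], [2.2, 35]])
history_y = np.array([3.0, 2.3, 3.7, 2.8, 3.4, 2.1])

# fit a simple linear rule: predicted GPA = w0 + w1*gpa + w2*aptitude
A = np.column_stack([np.ones(len(history_X)), history_X])
weights, *_ = np.linalg.lstsq(A, history_y, rcond=None)

def predict(gpa, aptitude):
    return weights @ np.array([1.0, gpa, aptitude])

print(f"predicted end-of-year GPA: {predict(3.2, 58):.2f}")
```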

  The same proved true of the book’s other studies, which analyzed topics ranging from parole violation rates (as per Chapter 3’s Richard Berk) to would-be pilots’ success during training. In 19 out of 20 cases—which Meehl later argued should be modified to a clean sweep—the statistical algorithms’ predictions were demonstrably more accurate than those made by the experts, and they almost always required less data to get there. “It is clear,” Meehl concluded, “that the dogmatic, complacent assertion sometimes heard from clinicians that ‘naturally’ clinical predictions, being based on ‘real understanding’ is superior, is simply not justified by the facts to date.”

  Meehl was, perhaps understandably, something of an outsider in academic circles from this point on. His anti-expert stance amounted to suggesting—in the words of a colleague quoted in Meehl’s 2003 New York Times obituary—that “clinicians could be replaced by a clerk with a hand-cranked Monroe calculator.”2 (Meehl’s status as prototypical Internet troll was only enhanced later in his career by the publication of a paper entitled “Why I Do Not Attend Case Conferences,” in which he dismissed such gatherings on the grounds that they were boring to the point of offensiveness.)

  Regardless of his divisive status at the time, Meehl’s views on the predictive power of algorithms have been borne out in the years since. In the roughly 200 similar studies carried out over the following half century, algorithms have triumphed over human intuition around 60 percent of the time. In the remaining 40 percent, the difference between statistical and clinical predictions proved statistically insignificant, which still counts as a tick in the “win” column for the algorithmic approach, since it is almost always cheaper than hiring an expert.

  The Power of Thinking Without Thinking

  Why do algorithms interest us? The first point to make is that many of the computer scientists reading this book are likely the same people who would have picked up a similar book in 1984, or 1964. But not all of us (including this writer) are computer scientists by trade, and the question of how and why a once obscure mathematical concept came to occupy the front pages of major newspapers and other publications occurred to me often while I was carrying out my research.

  In this final chapter, I would like to share some of my thoughts on that question. The obvious answer is that algorithms play a growing role in our lives on a daily basis. Search engines like Google help us to navigate massive databases of information. Recommender systems like those employed by Amazon, meanwhile, map our preferences against those of other people and suggest new bits of culture for us to experience. On social networking sites, algorithms highlight news that is “relevant” to us, and on dating sites like eHarmony they match us up with potential life partners. It is not “cyberbole,” then, to suggest that algorithms represent a crucial force in our participation in public life.
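
  At the risk of over-simplifying, the toy sketch below shows the basic shape of the preference-mapping idea behind such recommender systems: find the other user whose tastes most resemble yours and borrow their favorites. The users, items, and ratings are invented, and real systems (Amazon's included) are vastly more sophisticated.

```python
# A toy sketch of user-based collaborative filtering on an invented ratings
# table. The users, items, and scores are all made up for illustration.
RATINGS = {
    "alice": {"book_a": 5, "book_b": 3, "book_c": 4},
    "bob":   {"book_a": 5, "book_b": 2, "book_d": 5},
    "carol": {"book_b": 4, "book_c": 5, "book_d": 1},
}

def similarity(u, v):
    """Count of shared items two users rated within one point of each other (crude overlap measure)."""
    shared = set(RATINGS[u]) & set(RATINGS[v])
    return sum(1 for item in shared if abs(RATINGS[u][item] - RATINGS[v][item]) <= 1)

def recommend(user):
    """Suggest items the most similar other user liked but this user has not rated."""
    others = [u for u in RATINGS if u != user]
    nearest = max(others, key=lambda u: similarity(user, u))
    unseen = set(RATINGS[nearest]) - set(RATINGS[user])
    return sorted(unseen, key=lambda item: RATINGS[nearest][item], reverse=True)

print(recommend("alice"))   # items a similar reader liked that alice hasn't rated
```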

  They go further than the four main areas I have chosen to look at in this book, too. For instance, algorithmic trading now represents a whopping 70 percent of the U.S. equity market, running on supercomputers that are able to buy and sell millions of shares at practically the speed of light. Algorithmic trading has become a race measured in milliseconds, with billions of dollars dependent on the laying of new fiber-optic cables that will shave just five milliseconds off the communication time between financial markets in London and New York. (To put this in perspective, it takes a human 300 milliseconds to blink.)3

  Medicine, too, has taken an algorithmic turn, as doctors working in hospitals are often asked to rely on algorithms rather than their own clinical judgment. In his book Blink: The Power of Thinking Without Thinking, Malcolm Gladwell recounts the story of Chicago’s Cook County Hospital, which adopted an algorithm for diagnosing chest pain. “They instructed their doctors to gather less information on their patients,” Gladwell writes, explaining how doctors were told to instead zero in “on just a few critical pieces of information about patients . . . like blood pressure and the ECG—while ignoring everything else, like the patient’s age and weight and medical history. And what happened? Cook County is now one of the best places in the United States at diagnosing chest pain.”4 Recent medical algorithms have shown equally impressive results in other areas: one algorithm can diagnose Parkinson’s disease by listening to a person’s voice over the telephone, while another pattern-recognition algorithm can, quite literally, “sniff” out diseases like cancer.
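
  The sketch below gives a flavor of what such a pared-down decision rule looks like: a handful of critical inputs, a few fixed thresholds, and everything else deliberately ignored. The specific factors and cut-offs are illustrative only, not the actual clinical criteria used at Cook County.

```python
# A simplified decision-rule sketch in the spirit of the chest-pain protocol
# Gladwell describes: a few critical inputs, everything else ignored.
# The factors and thresholds below are illustrative, not real clinical criteria.
def triage_chest_pain(ecg_suggests_ischemia: bool,
                      systolic_bp: int,
                      fluid_in_lungs: bool,
                      unstable_angina: bool) -> str:
    risk_factors = sum([systolic_bp < 100, fluid_in_lungs, unstable_angina])
    if ecg_suggests_ischemia and risk_factors >= 2:
        return "coronary care unit"
    if ecg_suggests_ischemia or risk_factors >= 1:
        return "monitored bed"
    return "short-stay observation"

print(triage_chest_pain(ecg_suggests_ischemia=True, systolic_bp=95,
                        fluid_in_lungs=False, unstable_angina=True))
```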

 
