Sapiens and Homo Deus


by Yuval Noah Harari


  Even the managers in charge of all these activities can be replaced. Thanks to its powerful algorithms, Uber can manage millions of taxi drivers with only a handful of humans. Most of the commands are given by the algorithms without any need of human supervision.16 In May 2014 Deep Knowledge Ventures – a Hong Kong venture-capital firm specialising in regenerative medicine – broke new ground by appointing an algorithm named VITAL to its board. VITAL makes investment recommendations by analysing huge amounts of data regarding the financial situation, clinical trials and intellectual property of prospective companies. Like the other five board members, the algorithm gets to vote on whether or not the firm makes an investment in a specific company.

  Examining VITAL’s record so far, it seems that it has already picked up at least one managerial vice: nepotism. It has recommended investing in companies that grant algorithms more authority. For example, with VITAL’s blessing, Deep Knowledge Ventures recently invested in Pathway Pharmaceuticals, which employs an algorithm called OncoFinder to select and rate personalised cancer therapies.17

  As algorithms push humans out of the job market, wealth and power might become concentrated in the hands of the tiny elite that owns the all-powerful algorithms, creating unprecedented social and political inequality. Today millions of taxi drivers, bus drivers and truck drivers have significant economic and political clout, because each commands a small slice of the transportation market. If their collective interests are threatened, they can unionise, go on strike, stage boycotts and create powerful voting blocs. However, once millions of human drivers are replaced by a single algorithm, all that wealth and power will be cornered by the corporation that owns the algorithm, and by the handful of billionaires who own the corporation. Alternatively, the algorithms might themselves become the owners. Human law already recognises intersubjective entities like corporations and nations as ‘legal persons’. Though neither Toyota nor Argentina has a body or a mind, both are subject to international laws, can own land and money, and can sue and be sued in court. We might soon grant similar status to algorithms. An algorithm could then own a transportation empire or a venture-capital fund without having to obey the wishes of any human master.

  If the algorithm makes the right decisions, it could accumulate a fortune, which it could then invest as it sees fit, perhaps buying your house and becoming your landlord. If you infringe on the algorithm’s legal rights – say, by not paying rent – the algorithm could hire lawyers and sue you in court. If such algorithms consistently outperform human capitalists, we might end up with an algorithmic upper class owning most of our planet. This may sound impossible, but before dismissing the idea, remember that most of our planet is already legally owned by non-human intersubjective entities, namely nations and corporations. Indeed, 5,000 years ago much of Sumer was owned by imaginary gods such as Enki and Inanna. If gods can possess land and employ people, why not algorithms?

  So what will people do? Art is often said to provide us with our ultimate (and uniquely human) sanctuary. In a world where computers have replaced doctors, drivers, teachers and even landlords, would everyone become an artist? Yet it is hard to see why artistic creation would be safe from the algorithms. Why are we so confident that computers will never be able to outdo us in the composition of music? According to the life sciences, art is not the product of some enchanted spirit or metaphysical soul, but rather of organic algorithms recognising mathematical patterns. If so, there is no reason why non-organic algorithms couldn’t master it.

  David Cope is a musicology professor at the University of California, Santa Cruz. He is also one of the more controversial figures in the world of classical music. Cope has written computer programs that compose concertos, chorales, symphonies and operas. His first program, EMI (Experiments in Musical Intelligence), specialised in imitating the style of Johann Sebastian Bach. It took seven years to create the program, but once the work was done EMI composed 5,000 chorales à la Bach in a single day. Cope arranged for a performance of a few select chorales at a music festival in Santa Cruz. Enthusiastic members of the audience praised the stirring performance, and explained excitedly how the music had touched their innermost being. They didn’t know that it had been created by EMI rather than Bach, and when the truth was revealed some reacted with glum silence, while others shouted in anger.

  EMI continued to improve and learned to imitate Beethoven, Chopin, Rachmaninov and Stravinsky. Cope got EMI a contract, and its first album – Classical Music Composed by Computer – sold surprisingly well. Publicity brought increasing hostility from classical-music buffs. Professor Steve Larson from the University of Oregon challenged Cope to a musical showdown. Larson suggested that professional pianists play three pieces one after the other: one each by Bach, by EMI, and by Larson himself. The audience would then be asked to vote on who composed which piece. Larson was convinced that people would easily distinguish between soulful human compositions and the lifeless artefact of a machine. Cope accepted the challenge. On the appointed date hundreds of lecturers, students and music fans assembled in the University of Oregon’s concert hall. At the end of the performance, a vote was taken. The result? The audience thought that EMI’s piece was genuine Bach, that Bach’s piece was composed by Larson, and that Larson’s piece was produced by a computer.

  Critics continued to argue that EMI’s music is technically excellent, but that it lacks something. It is too accurate. It has no depth. It has no soul. Yet when people heard EMI’s compositions without being informed of their provenance, they frequently praised them precisely for their soulfulness and emotional resonance.

  Following EMI’s successes Cope created newer and even more sophisticated programs. His crowning achievement was Annie. Whereas EMI composed music according to predetermined rules, Annie is based on machine learning. Its musical style constantly changes and develops in response to new inputs from the outside world. Cope has no idea what Annie is going to compose next. Indeed, Annie does not restrict itself to music composition but also explores other art forms such as haiku poetry. In 2011 Cope published Comes the Fiery Night: 2,000 Haiku by Man and Machine. Some of the haiku were written by Annie, and the rest by organic poets. The book does not disclose which are which. If you think you can tell the difference between human creativity and machine output, you are welcome to test your claim.18

  In the nineteenth century the Industrial Revolution created a huge urban proletariat, and socialism spread because no other creed managed to answer the unprecedented needs, hopes and fears of this new working class. Liberalism eventually defeated socialism only by adopting the best parts of the socialist programme. In the twenty-first century we might witness the creation of a massive new unworking class: people devoid of any economic, political or even artistic value, who contribute nothing to the prosperity, power and glory of society. This ‘useless class’ will not merely be unemployed – it will be unemployable.

  In September 2013 two Oxford researchers, Carl Benedikt Frey and Michael A. Osborne, published ‘The Future of Employment’, in which they surveyed the likelihood of different professions being taken over by computer algorithms within the next twenty years. The algorithm developed by Frey and Osborne to do the calculations estimated that 47 per cent of US jobs are at high risk. For example, there is a 99 per cent probability that by 2033 human telemarketers and insurance underwriters will lose their jobs to algorithms. There is a 98 per cent probability that the same will happen to sports referees, 97 per cent that it will happen to cashiers and 96 per cent to chefs. Waiters – 94 per cent. Paralegal assistants – 94 per cent. Tour guides – 91 per cent. Bakers – 89 per cent. Bus drivers – 89 per cent. Construction labourers – 88 per cent. Veterinary assistants – 86 per cent. Security guards – 84 per cent. Sailors – 83 per cent. Bartenders – 77 per cent. Archivists – 76 per cent. Carpenters – 72 per cent. Lifeguards – 67 per cent. And so forth. There are of course some safe jobs. The likelihood that computer algorithms will displace archaeologists by 2033 is only 0.7 per cent, because their job requires highly sophisticated types of pattern recognition, and doesn’t produce huge profits. Hence it is improbable that corporations or governments will make the necessary investment to automate archaeology within the next twenty years.19

  Of course, by 2033 many new professions are likely to appear, for example, virtual-world designers. But such professions will probably require much more creativity and flexibility than current run-of-the-mill jobs, and it is unclear whether forty-year-old cashiers or insurance agents will be able to reinvent themselves as virtual-world designers (try to imagine a virtual world created by an insurance agent!). And even if they do so, the pace of progress is such that within another decade they might have to reinvent themselves yet again. After all, algorithms might well outperform humans in designing virtual worlds too. The crucial problem isn’t creating new jobs. The crucial problem is creating new jobs that humans perform better than algorithms.20

  Since we do not know what the job market will look like in 2030 or 2040, we have no idea today what to teach our kids. Most of what they currently learn at school will probably be irrelevant by the time they are forty. Traditionally, life has been divided into two main parts: a period of learning followed by a period of working. Very soon this traditional model will become utterly obsolete, and the only way for humans to stay in the game will be to keep learning throughout their lives, and to reinvent themselves repeatedly. Many if not most humans may be unable to do so.

  The coming technological bonanza will probably make it feasible to feed and support these useless masses even without any effort on their part. But what will keep them occupied and content? People must do something, or they go crazy. What will they do all day? One answer might be drugs and computer games. Unnecessary people might spend increasing amounts of time within 3D virtual-reality worlds that would provide them with far more excitement and emotional engagement than the drab reality outside. Yet such a development would deal a mortal blow to the liberal belief in the sacredness of human life and of human experiences. What’s so sacred about useless bums who pass their days devouring artificial experiences in La La Land?

  Some experts and thinkers, such as Nick Bostrom, warn that humankind is unlikely to suffer this degradation, because once artificial intelligence surpasses human intelligence, it might simply exterminate humankind. The AI would likely do so either for fear that humankind would turn against it and try to pull its plug, or in pursuit of some unfathomable goal of its own. For it would be extremely difficult for humans to control the motivation of a system smarter than themselves.

  Even preprogramming the system with seemingly benign goals might backfire horribly. One popular scenario imagines a corporation designing the first artificial super-intelligence and giving it an innocent test such as calculating pi. Before anyone realises what is happening, the AI takes over the planet, eliminates the human race, launches a campaign of conquest to the ends of the galaxy, and transforms the entire known universe into a giant super-computer that for billions upon billions of years calculates pi ever more accurately. After all, this is the divine mission its Creator gave it.21

  A Probability of 87 Per Cent

  At the beginning of this chapter we identified several practical threats to liberalism. The first is that humans might become militarily and economically useless. This is just a possibility, of course, not a prophecy. Technical difficulties or political objections might slow down the algorithmic invasion of the job market. Alternatively, since much of the human mind is still uncharted territory, we don’t really know what hidden talents humans might discover in themselves, and what novel jobs they might create to offset the loss of others. That, however, may not be enough to save liberalism. For liberalism believes not just in the value of human beings – it also believes in individualism. The second threat facing liberalism is that, while the system might still need humans in the future, it will not need individuals. Humans will continue to compose music, teach physics and invest money, but the system will understand these humans better than they understand themselves and will make most of the important decisions for them. The system will thereby deprive individuals of their authority and freedom.

  The liberal belief in individualism is founded on the three important assumptions that we discussed:

  1. I am an in-dividual – that is, I have a single essence that cannot be divided into parts or subsystems. True, this inner core is wrapped in many outer layers. But if I make the effort to peel away these external crusts, I will find deep within myself a clear and single inner voice, which is my authentic self.

  2. My authentic self is completely free.

  3. It follows from the first two assumptions that I can know things about myself nobody else can discover. For only I have access to my inner space of freedom, and only I can hear the whispers of my authentic self. This is why liberalism grants the individual so much authority. I cannot trust anyone else to make choices for me, because no one else can know who I really am, how I feel and what I want. This is why the voter knows best, why the customer is always right and why beauty is in the eye of the beholder.

  However, the life sciences challenge all three assumptions. According to them:

  1. Organisms are algorithms, and humans are not individuals – they are ‘dividuals’. That is, humans are an assemblage of many different algorithms lacking a single inner voice or a single self.

  2. The algorithms constituting a human are not free. They are shaped by genes and environmental pressures, and take decisions either deterministically or randomly – but not freely.

  3. It follows that an external algorithm could theoretically know me much better than I can ever know myself. An algorithm that monitors each of the systems that comprise my body and my brain could know exactly who I am, how I feel and what I want. Once developed, such an algorithm could replace the voter, the customer and the beholder. Then the algorithm will know best, the algorithm will always be right, and beauty will be in the calculations of the algorithm.

  During the nineteenth and twentieth centuries the belief in individualism nevertheless made good practical sense, because there were no external algorithms that could actually monitor me effectively. States and markets may have wished to do exactly that, but they lacked the necessary technology. The KGB and FBI had only a vague understanding of my biochemistry, genome and brain, and even if agents bugged every phone call I made and recorded every chance encounter on the street, they did not have the computing power to analyse all that data. Consequently, given twentieth-century technological conditions, liberals were right to argue that nobody can know me better than I know myself. Humans therefore had a very good reason to regard themselves as an autonomous system and to follow their own inner voices rather than the commands of Big Brother.

  However, twenty-first-century technology may enable external algorithms to ‘hack humanity’ and know me far better than I know myself. Once this happens, the belief in individualism will collapse and authority will shift from individual humans to networked algorithms. People will no longer see themselves as autonomous beings running their lives according to their wishes, but instead will become accustomed to seeing themselves as a collection of biochemical mechanisms that is constantly monitored and guided by a network of electronic algorithms. For this to happen, there is no need of an external algorithm that knows me perfectly and never makes any mistake; it is enough that the algorithm will know me better than I know myself, and will make fewer mistakes than I do. It will then make sense to trust this algorithm with more and more of my decisions and life choices.

  We have already crossed this line as far as medicine is concerned. In hospitals we are no longer individuals. It is highly likely that during your lifetime many of the most momentous decisions about your body and health will be taken by computer algorithms such as IBM’s Watson. And this is not necessarily bad news. Diabetics already carry sensors that automatically check their sugar level several times a day, alerting them whenever it crosses a dangerous threshold. In 2014 researchers at Yale University announced the first successful trial of an ‘artificial pancreas’ controlled by an iPhone. Fifty-two diabetics took part in the experiment. Each patient had a tiny sensor and a tiny pump implanted in his or her abdomen. The pump was connected to small tubes of insulin and glucagon, two hormones that together regulate sugar levels in the blood. The sensor constantly measured the sugar level, transmitting the data to an iPhone. The iPhone hosted an application that analysed the information, and whenever necessary gave orders to the pump, which injected measured amounts of either insulin or glucagon – without any need of human intervention.22
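  The closed loop just described – sensor reading, software decision, pump action, repeat – can be pictured with a minimal sketch. The following Python snippet is purely illustrative: the target range, the dose calculations and the read_sensor/pump interfaces are assumptions made up for the example, not the Yale system or any real medical device.

```python
import random
import time

# Illustrative target range in mg/dL; a real controller would use
# personalised, far more sophisticated dosing models.
LOW, HIGH = 70, 180

def read_sensor():
    """Stand-in for the implanted glucose sensor (returns mg/dL)."""
    return random.gauss(120, 40)  # simulated measurement

def pump(hormone, units):
    """Stand-in for the implanted pump receiving a dosing order."""
    print(f"pump: deliver {units:.1f} units of {hormone}")

def control_step():
    """One pass of the loop: measure, decide, act."""
    glucose = read_sensor()
    if glucose > HIGH:
        pump("insulin", (glucose - HIGH) * 0.05)   # bring sugar down
    elif glucose < LOW:
        pump("glucagon", (LOW - glucose) * 0.05)   # bring sugar up
    # otherwise the level is in range and nothing is injected

if __name__ == "__main__":
    for _ in range(5):  # a real device would run this loop continuously
        control_step()
        time.sleep(1)
```

  The sketch only shows the shape of the cycle that the paragraph describes: measure, compare with a target range, dose accordingly, repeat – with no human in the loop.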

  Many other people who suffer from no serious illnesses have begun to use wearable sensors and computers to monitor their health and activities. These devices – incorporated into anything from smartphones and wristwatches to armbands and underwear – record diverse biometric data such as blood pressure and heart rate. The data is then fed into sophisticated computer programs that advise the wearer on how to alter his or her diet and daily routines in order to enjoy improved health and a longer and more productive life.23 Google, together with the drug giant Novartis, is developing a contact lens that checks glucose levels in the blood every few seconds by analysing the composition of tears.24 Pixie Scientific sells ‘smart diapers’ that analyse baby poop for clues about the child’s medical condition. In November 2014 Microsoft launched the Microsoft Band – a smart armband that monitors among other things your heartbeat, the quality of your sleep and the number of steps you take each day. An application called Deadline goes a step further, informing you of how many years of life you have left, given your current habits.

 
