Sapiens and Homo Deus


by Yuval Noah Harari


  A recent study commissioned by Google’s nemesis – Facebook – has indicated that already today the Facebook algorithm is a better judge of human personalities and dispositions than even people’s friends, parents and spouses. The study was conducted on 86,220 volunteers who have a Facebook account and who completed a hundred-item personality questionnaire. The Facebook algorithm predicted the volunteers’ answers based on monitoring their Facebook Likes – which webpages, images and clips they tagged with the Like button. The more Likes, the more accurate the predictions. The algorithm’s predictions were compared with those of work colleagues, friends, family members and spouses. Amazingly, the algorithm needed a set of only ten Likes in order to outperform the predictions of work colleagues. It needed seventy Likes to outperform friends, 150 Likes to outperform family members and 300 Likes to outperform spouses. In other words, if you happen to have clicked 300 Likes on your Facebook account, the Facebook algorithm can predict your opinions and desires better than your husband or wife!
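  To make the mechanics concrete, here is a minimal sketch of the kind of prediction the study describes: a simple linear model fitted to a user–Like matrix in order to estimate questionnaire-derived trait scores. The data, dimensions and model choice below are illustrative assumptions, not the study’s actual pipeline.

```python
# Illustrative sketch of Like-based personality prediction, in the spirit of
# the study described above. All data here is synthetic, and the model choice
# (ridge regression over a binary user-Like matrix) is an assumption.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_users, n_pages = 1_000, 500                         # hypothetical volunteers and pages
likes = rng.integers(0, 2, size=(n_users, n_pages))   # 1 = the user Liked that page
# A synthetic questionnaire-derived trait score, loosely tied to the Likes.
trait = (likes @ rng.normal(size=n_pages)) * 0.05 + rng.normal(size=n_users)

X_train, X_test, y_train, y_test = train_test_split(likes, trait, random_state=0)
model = Ridge(alpha=1.0).fit(X_train, y_train)        # linear model over the Like matrix
predicted = model.predict(X_test)
print("correlation with questionnaire scores:", np.corrcoef(predicted, y_test)[0, 1])
```

  The study’s headline numbers (ten Likes to beat colleagues, 300 to beat spouses) come from comparing such algorithmic predictions against human judges’ ratings as the number of available Likes grows.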

  Indeed, in some fields the Facebook algorithm did better than the people themselves. Participants were asked to evaluate things such as their level of substance use or the size of their social networks. Their judgements were less accurate than those of the algorithm. The research concludes with the following prediction (made by the human authors of the article, not by the Facebook algorithm): ‘People might abandon their own psychological judgements and rely on computers when making important life decisions, such as choosing activities, career paths, or even romantic partners. It is possible that such data-driven decisions will improve people’s lives.’32

  On a more sinister note, the same study implies that in future US presidential elections Facebook could know not only the political opinions of tens of millions of Americans, but also who among them are the critical swing voters, and how these voters might be swung. Facebook could tell that in Oklahoma the race between Republicans and Democrats is particularly close, identify the 32,417 voters who still haven’t made up their minds, and determine what each candidate needs to say in order to tip the balance. How could Facebook obtain this priceless political data? We provide it for free.

  In the heyday of European imperialism, conquistadors and merchants bought entire islands and countries in exchange for coloured beads. In the twenty-first century our personal data is probably the most valuable resource most humans still have to offer, and we are giving it to the tech giants in exchange for email services and funny cat videos.

  From Oracle to Sovereign

  Once Google, Facebook and other algorithms become all-knowing oracles, they may well evolve into agents and ultimately into sovereigns.33 To understand this trajectory, consider the case of Waze – a GPS-based navigational application that many drivers use nowadays. Waze isn’t just a map. Its millions of users constantly update it about traffic jams, car accidents and police cars. Hence Waze knows to divert you away from heavy traffic, and bring you to your destination through the quickest possible route. When you reach a junction and your gut instinct tells you to turn right, but Waze instructs you to turn left, you sooner or later learn that you had better listen to Waze rather than to your feelings.34

  At first sight it seems that the Waze algorithm serves only as an oracle. You ask a question, the oracle replies, but it is up to you to make a decision. If the oracle wins your trust, however, the next logical step is to turn it into an agent. You give the algorithm only a final aim, and it acts to realise that aim without your supervision. In the case of Waze, this may happen when you connect Waze to a self-driving car, and tell Waze ‘take the fastest route home’ or ‘take the most scenic route’ or ‘take the route which will result in the minimum amount of pollution’. You call the shots, but leave it to Waze to execute your commands.

  Finally, Waze might become sovereign. Having so much power in its hands, and knowing far more than you, it may start manipulating you and the other drivers, shaping your desires and making your decisions for you. For example, suppose that, because Waze is so good, everybody starts using it. And suppose there is a traffic jam on route no. 1, while the alternative route no. 2 is relatively open. If Waze simply lets everybody know that, then all drivers will rush to route no. 2, and it too will be clogged. When everybody uses the same oracle, and everybody believes the oracle, the oracle turns into a sovereign. So Waze must think for us. Maybe it will inform only half the drivers that route no. 2 is open, while keeping this information secret from the other half. Thereby pressure will ease on route no. 1 without blocking route no. 2.
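  The ‘inform only half the drivers’ idea is, at bottom, a load-balancing rule. A toy sketch of it, with an entirely made-up congestion model and a fixed 50 per cent split, might look like this; nothing here reflects how Waze actually routes traffic.

```python
# Toy illustration of the 'sovereign' routing idea: reveal the faster route to
# only a fraction of drivers so that neither route becomes clogged.
# The congestion model and the 50% split are illustrative assumptions.
import random

def assign_routes(n_drivers, share_told_route_2_is_open=0.5):
    """Return a route number (1 or 2) for each driver.

    Drivers who are told route 2 is open switch to it; the rest stay on route 1.
    """
    return [2 if random.random() < share_told_route_2_is_open else 1
            for _ in range(n_drivers)]

def travel_time(cars_on_route, free_flow_minutes=20, minutes_per_car=0.01):
    # Simple linear congestion model: the more cars, the slower the trip.
    return free_flow_minutes + minutes_per_car * cars_on_route

routes = assign_routes(10_000)
for route in (1, 2):
    cars = routes.count(route)
    print(f"route {route}: {cars} cars, about {travel_time(cars):.0f} minutes")
```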

  Microsoft is developing a far more sophisticated system called Cortana, named after an AI character in its popular Halo video-game series. Cortana is an AI personal assistant that Microsoft hopes to include as an integral feature of future versions of Windows. Users will be encouraged to allow Cortana access to all their files, emails and applications, so that it will get to know them and can thereby offer advice on myriad matters, as well as becoming a virtual agent representing the user’s interests. Cortana could remind you to buy something for your wife’s birthday, select the present, reserve a table at a restaurant and prompt you to take your medicine an hour before dinner. It could alert you that if you don’t stop reading now, you will be late for an important business meeting. As you are about to enter the meeting, Cortana will warn that your blood pressure is too high and your dopamine level too low, and based on past statistics, you tend to make serious business mistakes in such circumstances. So you had better keep things tentative and avoid committing yourself or signing any deals.

  Once Cortanas evolve from oracles to agents, they might start speaking directly with one another on their masters’ behalf. It can begin innocently enough, with my Cortana contacting your Cortana to agree on a place and time for a meeting. Next thing I know, a potential employer will tell me not to bother sending a CV, but simply allow his Cortana to grill my Cortana. Or my Cortana may be approached by the Cortana of a potential lover, and the two will compare notes to decide whether it’s a good match – completely unbeknown to their human owners.

  As Cortanas gain authority, they may begin manipulating each other to further the interests of their masters, so that success in the job market or the marriage market may increasingly depend on the quality of your Cortana. Rich people owning the most up-to-date Cortana will have a decisive advantage over poor people with their older versions.

  But the murkiest issue of all concerns the identity of Cortana’s master. As we have seen, humans are not individuals, and they don’t have a single unified self. Whose interests, then, should Cortana serve? Suppose my narrating self makes a New Year resolution to start a diet and go to the gym every day. A week later, when it is time for the gym, the experiencing self instructs Cortana to turn on the TV and order pizza. What should Cortana do? Should it obey the experiencing self, or the resolution taken a week earlier by the narrating self?

  You may wonder whether Cortana is really different from an alarm clock, which the narrating self sets in the evening in order to wake the experiencing self in time for work. But Cortana will have far more power over me than an alarm clock. The experiencing self can silence the alarm clock by pressing a button. In contrast, Cortana will know me so well that it will know exactly what inner buttons to push in order to make me follow its ‘advice’.

  Microsoft’s Cortana is not alone in this game. Google Now and Apple’s Siri are headed in the same direction. Amazon too employs algorithms that constantly study you and then use their accumulated knowledge to recommend products. When I go to a physical bookstore I wander among the shelves and trust my feelings to choose the right book. When I go to visit Amazon’s virtual shop, an algorithm immediately pops up and tells me: ‘I know which books you liked in the past. People with similar tastes also tend to love this or that new book.’
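  The ‘people with similar tastes’ line is a plain-language description of collaborative filtering. A minimal sketch of that logic, with a made-up ratings matrix and book list (it is not Amazon’s actual system), is given below.

```python
# Minimal user-based collaborative filtering sketch: score unseen books by how
# often readers with similar tastes liked them. Ratings and titles are made up.
import numpy as np

books = ["Book A", "Book B", "Book C", "Book D", "Book E"]
# rows = readers, columns = books; 1 = liked, 0 = no signal
ratings = np.array([
    [1, 1, 0, 0, 1],
    [1, 1, 1, 0, 0],
    [0, 1, 1, 1, 0],
    [1, 0, 0, 1, 1],
])

def recommend(my_likes, ratings, books, top_n=2):
    my = np.array(my_likes)
    similarity = ratings @ my          # how many Likes each reader shares with me
    scores = similarity @ ratings      # weight each reader's Likes by that overlap
    scores[my == 1] = -1               # never recommend what I already liked
    best = np.argsort(scores)[::-1][:top_n]
    return [books[i] for i in best]

print(recommend([1, 1, 0, 0, 0], ratings, books))   # the two best-matching unseen books
```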

  And this is just the beginning. Today in the US more people read digital books than printed ones. Devices such as Amazon’s Kindle are able to collect data on their users while they are reading. Your Kindle can, for example, monitor which parts of a book you read quickly, and which slowly; on which page you took a break, and on which sentence you abandoned the book, never to pick it up again. (Better tell the author to rewrite that bit.) If Kindle is upgraded with face recognition and biometric sensors, it will know how each sentence you read influenced your heart rate and blood pressure. It will know what made you laugh, what made you sad and what made you angry. Soon, books will read you while you are reading them. And whereas you quickly forget most of what you read, Amazon will never forget a thing. Such data will enable Amazon to choose books for you with uncanny precision. It will also enable Amazon to know exactly who you are, and how to turn you on and off.35
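  A sketch of the kind of reading telemetry described here, tracking time per page and where a book is abandoned, could be as simple as the following. The field names, the biometric reading and the whole structure are hypothetical; they are not an actual Kindle interface.

```python
# Hypothetical reading-telemetry sketch: time spent per page, optional biometric
# readings, and where the reader stopped. Not an actual Kindle API.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class PageEvent:
    page: int
    seconds_on_page: float
    heart_rate: Optional[int] = None      # would require a biometric sensor

@dataclass
class ReadingSession:
    book_id: str
    events: List[PageEvent] = field(default_factory=list)

    def slowest_pages(self, n=3):
        """Pages the reader lingered on: a proxy for difficulty or interest."""
        return sorted(self.events, key=lambda e: -e.seconds_on_page)[:n]

    def abandoned_at(self, total_pages):
        """Last page reached, if the reader never finished the book."""
        last = max((e.page for e in self.events), default=0)
        return last if last < total_pages else None

session = ReadingSession("example-book")
session.events.append(PageEvent(page=84, seconds_on_page=95.0, heart_rate=72))
session.events.append(PageEvent(page=85, seconds_on_page=12.5))
print(session.slowest_pages(1), session.abandoned_at(total_pages=450))
```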

  Eventually we may reach a point when it will be impossible to disconnect from this all-knowing network even for a moment. Disconnection will mean death. If medical hopes are realised, future humans will incorporate into their bodies a host of biometric devices, bionic organs and nano-robots, which will monitor our health and defend us from infections, illnesses and damage. Yet these devices will have to be online 24/7, both in order to be updated with the latest medical developments, and to protect them from the new plagues of cyberspace. Just as my home computer is constantly attacked by viruses, worms and Trojan horses, so will be my pacemaker, hearing aid and nanotech immune system. If I don’t update my body’s anti-virus program regularly, I will wake up one day to discover that the millions of nano-robots coursing through my veins are now controlled by a North Korean hacker.

  The new technologies of the twenty-first century may thus reverse the humanist revolution, stripping humans of their authority, and empowering non-human algorithms instead. If you are horrified by this direction, don’t blame the computer geeks. The responsibility actually lies with the biologists. It is crucial to realise that this entire trend is fuelled more by biological insights than by computer science. It is the life sciences that concluded that organisms are algorithms. If this is not the case – if organisms function in an inherently different way to algorithms – then computers may work wonders in other fields, but they will not be able to understand us and direct our life, and they will certainly be incapable of merging with us. Yet once biologists concluded that organisms are algorithms, they dismantled the wall between the organic and inorganic, turned the computer revolution from a purely mechanical affair into a biological cataclysm, and shifted authority from individual humans to networked algorithms.

  Some people are indeed horrified by this development, but the fact is that millions willingly embrace it. Already today many of us give up our privacy and our individuality by conducting much of our lives online, recording our every action and becoming hysterical if connection to the net is interrupted even for a few minutes. The shifting of authority from humans to algorithms is happening all around us, not as a result of some momentous governmental decision, but due to a flood of mundane personal choices.

  If we are not careful the result might be an Orwellian police state that constantly monitors and controls not only all our actions, but even what happens inside our bodies and our brains. Just think what uses Stalin could have found for omnipresent biometric sensors – and what uses Putin might yet find for them. However, while defenders of human individuality fear a repetition of twentieth-century nightmares and brace themselves to resist familiar Orwellian foes, human individuality is now facing an even bigger threat from the opposite direction. In the twenty-first century the individual is more likely to disintegrate gently from within than to be brutally crushed from without. Today most corporations and governments pay homage to my individuality, and promise to provide medicine, education and entertainment customised to my unique needs and wishes. But in order to do so, corporations and governments first need to deconstruct me into biochemical subsystems, monitor these subsystems with ubiquitous sensors and decipher their working with powerful algorithms. In the process, the individual will transpire to be nothing but a religious fantasy. Reality will be a mesh of biochemical and electronic algorithms, without clear borders, and without individual hubs.

  Upgrading Inequality

  So far we have looked at two of the three practical threats to liberalism: firstly, that humans will lose their value completely; secondly, that humans will still be valuable collectively, but will lose their individual authority, and instead be managed by external algorithms. The system will still need you to compose symphonies, teach history or write computer code, but it will know you better than you know yourself, and will therefore make most of the important decisions for you – and you will be perfectly happy with that. It won’t necessarily be a bad world; it will, however, be a post-liberal world.

  The third threat to liberalism is that some people will remain both indispensable and undecipherable, but they will constitute a small and privileged elite of upgraded humans. These superhumans will enjoy unheard-of abilities and unprecedented creativity, which will allow them to go on making many of the most important decisions in the world. They will perform crucial services for the system, while the system could neither understand nor manage them. However, most humans will not be upgraded, and will consequently become an inferior caste dominated by both computer algorithms and the new superhumans.

  Splitting humankind into biological castes will destroy the foundations of liberal ideology. Liberalism can coexist with socio-economic gaps. Indeed, since it favours liberty over equality, it takes such gaps for granted. However, liberalism still presupposes that all human beings have equal value and authority. From a liberal perspective, it is perfectly all right that one person is a billionaire living in a sumptuous chateau, whereas another is a poor peasant living in a straw hut. For according to liberalism, the peasant’s unique experiences are still just as valuable as the billionaire’s. That’s why liberal authors write long novels about the experiences of poor peasants – and why even billionaires avidly read such books. If you go to see Les Misérables on Broadway or in Covent Garden, you will find that good seats can cost hundreds of dollars, and the audience’s combined wealth probably runs into the billions, yet they still sympathise with Jean Valjean who served nineteen years in jail for stealing a loaf of bread to feed his starving nephews.

  The same logic operates on election day, when the vote of the poor peasant counts for exactly the same as the billionaire’s. The liberal solution for social inequality is to give equal value to different human experiences, instead of trying to create the same experiences for everyone. However, will this solution still work once rich and poor are separated not merely by wealth, but also by real biological gaps?

  In her New York Times article, Angelina Jolie referred to the high costs of genetic testing. The test Jolie had taken costs $3,000 (not including the price of the actual mastectomy, the reconstructive surgery and related treatments). This in a world where 1 billion people earn less than $1 per day, and another 1.5 billion earn between $1 and $2 a day.36 Even if they work hard their entire life, these people will never be able to afford a $3,000 genetic test. And the economic gaps are at present only increasing. As of early 2016, the sixty-two richest people in the world were worth as much as the poorest 3.6 billion people! Since the world’s population is about 7.2 billion, it means that these sixty-two billionaires together hold as much wealth as the entire bottom half of humankind.37

  The cost of DNA testing is likely to go down with time, but expensive new procedures are constantly being pioneered. So while old treatments will gradually come within reach of the masses, the elites will always remain a couple of steps ahead. Throughout history the rich have enjoyed many social and political advantages, but no huge biological gap ever separated them from the poor. Medieval aristocrats claimed that superior blue blood was flowing through their veins, and Hindu Brahmins insisted that they were naturally smarter than everyone else, but this was pure fiction. In the future, however, we may see real gaps in physical and cognitive abilities opening between an upgraded upper class and the rest of society.
  When scientists are confronted with this scenario, their standard reply is that in the twentieth century, too, many medical breakthroughs began with the rich, but eventually benefited the entire population and helped to narrow rather than widen the social gaps. For example, vaccines and antibiotics at first profited mainly the upper classes in Western countries, but today they improve the lives of all humans everywhere.

  However, the expectation that this process will be repeated in the twenty-first century may be just wishful thinking, for two important reasons. First, medicine is undergoing a tremendous conceptual revolution. Twentieth-century medicine aimed to heal the sick. Twenty-first-century medicine is increasingly aiming to upgrade the healthy. Healing the sick was an egalitarian project, because it assumed that there is a normative standard of physical and mental health that everyone can and should enjoy. If someone fell below the norm, it was the job of doctors to fix the problem and help him or her ‘be like everyone’. In contrast, upgrading the healthy is an elitist project, because it rejects the idea of a universal standard applicable to all and seeks to give some individuals an edge over others. People want superior memories, above-average intelligence and first-class sexual abilities. If some form of upgrade becomes so cheap and common that everyone enjoys it, it will simply be considered the new baseline, which the next generation of treatments will strive to surpass.

  Consequently by 2070 the poor could very well enjoy much better healthcare than today, but the gap separating them from the rich will nevertheless be much greater. People usually compare themselves to their more fortunate contemporaries rather than to their ill-fated ancestors. If you tell a poor American in a Detroit slum that he has access to much better healthcare than his great-grandparents did a century ago, it is unlikely to cheer him up. Indeed, such talk will sound terribly smug and condescending. ‘Why should I compare myself to nineteenth-century factory workers or peasants?’ he would retort. ‘I want to live like the rich people on television, or at least like the folks in the affluent suburbs’. Similarly, if in 2070 you tell the lower classes that they enjoy better healthcare than in 2017, it might be very cold comfort to them, because they would be comparing themselves to the upgraded superhumans who dominate the world.

 
