21 Lessons for the 21st Century


by Yuval Noah Harari


  To really achieve its goals, universal basic support will have to be supplemented by some meaningful pursuits, ranging from sports to religion. Perhaps the most successful experiment so far in how to live a contented life in a post-work world has been conducted in Israel. There, about 50% of ultra-Orthodox Jewish men never work. They dedicate their lives to studying holy scriptures and performing religious rituals. They and their families don’t starve partly because the wives often work, and partly because the government provides them with generous subsidies and free services, making sure that they don’t lack the basic necessities of life. That’s universal basic support avant la lettre.30

  Although they are poor and unemployed, in survey after survey these ultra-Orthodox Jewish men report higher levels of life satisfaction than any other section of Israeli society. This is due to the strength of their community bonds, as well as to the deep meaning they find in studying scriptures and performing rituals. A small room full of Jewish men discussing the Talmud might well generate more joy, engagement and insight than a huge textile sweatshop full of hard-working factory hands. In global surveys of life satisfaction, Israel is usually somewhere near the top, thanks in part to the contribution of these jobless poor people.31

  Secular Israelis often complain bitterly that the ultra-Orthodox don’t contribute enough to society, and live off other people’s hard work. Secular Israelis also tend to argue that the ultra-Orthodox way of life is unsustainable, especially as ultra-Orthodox families have seven children on average.32 Sooner or later, the state will not be able to support so many unemployed people, and the ultra-Orthodox will have to go to work. Yet it might be just the reverse. As robots and AI push humans out of the job market, the ultra-Orthodox Jews may come to be seen as the model of the future rather than as a fossil from the past. Not that everyone will become Orthodox Jews and go to the yeshivas to study the Talmud. But in the lives of all people, the quest for meaning and for community might eclipse the quest for a job.

  If we manage to combine a universal economic safety net with strong communities and meaningful pursuits, losing our jobs to the algorithms might actually turn out to be a blessing. Losing control over our lives, however, is a much scarier scenario. Notwithstanding the danger of mass unemployment, what we should worry about even more is the shift in authority from humans to algorithms, which might destroy any remaining faith in the liberal story and open the way to the rise of digital dictatorships.

  3

  LIBERTY

  Big Data is watching you

  The liberal story cherishes human liberty as its number one value. It argues that all authority ultimately stems from the free will of individual humans, as it is expressed in their feelings, desires and choices. In politics, liberalism believes that the voter knows best. It therefore upholds democratic elections. In economics, liberalism maintains that the customer is always right. It therefore hails free-market principles. In personal matters, liberalism encourages people to listen to themselves, be true to themselves, and follow their hearts – as long as they do not infringe on the liberties of others. This personal freedom is enshrined in human rights.

  In Western political discourse the term ‘liberal’ is sometimes used today in a much narrower partisan sense, to denote those who support specific causes like gay marriage, gun control and abortion. Yet most so-called conservatives also embrace the broad liberal world view. Especially in the United States, both Republicans and Democrats should occasionally take a break from their heated quarrels to remind themselves that they all agree on fundamentals such as free elections, an independent judiciary, and human rights.

  In particular, it is vital to remember that right-wing heroes such as Ronald Reagan and Margaret Thatcher were great champions not only of economic freedoms but also of individual liberties. In a famous interview in 1987, Thatcher said that ‘There is no such thing as society. There is [a] living tapestry of men and women … and the quality of our lives will depend upon how much each of us is prepared to take responsibility for ourselves.’1

  Thatcher’s heirs in the Conservative Party fully agree with the Labour Party that political authority comes from the feelings, choices and free will of individual voters. Thus when Britain needed to decide whether it should leave the EU, Prime Minister David Cameron didn’t ask Queen Elizabeth II, the Archbishop of Canterbury, or the Oxford and Cambridge dons to resolve the issue. He didn’t even ask the Members of Parliament. Rather, he held a referendum in which each and every Briton was asked: ‘What do you feel about it?’

  You might object that people were asked ‘What do you think?’ rather than ‘What do you feel?’, but this is a common misperception. Referendums and elections are always about human feelings, not about human rationality. If democracy were a matter of rational decision-making, there would be absolutely no reason to give all people equal voting rights – or perhaps any voting rights. There is ample evidence that some people are far more knowledgeable and rational than others, certainly when it comes to specific economic and political questions.2 In the wake of the Brexit vote, eminent biologist Richard Dawkins protested that the vast majority of the British public – including himself – should never have been asked to vote in the referendum, because they lacked the necessary background in economics and political science. ‘You might as well call a nationwide plebiscite to decide whether Einstein got his algebra right, or let passengers vote on which runway the pilot should land.’3

  However, for better or worse, elections and referendums are not about what we think. They are about what we feel. And when it comes to feelings, Einstein and Dawkins are no better than anyone else. Democracy assumes that human feelings reflect a mysterious and profound ‘free will’, that this ‘free will’ is the ultimate source of authority, and that while some people are more intelligent than others, all humans are equally free. Like Einstein and Dawkins, an illiterate maid also has free will, hence on election day her feelings – represented by her vote – count just as much as anybody else’s.

  Feelings guide not just the voters, but also the leaders. In the 2016 Brexit referendum the Leave campaign was headed jointly by Boris Johnson and Michael Gove. After David Cameron resigned, Gove initially supported Johnson for the premiership, but at the very last minute Gove declared Johnson unfit for the position and announced his own intention to run for the job. Gove’s action, which destroyed Johnson’s chances, was described as a Machiavellian political assassination.4 But Gove defended his conduct by appealing to his feelings, explaining that ‘In every step in my political life I have asked myself one question: “What is the right thing to do? What does your heart tell you?”’5 That’s why, according to Gove, he fought so hard for Brexit, and that’s why he felt compelled to backstab his erstwhile ally Boris Johnson and bid for the alpha-dog position himself – because his heart told him to do it.

  This reliance on the heart might prove to be the Achilles heel of liberal democracy. For once somebody (whether in Beijing or in San Francisco) gains the technological ability to hack and manipulate the human heart, democratic politics will mutate into an emotional puppet show.

  Listen to the algorithm

  The liberal belief in the feelings and free choices of individuals is neither natural nor very ancient. For thousands of years people believed that authority came from divine laws rather than from the human heart, and that we should therefore sanctify the word of God rather than human liberty. Only in the last few centuries did the source of authority shift from celestial deities to flesh-and-blood humans.

  Soon authority might shift again – from humans to algorithms. Just as divine authority was legitimised by religious mythologies, and human authority was justified by the liberal story, so the coming technological revolution might establish the authority of Big Data algorithms, while undermining the very idea of individual freedom.

  As we mentioned in the previous chapter, scientific insights into the way our brains and bodies work suggest that our feelings are not some uniquely human spiritual quality, and they do not reflect any kind of ‘free will’. Rather, feelings are biochemical mechanisms that all mammals and birds use in order to quickly calculate probabilities of survival and reproduction. Feelings aren’t based on intuition, inspiration or freedom – they are based on calculation.

  When a monkey, mouse or human sees a snake, fear arises because millions of neurons in the brain swiftly calculate the relevant data and conclude that the probability of death is high. Feelings of sexual attraction arise when other biochemical algorithms calculate that a nearby individual offers a high probability of successful mating, social bonding, or some other coveted goal. Moral feelings such as outrage, guilt or forgiveness derive from neural mechanisms that evolved to enable group cooperation. All these biochemical algorithms were honed through millions of years of evolution. If the feelings of some ancient ancestor made a mistake, the genes shaping these feelings did not pass on to the next generation. Feelings are thus not the opposite of rationality – they embody evolutionary rationality.

  We usually fail to realise that feelings are in fact calculations, because the rapid process of calculation occurs far below our threshold of awareness. We don’t feel the millions of neurons in the brain computing probabilities of survival and reproduction, so we erroneously believe that our fear of snakes, our choice of sexual mates, or our opinions about the European Union are the result of some mysterious ‘free will’.

  Nevertheless, though liberalism is wrong to think that our feelings reflect a free will, up until today relying on feelings still made good practical sense. For although there was nothing magical or free about our feelings, they were the best method in the universe for deciding what to study, who to marry, and which party to vote for. And no outside system could hope to understand my feelings better than me. Even if the Spanish Inquisition or the Soviet KGB spied on me every minute of every day, they lacked the biological knowledge and the computing power necessary to hack the biochemical processes shaping my desires and choices. For all practical purposes, it was reasonable to argue that I have free will, because my will was shaped mainly by the interplay of inner forces, which nobody outside could see. I could enjoy the illusion that I control my secret inner arena, while outsiders could never really understand what is happening inside me and how I make decisions.

  Accordingly, liberalism was correct in counselling people to follow their heart rather than the dictates of some priest or party apparatchik. However, computer algorithms might soon give you better counsel than human feelings. As the Spanish Inquisition and the KGB give way to Google and Baidu, ‘free will’ will likely be exposed as a myth, and liberalism might lose its practical advantages.

  For we are now at the confluence of two immense revolutions. On the one hand biologists are deciphering the mysteries of the human body, and in particular, of the brain and of human feelings. At the same time computer scientists are giving us unprecedented data-processing power. When the biotech revolution merges with the infotech revolution, it will produce Big Data algorithms that can monitor and understand my feelings much better than I can, and then authority will probably shift from humans to computers. My illusion of free will is likely to disintegrate as I daily encounter institutions, corporations and government agencies that understand and manipulate what was hitherto my inaccessible inner realm.

  This is already happening in the field of medicine. The most important medical decisions in our life rely not on our feelings of illness or wellness, or even on the informed predictions of our doctor – but on the calculations of computers which understand our bodies much better than we do. Within a few decades, Big Data algorithms informed by a constant stream of biometric data could monitor our health 24/7. They could detect the very beginning of influenza, cancer or Alzheimer’s disease, long before we feel anything is wrong with us. They could then recommend appropriate treatments, diets and daily regimens, custom-built for our unique physique, DNA and personality.
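
  To make the mechanics concrete, here is a minimal sketch of the simplest form such monitoring could take: build a rolling baseline from a person’s own sensor readings and flag anything that deviates sharply from it. Everything in it – the window size, the threshold, the numbers – is an illustrative assumption, not a real diagnostic method.

    from collections import deque
    from statistics import mean, stdev

    class BiometricMonitor:
        """Toy sketch: flag readings that deviate from a rolling personal baseline."""

        def __init__(self, window: int = 500, z_threshold: float = 3.0):
            self.readings = deque(maxlen=window)  # recent history = the baseline
            self.z_threshold = z_threshold

        def observe(self, value: float) -> bool:
            """Record one sensor reading; return True if it looks anomalous."""
            anomalous = False
            if len(self.readings) >= 30:  # need enough history for a stable baseline
                mu, sigma = mean(self.readings), stdev(self.readings)
                if sigma > 0 and abs(value - mu) / sigma > self.z_threshold:
                    anomalous = True
            self.readings.append(value)
            return anomalous

    # A synthetic heart-rate stream: steady alternation, then a sudden spike.
    monitor = BiometricMonitor()
    stream = [62.0, 64.0] * 100 + [95.0]
    alerts = [i for i, beat in enumerate(stream) if monitor.observe(beat)]
    print(alerts)  # -> [200]: only the sudden spike is flagged

  A real system would fuse many signals through clinically validated models; the point is only that such an alert can fire long before the wearer feels anything.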

  People will enjoy the best healthcare in history, but for precisely this reason they will probably be sick all the time. There is always something wrong somewhere in the body. There is always something that can be improved. In the past, you felt perfectly healthy as long as you didn’t sense pain or suffer from an apparent disability such as limping. But by 2050, thanks to biometric sensors and Big Data algorithms, diseases may be diagnosed and treated long before they lead to pain or disability. As a result, you will always find yourself suffering from some ‘medical condition’ and following this or that algorithmic recommendation. If you refuse, perhaps your medical insurance would become invalid, or your boss would fire you – why should they pay the price of your obstinacy?

  It is one thing to continue smoking despite general statistics that connect smoking with lung cancer. It is a very different thing to continue smoking despite a concrete warning from a biometric sensor that has just detected seventeen cancerous cells in your upper left lung. And if you are willing to defy the sensor, what will you do when the sensor forwards the warning to your insurance agency, your manager, and your mother?

  Who will have the time and energy to deal with all these illnesses? In all likelihood, we will just instruct our health algorithm to deal with most of these problems as it sees fit. At most, it will send periodic updates to our smartphones, telling us that ‘seventeen cancerous cells were detected and destroyed’. Hypochondriacs might dutifully read these updates, but most of us will ignore them just as we ignore those annoying anti-virus notices on our computers.

  The drama of decision-making

  What is already beginning to happen in medicine is likely to occur in more and more fields. The key invention is the biometric sensor, which people can wear on or inside their bodies, and which converts biological processes into electronic information that computers can store and analyse. Given enough biometric data and enough computing power, external data-processing systems can hack all your desires, decisions and opinions. They can know exactly who you are.

  Most people don’t know themselves very well. When I was twenty-one, I finally realised that I was gay, after several years of living in denial. That’s hardly exceptional. Many gay men spend their entire teenage years unsure about their sexuality. Now imagine the situation in 2050, when an algorithm can tell any teenager exactly where he is on the gay/straight spectrum (and even how malleable that position is). Perhaps the algorithm shows you pictures or videos of attractive men and women, tracks your eye movements, blood pressure and brain activity, and within five minutes spits out a number on the Kinsey scale.6 It could have saved me years of frustration. Perhaps you personally wouldn’t want to take such a test, but then maybe you find yourself with a group of friends at Michelle’s boring birthday party, and somebody suggests you all take turns checking yourself on this cool new algorithm (with everybody standing around to watch the results – and comment on them). Would you just walk away?

  Even if you do, and even if you keep hiding from yourself and your classmates, you won’t be able to hide from Amazon, Alibaba or the secret police. As you surf the Web, watch YouTube or read your social media feed, the algorithms will discreetly monitor you, analyse you, and tell Coca-Cola that if it wants to sell you some fizzy drink, it had better use the advertisement with the shirtless guy rather than the shirtless girl. You won’t even know. But they will know, and such information will be worth billions.

  Then again, maybe it will all be out in the open, and people will gladly share their information in order to get better recommendations – and eventually in order to get the algorithm to make decisions for them. It starts with simple things, like deciding which movie to watch. As you sit down with a group of friends to spend a cozy evening in front of the TV, you first have to choose what to see. Fifty years ago you had no choice, but today – with the rise of view-on-demand services – there are thousands of titles available. Reaching an agreement can be quite difficult, because while you personally like science-fiction thrillers, Jack prefers romantic comedies, and Jill votes for artsy French films. You may well end up compromising on some mediocre B-movie that disappoints all of you.

  An algorithm might help. You can tell it which previous movies each of you really liked, and based on its massive statistical database, the algorithm can then find the perfect match for the group. Unfortunately, such a crude algorithm is easily misled, particularly because self-reporting is a notoriously unreliable gauge for people’s true preferences. It often happens that we hear lots of people praise some movie as a masterpiece, feel compelled to watch it, and even though we fall asleep midway through, we don’t want to look like philistines, so we tell everyone it was an amazing experience.7
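
  To see how such a crude algorithm might work in practice, here is a minimal sketch. Everything in it – the catalogue, the genres, the similarity measure – is an illustrative assumption rather than any real service’s method: each person’s taste is reduced to the genres of the films they say they liked, and the group pick maximises the least happy member’s score (a ‘least misery’ strategy), breaking ties by the group average.

    # Toy catalogue: every title is reduced to a set of genre tags.
    CATALOGUE = {
        "Arrival":          {"sci-fi", "drama"},
        "Notting Hill":     {"romance", "comedy"},
        "Amélie":           {"romance", "comedy", "french"},
        "Blade Runner":     {"sci-fi", "thriller"},
        "La Haine":         {"drama", "french"},
        "Eternal Sunshine": {"sci-fi", "romance", "drama"},
    }

    def profile(liked_titles):
        """A person's taste = the union of genres across films they say they liked."""
        genres = set()
        for title in liked_titles:
            genres |= CATALOGUE[title]
        return genres

    def enjoyment(person_genres, title):
        """Jaccard overlap between a person's genres and a film's genres."""
        movie_genres = CATALOGUE[title]
        return len(person_genres & movie_genres) / len(person_genres | movie_genres)

    def pick_for_group(liked_by_person, candidates):
        """Score candidates by the group's minimum enjoyment ('least misery'),
        breaking ties by the mean, and return the best one."""
        profiles = [profile(liked) for liked in liked_by_person.values()]
        def score(title):
            scores = [enjoyment(p, title) for p in profiles]
            return (min(scores), sum(scores) / len(scores))
        return max(candidates, key=score)

    group = {
        "you":  ["Blade Runner"],   # science-fiction thrillers
        "Jack": ["Notting Hill"],   # romantic comedies
        "Jill": ["La Haine"],       # artsy French films
    }
    print(pick_for_group(group, ["Arrival", "Amélie", "Eternal Sunshine"]))
    # -> 'Eternal Sunshine': the only candidate that offers everyone something

  Note that the sketch inherits exactly the weakness described above: it is only as good as the liked-films lists people feed it.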

  Such problems, however, can be solved if we just allow the algorithm to collect real-time data on us as we actually watch movies, instead of relying on our own dubious self-reports. For starters, the algorithm can monitor which movies we completed, and which we stopped watching halfway through. Even if we tell the whole world that Gone With the Wind is the best movie ever made, the algorithm will know we never made it past the first half-hour, and we never really saw Atlanta burning.
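
  A minimal sketch of that correction, with hypothetical numbers: scale whatever rating a person professes by the fraction of the film they actually sat through.

    def revealed_preference(stated_rating, minutes_watched, runtime_minutes):
        """Blend what people say with what they did: a five-star rave for a
        film abandoned after half an hour counts for very little."""
        completion = min(minutes_watched / runtime_minutes, 1.0)
        return stated_rating * completion

    # 'Gone With the Wind' runs about 238 minutes.
    print(revealed_preference(stated_rating=5.0, minutes_watched=30, runtime_minutes=238))
    # -> roughly 0.63 out of 5: the professed masterpiece is discounted almost entirely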

  Yet the algorithm can go much deeper than that. Engineers are currently developing software that can detect human emotions based on the movements of our eyes and facial muscles.8 Add a good camera to the television, and such software will know which scenes made us laugh, which scenes made us sad, and which scenes bored us. Next, connect the algorithm to biometric sensors, and the algorithm will know how each frame has influenced our heart rate, our blood pressure, and our brain activity. As we watch, say, Tarantino’s Pulp Fiction, the algorithm may note that the rape scene caused us an almost imperceptible tinge of sexual arousal, that when Vincent accidentally shot Marvin in the face it made us laugh guiltily, and that we didn’t get the joke about the Big Kahuna Burger – but we laughed anyway, so as not to look stupid. When you force yourself to laugh, you use different brain circuits and muscles than when you laugh because something is really funny. Humans cannot usually detect the difference. But a biometric sensor could.9
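
  As a toy illustration of what the resulting data might look like – assuming frame-level emotion estimates from such a face-reading model and from biometric sensors, with all numbers invented – an algorithm could average the intensities per scene to build the viewer’s emotional profile of the film:

    from collections import defaultdict

    # (scene, emotion, intensity 0..1), e.g. one sample per frame from a
    # hypothetical face-reading model fused with biometric sensors.
    frames = [
        ("diner",  "amusement", 0.80), ("diner",  "amusement", 0.70),
        ("chase",  "tension",   0.90), ("chase",  "tension",   0.95),
        ("speech", "boredom",   0.60), ("speech", "boredom",   0.70),
    ]

    def scene_profile(frames):
        """Average the intensity per (scene, emotion) pair across all frames."""
        totals, counts = defaultdict(float), defaultdict(int)
        for scene, emotion, intensity in frames:
            totals[(scene, emotion)] += intensity
            counts[(scene, emotion)] += 1
        return {key: totals[key] / counts[key] for key in totals}

    for (scene, emotion), level in scene_profile(frames).items():
        print(f"{scene}: {emotion} = {level:.2f}")
    # -> diner: amusement = 0.75 / chase: tension = 0.93 / speech: boredom = 0.65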

 
