Homo Deus

by Yuval Noah Harari


  This extreme situation in which all data is processed and all decisions are made by a single central processor is called communism. In a communist economy, people allegedly work according to their abilities, and receive according to their needs. In other words, the government takes 100 per cent of your profits, decides what you need and then supplies these needs. Though no country ever realised this scheme in its extreme form, the Soviet Union and its satellites came as close as they could. They abandoned the principle of distributed data processing, and switched to a model of centralised data processing. All information from throughout the Soviet Union flowed to a single location in Moscow, where all the important decisions were made. Producers and consumers could not communicate directly, and had to obey government orders.

  Credit 1.49

  49. The Soviet leadership in Moscow, 1963: centralised data processing.

  For instance, the Soviet economics ministry might decide that the price of bread in all shops should be exactly two roubles and four kopeks, that a particular kolkhoz in the Odessa oblast should switch from growing wheat to raising chickens, and that the Red October bakery in Moscow should produce 3.5 million loaves of bread per day, and not a single loaf more. Meanwhile the Soviet science ministry forced all Soviet biotech laboratories to adopt the theories of Trofim Lysenko – the infamous head of the Lenin Academy for Agricultural Sciences. Lysenko rejected the dominant genetic theories of his day. He insisted that if an organism acquired some new trait during its lifetime, this quality could pass directly to its descendants. This idea flew in the face of Darwinian orthodoxy, but it dovetailed nicely with communist educational principles. It implied that if you could train wheat plants to withstand cold weather, their progeny would also be cold-resistant. Lysenko accordingly sent billions of counter-revolutionary wheat plants to be re-educated in Siberia – and the Soviet Union was soon forced to import more and more flour from the United States.4

  Credit 1.50

  50. Commotion on the floor of the Chicago Board of Trade: distributed data processing.

  Capitalism did not defeat communism because capitalism was more ethical, because individual liberties are sacred or because God was angry with the heathen communists. Rather, capitalism won the Cold War because distributed data processing works better than centralised data processing, at least in periods of accelerating technological change. The central committee of the Communist Party just could not deal with the rapidly changing world of the late twentieth century. When all data is accumulated in one secret bunker, and all important decisions are taken by a group of elderly apparatchiks, you can produce nuclear bombs by the cartload, but you won’t get an Apple or a Wikipedia.

  There is a story (probably apocryphal, like most good stories) that when Mikhail Gorbachev tried to resuscitate the moribund Soviet economy, he sent one of his chief aides to London to find out what Thatcherism was all about, and how a capitalist system actually functioned. The hosts took their Soviet visitor on a tour of the City, of the London stock exchange and of the London School of Economics, where he had lengthy talks with bank managers, entrepreneurs and professors. After a few hours, the Soviet expert burst out: ‘Just one moment, please. Forget about all these complicated economic theories. We have been going back and forth across London for a whole day now, and there’s one thing I cannot understand. Back in Moscow, our finest minds are working on the bread supply system, and yet there are such long queues in every bakery and grocery store. Here in London live millions of people, and we have passed today in front of many shops and supermarkets, yet I haven’t seen a single bread queue. Please take me to meet the person in charge of supplying bread to London. I must learn his secret.’ The hosts scratched their heads, thought for a moment, and said: ‘Nobody is in charge of supplying bread to London.’

  That’s the capitalist secret of success. No central processing unit monopolises all the data on the London bread supply. The information flows freely between millions of consumers and producers, bakers and tycoons, farmers and scientists. Market forces determine the price of bread, the number of loaves baked each day and the research-and-development priorities. If market forces make the wrong decision, they soon correct themselves, or so capitalists believe. For our current purposes, it doesn’t matter whether the theory is correct. The crucial thing is that the theory understands economics in terms of data processing.

  Where Has All the Power Gone?

  Political scientists also increasingly interpret human political structures as data-processing systems. Like capitalism and communism, democracies and dictatorships are in essence competing mechanisms for gathering and analysing information. Dictatorships use centralised processing methods, whereas democracies prefer distributed processing. In recent decades democracy gained the upper hand because under the unique conditions of the late twentieth century, distributed processing worked better. Under alternative conditions – those prevailing in the ancient Roman Empire, for instance – centralised processing had an edge, which is why the Roman Republic fell and power shifted from the Senate and popular assemblies into the hands of a single autocratic emperor.

  This implies that as data-processing conditions change again in the twenty-first century, democracy might decline and even disappear. As both the volume and speed of data increase, venerable institutions like elections, parties and parliaments might become obsolete – not because they are unethical, but because they don’t process data efficiently enough. These institutions evolved in an era when politics moved faster than technology. In the nineteenth and twentieth centuries, the Industrial Revolution unfolded slowly enough for politicians and voters to remain one step ahead of it and regulate and manipulate its course. Yet whereas the rhythm of politics has not changed much since the days of steam, technology has switched from first gear to fourth. Technological revolutions now outpace political processes, causing MPs and voters alike to lose control.

  The rise of the Internet gives us a taste of things to come. Cyberspace is now crucial to our daily lives, our economy and our security. Yet the critical choices between alternative web designs weren’t taken through a democratic political process, even though they involved traditional political issues such as sovereignty, borders, privacy and security. Did you ever vote about the shape of cyberspace? Decisions made by web designers far from the public limelight mean that today the Internet is a free and lawless zone that erodes state sovereignty, ignores borders, abolishes privacy and poses perhaps the most formidable global security risk. Whereas a decade ago it hardly registered on the radar, today hysterical officials are predicting an imminent cyber 9/11.

  Governments and NGOs consequently conduct intense debates about restructuring the Internet, but it is much harder to change an existing system than to intervene at its inception. Besides, by the time the cumbersome government bureaucracy makes up its mind about cyber regulation, the Internet has morphed ten times. The governmental tortoise cannot keep up with the technological hare. It is overwhelmed by data. The NSA may be spying on your every word, but to judge by the repeated failures of American foreign policy, nobody in Washington knows what to do with all the data. Never in history did a government know so much about what’s going on in the world – yet few empires have botched things up as clumsily as the contemporary United States. It’s like a poker player who knows what cards his opponents hold, yet somehow still manages to lose round after round.

  In the coming decades, it is likely that we will see more Internet-like revolutions, in which technology steals a march on politics. Artificial intelligence and biotechnology might soon overhaul our societies and economies – and our bodies and minds too – but they are hardly a blip on our political radar. Our current democratic structures just cannot collect and process the relevant data fast enough, and most voters don’t understand biology and cybernetics well enough to form any pertinent opinions. Hence traditional democratic politics loses control of events, and fails to provide us with meaningful visions for the future.
  That doesn’t mean we will go back to twentieth-century-style dictatorships. Authoritarian regimes seem to be equally overwhelmed by the pace of technological development and the speed and volume of the data flow. In the twentieth century, dictators had grand visions for the future. Communists and fascists alike sought to completely destroy the old world and build a new world in its place. Whatever you think about Lenin, Hitler or Mao, you cannot accuse them of lacking vision. Today it seems that leaders have a chance to pursue even grander visions. While communists and Nazis tried to create a new society and a new human with the help of steam engines and typewriters, today’s prophets could rely on biotechnology and super-computers.

  In science-fiction films, ruthless Hitler-like politicians are quick to pounce on such new technologies, putting them in the service of this or that megalomaniac political ideal. Yet flesh-and-blood politicians in the early twenty-first century, even in authoritarian countries such as Russia, Iran or North Korea, are nothing like their Hollywood counterparts. They don’t seem to plot any Brave New World. The wildest dreams of Kim Jong-un and Ali Khamenei don’t go much beyond atom bombs and ballistic missiles: that is so 1945. Putin’s aspirations seem confined to rebuilding the old Soviet zone, or the even older tsarist empire. Meanwhile in the USA, paranoid Republicans accuse Barack Obama of being a ruthless despot hatching conspiracies to destroy the foundations of American society – yet in eight years of presidency he barely managed to pass a minor health-care reform. Creating new worlds and new humans is far beyond his agenda.

  Precisely because technology is now moving so fast, and parliaments and dictators alike are overwhelmed by data they cannot process quickly enough, present-day politicians are thinking on a far smaller scale than their predecessors a century ago. In the early twenty-first century, politics is consequently bereft of grand visions. Government has become mere administration. It manages the country, but it no longer leads it. It makes sure teachers are paid on time and sewage systems don’t overflow, but it has no idea where the country will be in twenty years.

  To some extent, this is a very good thing. Given that some of the big political visions of the twentieth century led us to Auschwitz, Hiroshima and the Great Leap Forward, maybe we are better off in the hands of petty-minded bureaucrats. Mixing godlike technology with megalomaniac politics is a recipe for disaster. Many neo-liberal economists and political scientists argue that it is best to leave all the important decisions in the hands of the free market. They thereby give politicians the perfect excuse for inaction and ignorance, which are reinterpreted as profound wisdom. Politicians find it convenient to believe that the reason they don’t understand the world is that they need not understand it.

  Yet mixing godlike technology with myopic politics also has its downside. Lack of vision isn’t always a blessing, and not all visions are necessarily bad. In the twentieth century, the dystopian Nazi vision did not fall apart spontaneously. It was defeated by the equally grand visions of socialism and liberalism. It is dangerous to trust our future to market forces, because these forces do what’s good for the market rather than what’s good for humankind or for the world. The hand of the market is blind as well as invisible, and left to its own devices it may fail to do anything about the threat of global warming or the dangerous potential of artificial intelligence.

  Some people believe that there is somebody in charge after all. Not democratic politicians or autocratic despots, but rather a small coterie of billionaires who secretly run the world. But such conspiracy theories never work, because they underestimate the complexity of the system. A few billionaires smoking cigars and drinking Scotch in some back room cannot possibly understand everything happening on the globe, let alone control it. Ruthless billionaires and small interest groups flourish in today’s chaotic world not because they read the map better than anyone else, but because they have very narrow aims. In a chaotic system, tunnel vision has its advantages, and the billionaires’ power is strictly proportional to their goals. If the world’s richest man wanted to make another billion dollars, he could easily game the system to achieve his goal. In contrast, if he wanted to reduce global inequality or stop global warming, even he would not be able to do it, because the system is far too complex.

  Yet power vacuums seldom last long. If in the twenty-first century traditional political structures can no longer process the data fast enough to produce meaningful visions, then new and more efficient structures will evolve to take their place. These new structures may be very different from any previous political institutions, whether democratic or authoritarian. The only question is who will build and control these structures. If humankind is no longer up to the task, perhaps it might give somebody else a try.

  History in a Nutshell

  From a Dataist perspective, we may interpret the entire human species as a single data-processing system, with individual humans serving as its chips. If so, we can also understand the whole of history as a process of improving the efficiency of this system, through four basic methods:

  1. Increasing the number of processors. A city of 100,000 people has more computing power than a village of 1,000 people.

  2. Increasing the variety of processors. Different processors may use diverse ways to calculate and analyse data. Using several kinds of processors in a single system may therefore increase its dynamism and creativity. A conversation between a peasant, a priest and a physician may produce novel ideas that would never emerge from a conversation between three hunter-gatherers.

  3. Increasing the number of connections between processors. There is little point in increasing the mere number and variety of processors if they are poorly connected to each other. A trade network linking ten cities is likely to result in many more economic, technological and social innovations than ten isolated cities.

  4. Increasing the freedom of movement along existing connections. Connecting processors is hardly useful if data cannot flow freely. Just building roads between ten cities won’t be very useful if they are plagued by robbers, or if some autocratic despot doesn’t allow merchants and travellers to move as they wish.

  These four methods often contradict one another. The greater the number and variety of processors, the harder it is to freely connect them. The construction of the Sapiens data-processing system accordingly passed through four main stages, each characterised by an emphasis on different methods.

  The first stage began with the Cognitive Revolution, which made it possible to connect unlimited numbers of Sapiens into a single data-processing network. This gave Sapiens a crucial advantage over all other human and animal species. While there is a strict limit to the number of Neanderthals, chimpanzees or elephants you can connect to the same net, there is no limit to the number of Sapiens.

  Sapiens used their advantage in data processing to overrun the entire world. However, as they spread into different lands and climates they lost touch with one another, and underwent diverse cultural transformations. The result was an immense variety of human cultures, each with its own lifestyle, behaviour patterns and world view. Hence the first phase of history involved an increase in the number and variety of human processors, at the expense of connectivity: 20,000 years ago there were many more Sapiens than 70,000 years ago, and Sapiens in Europe processed information differently to Sapiens in China. However, there were no connections between people in Europe and China, and it would have seemed utterly impossible that all Sapiens might one day be part of a single data-processing web.

  The second stage began with the Agricultural Revolution and continued until the invention of writing and money about 5,000 years ago. Agriculture accelerated demographic growth, so the number of human processors rose sharply. Simultaneously, agriculture enabled many more people to live together in the same place, thereby generating dense local networks that contained an unprecedented number of processors. In addition, agriculture created new incentives and opportunities for different networks to trade and communicate with one another.

  Nevertheless, during the second phase centrifugal forces remained predominant. In the absence of writing and money, humans could not establish cities, kingdoms or empires. Humankind was still divided into innumerable little tribes, each with its own lifestyle and world view. Uniting the whole of humankind was not even a fantasy.

  The third stage kicked off with the invention of writing and money about 5,000 years ago, and lasted until the beginning of the Scientific Revolution. Thanks to writing and money, the gravitational field of human cooperation finally overpowered the centrifugal forces. Human groups bonded and merged to form cities and kingdoms. Political and commercial links between different cities and kingdoms also tightened. At least since the first millennium BC – when coinage, empires and universal religions appeared – humans began to consciously dream about forging a single network that would encompass the entire globe.

  This dream became a reality during the fourth and last stage of history, which began around 1492. Early modern explorers, conquerors and traders wove the first thin threads that encompassed the whole world. In the late modern period these threads were made stronger and denser, so that the spider’s web of Columbus’s days became the steel and asphalt grid of the twenty-first century. Even more importantly, information was allowed to flow increasingly freely along this global grid. When Columbus first hooked up the Eurasian net to the American net, only a few bits of data could cross the ocean each year, running the gauntlet of cultural prejudices, strict censorship and political repression. But as the years went by, the free market, the scientific community, the rule of law and the spread of democracy all helped to lift the barriers. We often imagine that democracy and the free market won because they were ‘good’. In truth, they won because they improved the global data-processing system.

 
