Enlightenment Now

by Steven Pinker

  Figure 18-4: Happiness and excitement, US, 1972–2016

  Source: “General Social Survey,” Smith, Son, & Schapiro 2015, figs. 1 and 5, updated for 2016 from https://gssdataexplorer.norc.org/projects/15157/variables/438/vshow. Data exclude nonresponses.

  The divergence of the curves is not a paradox. Recall that people who feel they lead meaningful lives are more susceptible to stress, struggle, and worry.90 Consider as well that anxiety has always been a perquisite of adulthood: it rises steeply from the school-age years to the early twenties as people take on adult responsibilities, and then falls steadily over the rest of the life course as they learn to cope with them.91 Perhaps that is emblematic of the challenges of modernity. Though people today are happier, they are not as happy as one might expect, perhaps because they have an adult’s appreciation of life, with all its worry and all its excitement. The original definition of Enlightenment, after all, was “humankind’s emergence from its self-incurred immaturity.”

  CHAPTER 19

  EXISTENTIAL THREATS

  But are we flirting with disaster? When pessimists are forced to concede that life has been getting better and better for more and more people, they have a retort at the ready. We are cheerfully hurtling toward a catastrophe, they say, like the man who fell off the roof and says “So far so good” as he passes each floor. Or we are playing Russian roulette, and the deadly odds are bound to catch up to us. Or we will be blindsided by a black swan, a four-sigma event far along the tail of the statistical distribution of hazards, with low odds but calamitous harm.

  For half a century the four horsemen of the modern apocalypse have been overpopulation, resource shortages, pollution, and nuclear war. They have recently been joined by a cavalry of more exotic knights: nanobots that will engulf us, robots that will enslave us, artificial intelligence that will turn us into raw materials, and Bulgarian teenagers who will brew a genocidal virus or take down the Internet from their bedrooms.

  The sentinels for the familiar horsemen tended to be romantics and Luddites. But those who warn of the higher-tech dangers are often scientists and technologists who have deployed their ingenuity to identify ever more ways in which the world will soon end. In 2003 the eminent astrophysicist Martin Rees published a book entitled Our Final Hour in which he warned that “humankind is potentially the maker of its own demise” and laid out some dozen ways in which we have “endangered the future of the entire universe.” For example, experiments in particle colliders could create a black hole that would annihilate the Earth, or a “strangelet” of compressed quarks that would cause all matter in the cosmos to bind to it and disappear. Rees tapped a rich vein of catastrophism. The book’s Amazon page notes, “Customers who viewed this item also viewed Global Catastrophic Risks; Our Final Invention: Artificial Intelligence and the End of the Human Era; The End: What Science and Religion Tell Us About the Apocalypse; and World War Z: An Oral History of the Zombie War.” Techno-philanthropists have bankrolled research institutes dedicated to discovering new existential threats and figuring out how to save the world from them, including the Future of Humanity Institute, the Future of Life Institute, the Center for the Study of Existential Risk, and the Global Catastrophic Risk Institute.

  How should we think about the existential threats that lurk behind our incremental progress? No one can prophesy that a cataclysm will never happen, and this chapter contains no such assurance. But I will lay out a way to think about them, and examine the major menaces. Three of the threats—overpopulation, resource depletion, and pollution, including greenhouse gases—were discussed in chapter 10, and I will take the same approach here. Some threats are figments of cultural and historical pessimism. Others are genuine, but we can treat them not as apocalypses in waiting but as problems to be solved.

  * * *

  At first glance one might think that the more thought we give to existential risks, the better. The stakes, quite literally, could not be higher. What harm could there be in getting people to think about these terrible risks? The worst that could happen is that we would take some precautions that turn out in retrospect to have been unnecessary.

  But apocalyptic thinking has serious downsides. One is that false alarms to catastrophic risks can themselves be catastrophic. The nuclear arms race of the 1960s, for example, was set off by fears of a mythical “missile gap” with the Soviet Union.1 The 2003 invasion of Iraq was justified by the uncertain but catastrophic possibility that Saddam Hussein was developing nuclear weapons and planning to use them against the United States. (As George W. Bush put it, “We cannot wait for the final proof—the smoking gun—that could come in the form of a mushroom cloud.”) And as we shall see, one of the reasons the great powers refuse to take the common-sense pledge that they won’t be the first to use nuclear weapons is that they want to reserve the right to use them against other supposed existential threats such as bioterror and cyberattacks.2 Sowing fear about hypothetical disasters, far from safeguarding the future of humanity, can endanger it.

  A second hazard of enumerating doomsday scenarios is that humanity has a finite budget of resources, brainpower, and anxiety. You can’t worry about everything. Some of the threats facing us, like climate change and nuclear war, are unmistakable, and will require immense effort and ingenuity to mitigate. Folding them into a list of exotic scenarios with minuscule or unknown probabilities can only dilute the sense of urgency. Recall that people are poor at assessing probabilities, especially small ones, and instead play out scenarios in their mind’s eye. If two scenarios are equally imaginable, they may be considered equally probable, and people will worry about the genuine hazard no more than about the science-fiction plotline. And the more ways people can imagine bad things happening, the higher their estimate that something bad will happen.

  And that leads to the greatest danger of all: that people will think, as a recent New York Times article put it, “These grim facts should lead any reasonable person to conclude that humanity is screwed.”3 If humanity is screwed, why sacrifice anything to reduce potential risks? Why forgo the convenience of fossil fuels, or exhort governments to rethink their nuclear weapons policies? Eat, drink, and be merry, for tomorrow we die! A 2013 survey in four English-speaking countries showed that among the respondents who believe that our way of life will probably end in a century, a majority endorsed the statement “The world’s future looks grim so we have to focus on looking after ourselves and those we love.”4

  Few writers on technological risk give much thought to the cumulative psychological effects of the drumbeat of doom. As Elin Kelsey, an environmental communicator, points out, “We have media ratings to protect children from sex or violence in movies, but we think nothing of inviting a scientist into a second grade classroom and telling the kids the planet is ruined. A quarter of (Australian) children are so troubled about the state of the world that they honestly believe it will come to an end before they get older.”5 According to recent polls, so do 15 percent of people worldwide, and between a quarter and a third of Americans.6 In The Progress Paradox, the journalist Gregg Easterbrook suggests that a major reason that Americans are not happier, despite their rising objective fortunes, is “collapse anxiety”: the fear that civilization may implode and there’s nothing anyone can do about it.

  * * *

  Of course, people’s emotions are irrelevant if the risks are real. But risk assessments fall apart when they deal with highly improbable events in complex systems. Since we cannot replay history thousands of times and count the outcomes, a statement that some event will occur with a probability of .01 or .001 or .0001 or .00001 is essentially a readout of the assessor’s subjective confidence. This includes mathematical analyses in which scientists plot the distribution of events in the past (like wars or cyberattacks) and show they fall into a power-law distribution, one with “fat” or “thick” tails, in which extreme events are highly improbable but not astronomically improbable.7 The math is of little help in calibrating the risk, because the scattershot data along the tail of the distribution generally misbehave, deviating from a smooth curve and making estimation impossible. All we know is that very bad things can happen.
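
  A minimal simulation makes the point concrete. The sketch below (an editorial illustration, not Pinker’s; the distribution and the numbers are invented for the demonstration) draws five equally plausible “histories” from the same fat-tailed distribution and estimates the probability of an extreme event from each:

    # Why power-law tails resist estimation: five runs of the same
    # process give very different empirical answers for P(extreme event).
    import numpy as np

    rng = np.random.default_rng(0)
    alpha = 2.0        # tail exponent; true P(X > 50) = 50**-alpha = 0.0004
    threshold = 50.0

    for trial in range(5):
        u = rng.random(500)                   # 500 observed events per history
        sample = (1.0 - u) ** (-1.0 / alpha)  # Pareto draws via the inverse CDF
        estimate = (sample > threshold).mean()
        print(f"history {trial}: estimated P(X > {threshold:g}) = {estimate:.4f}")

  With only a few hundred observed events, the estimate swings between zero and several times the true value, which is exactly the misbehavior described above: the data along the tail are too sparse to calibrate the risk.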

  That takes us back to subjective readouts, which tend to be inflated by the Availability and Negativity biases and by the gravitas market (chapter 4).8 Those who sow fear about a dreadful prophecy may be seen as serious and responsible, while those who are measured are seen as complacent and naïve. Despair springs eternal. At least since the Hebrew prophets and the Book of Revelation, seers have warned their contemporaries about an imminent doomsday. Forecasts of End Times are a staple of psychics, mystics, televangelists, nut cults, founders of religions, and men pacing the sidewalk with sandwich boards saying “Repent!”9 The storyline that climaxes in harsh payback for technological hubris is an archetype of Western fiction, including Promethean fire, Pandora’s box, Icarus’s flight, Faust’s bargain, the Sorcerer’s Apprentice, Frankenstein’s monster, and, from Hollywood, more than 250 end-of-the-world flicks.10 As the engineer Eric Zencey has observed, “There is seduction in apocalyptic thinking. If one lives in the Last Days, one’s actions, one’s very life, take on historical meaning and no small measure of poignance.”11

  Scientists and technologists are by no means immune. Remember the Y2K bug?12 In the 1990s, as the turn of the millennium drew near, computer scientists began to warn the world of an impending catastrophe. In the early decades of computing, when information was expensive, programmers often saved a couple of bytes by representing a year by its last two digits. They figured that by the time the year 2000 came around and the implicit “19” was no longer valid, the programs would be long obsolete. But complicated software is replaced slowly, and many old programs were still running on institutional mainframes and embedded in chips. When 12:00 A.M. on January 1, 2000, arrived and the digits rolled over, a program would think it was 1900 and would crash or go haywire (presumably because it would divide some number by the difference between what it thought was the current year and the year 1900, namely zero, though why a program would do this was never made clear). At that moment, bank balances would be wiped out, elevators would stop between floors, incubators in maternity wards would shut off, water pumps would freeze, planes would fall from the sky, nuclear power plants would melt down, and ICBMs would be launched from their silos.
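
  A hypothetical reconstruction (mine, not drawn from any actual mainframe program) of that feared arithmetic failure: the year is stored as two digits with an implicit “19” prefix, and some quantity is divided by the number of years elapsed since 1900, a divisor that becomes zero at the rollover.

    # Hypothetical sketch of the feared Y2K division bug; not real
    # mainframe code. Years are two digits, implicitly prefixed with "19".
    def average_per_year(total, current_yy):
        current_year = 1900 + current_yy  # the digits "00" are read as 1900
        years_elapsed = current_year - 1900
        return total / years_elapsed      # ZeroDivisionError at the rollover

    average_per_year(1000.0, current_yy=0)  # midnight, January 1, 2000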

  And these were the hardheaded predictions from tech-savvy authorities (such as President Bill Clinton, who warned the nation, “I want to stress the urgency of the challenge. This is not one of the summer movies where you can close your eyes during the scary part”). Cultural pessimists saw the Y2K bug as comeuppance for enthralling our civilization to technology. Among religious thinkers, the numerological link to Christian millennialism was irresistible. The Reverend Jerry Falwell declared, “I believe that Y2K may be God’s instrument to shake this nation, humble this nation, awaken this nation and from this nation start revival that spreads the face of the earth before the Rapture of the Church.” A hundred billion dollars was spent worldwide on reprogramming software for Y2K Readiness, a challenge that was likened to replacing every bolt in every bridge in the world.

  As a former assembly language programmer I was skeptical of the doomsday scenarios, and fortuitously I was in New Zealand, the first country to welcome the new millennium, at the fateful moment. Sure enough, at 12:00 A.M. on January 1, nothing happened (as I quickly reassured family members back home on a fully functioning telephone). The Y2K reprogrammers, like the elephant-repellent salesman, took credit for averting disaster, but many countries and small businesses had taken their chances without any Y2K preparation, and they had no problems either. Though some software needed updating (one program on my laptop displayed “January 1, 19100”), it turned out that very few programs, particularly those embedded in machines, had both contained the bug and performed furious arithmetic on the current year. The threat turned out to be barely more serious than the lettering on the sidewalk prophet’s sandwich board. The Great Y2K Panic does not mean that all warnings of potential catastrophes are false alarms, but it reminds us that we are vulnerable to techno-apocalyptic delusions.
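
  The “January 1, 19100” display, incidentally, has a well-understood cause, sketched below under one assumption (that the offending program followed the C convention, in which localtime() reports the year as a count of years since 1900):

    # Sketch of the "19100" bug: programs that displayed the year by
    # pasting "19" in front of a tm_year-style count (years since 1900)
    # printed "19100" once the count reached 100 in the year 2000.
    tm_year = 2000 - 1900                  # C-style offset: 100
    print("January 1, 19" + str(tm_year))  # -> "January 1, 19100"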

  * * *

  How should we think about catastrophic threats? Let’s begin with the greatest existential question of all, the fate of our species. As with the more parochial question of our fate as individuals, we assuredly have to come to terms with our mortality. Biologists joke that to a first approximation all species are extinct, since that was the fate of at least 99 percent of the species that ever lived. A typical mammalian species lasts around a million years, and it’s hard to insist that Homo sapiens will be an exception. Even if we had remained technologically humble hunter-gatherers, we would still be living in a geological shooting gallery.13 A burst of gamma rays from a supernova or collapsed star could irradiate half the planet, brown the atmosphere, and destroy the ozone layer, allowing ultraviolet light to irradiate the other half.14 Or the Earth’s magnetic field could flip, exposing the planet to an interlude of lethal solar and cosmic radiation. An asteroid could slam into the Earth, flattening thousands of square miles and kicking up debris that would black out the sun and drench us with corrosive rain. Supervolcanoes or massive lava flows could choke us with ash, CO2, and sulfuric acid. A black hole could wander into the solar system and pull the Earth out of its orbit or suck it into oblivion. Even if the species manages to survive for a billion more years, the Earth and solar system will not: the sun will start to use up its hydrogen, become denser and hotter, and boil away our oceans on its way to becoming a red giant.

  Technology, then, is not the reason that our species must someday face the Grim Reaper. Indeed, technology is our best hope for cheating death, at least for a while. As long as we are entertaining hypothetical disasters far in the future, we must also ponder hypothetical advances that would allow us to survive them, such as growing food under lights powered with nuclear fusion, or synthesizing it in industrial plants like biofuel.15 Even technologies of the not-so-distant future could save our skin. It’s technically feasible to track the trajectories of asteroids and other “extinction-class near-Earth objects,” spot the ones that are on a collision course with the Earth, and nudge them off course before they send us the way of the dinosaurs.16 NASA has also figured out a way to pump water at high pressure into a supervolcano and extract the heat for geothermal energy, cooling the magma enough that it would never blow its top.17 Our ancestors were powerless to stop these lethal menaces, so in that sense technology has not made this a uniquely dangerous era in the history of our species but a uniquely safe one.

  For this reason, the techno-apocalyptic claim that ours is the first civilization that can destroy itself is misconceived. As Ozymandias reminds the traveler in Percy Bysshe Shelley’s poem, most of the civilizations that have ever existed have been destroyed. Conventional history blames the destruction on external events like plagues, conquests, earthquakes, or weather. But David Deutsch points out that those civilizations could have thwarted the fatal blows had they had better agricultural, medical, or military technology:

  Before our ancestors learned how to make fire artificially (and many times since then too), people must have died of exposure literally on top of the means of making the fires that would have saved their lives, because they did not know how. In a parochial sense, the weather killed them; but the deeper explanation is lack of knowledge. Many of the hundreds of millions of victims of cholera throughout history must have died within sight of the hearths that could have boiled their drinking water and saved their lives; but, again, they did not know that. Quite generally, the distinction between a “natural” disaster and one brought about by ignorance is parochial. Prior to every natural disaster that people once used to think of as “just happening,” or being ordained by gods, we now see many options that the people affected failed to take—or, rather, to create. And all those options add up to the overarching option that they failed to create, namely that of forming a scientific and technological civilization like ours. Traditions of criticism. An Enlightenment.18

  * * *

  Prominent among the existential risks that supposedly threaten the future of humanity is a 21st-century version of the Y2K bug. This is the danger that we will be subjugated, intentionally or accidentally, by artificial intelligence (AI), a disaster sometimes called the Robopocalypse and commonly illustrated with stills from the Terminator movies. As with Y2K, some smart people take it seriously. Elon Musk, whose company makes artificially intelligent self-driving cars, called the technology “more dangerous than nukes.” Stephen Hawking, speaking through his artificially intelligent synthesizer, warned that it could “spell the end of the human race.”19 But among the smart people who aren’t losing sleep are most experts in artificial intelligence and most experts in human intelligence.20

  The Robopocalypse is based on a muzzy conception of intelligence that owes more to the Great Chain of Being and a Nietzschean will to power than to a modern scientific understanding.21 In this conception, intelligence is an all-powerful, wish-granting potion that agents possess in different amounts. Humans have more of it than animals, and an artificially intelligent computer or robot of the future (“an AI,” in the new count-noun usage) will have more of it than humans. Since we humans have used our moderate endowment to domesticate or exterminate less well-endowed animals (and since technologically advanced societies have enslaved or annihilated technologically primitive ones), it follows that a supersmart AI would do the same to us. Since an AI will think millions of times faster than we do, and use its superintelligence to recursively improve its superintelligence (a scenario sometimes called “foom,” after the comic-book sound effect), from the instant it is turned on we will be powerless to stop it.22

 
