The Doomsday Handbook


by Alok Jha


  Even today, talking across different countries can be difficult, never mind thinking about thousands of years in the future. Red means danger in one culture but luck in another; a creepy-crawlie might be dirty or scary in one place but be seen as a tasty snack in another. The scientists’ solution to the nuclear conundrum is to use facial expressions: an image adapted from the famous painting The Scream by Edvard Munch is thought to mean the same thing to everyone now and (hopefully) in the future too.

  * * *

  What about all that genome-sequencing data that we have been patiently collecting for the past few decades? How about the blueprints to make aircraft, computer circuits and radio transmitters? What about our great novels and the histories of what happened while we were alive?

  * * *

  Scientific legacy

  Maintaining archives of data for the future has only recently become a priority for scientific institutions. And not just for curiosity’s sake: in future, scientists will need access to raw data so that they can run brand-new analyses, test new theories, or hunt for evidence of scientific fraud in old results.

  Take the LHC in Geneva. It will generate almost 500 million gigabytes of data in its 15-year run of experiments, and that data will be stored on disks and tapes and distributed to the world’s scientists via a “grid” of 100,000 computers. The information-technology project to collect and manage all this data at CERN has been decades in the planning, but there is little idea of how the information will be kept in storage beyond the lifetime of the LHC itself, which will shut down early in the 2020s.

  One way to keep data more resilient for longer is to copy it from one disk or tape to another. However, doing that systematically for immense data sets (such as the one from the LHC or from gene-sequencing projects) increases the risk of errors creeping into the information.
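The standard defense against such copy errors is to compare checksums of the source and the copy, so that silent corruption is caught rather than propagated into the archive. Below is a minimal, illustrative sketch of that idea in Python; the function names are my own and stand in for whatever verification pipeline a real archive such as CERN’s grid would use.

```python
import hashlib
import os
import shutil
import tempfile

def sha256_of(path, chunk_size=1 << 20):
    """Hash a file in fixed-size chunks so even very large archives fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def verified_copy(src, dst):
    """Copy src to dst, then re-hash both sides to detect corruption in transit."""
    shutil.copyfile(src, dst)
    if sha256_of(src) != sha256_of(dst):
        raise IOError(f"checksum mismatch copying {src} -> {dst}")

# Demo: copy a small file and confirm the two checksums agree.
tmp = tempfile.mkdtemp()
src = os.path.join(tmp, "data.bin")
dst = os.path.join(tmp, "data_copy.bin")
with open(src, "wb") as f:
    f.write(b"genome sequence fragment")
verified_copy(src, dst)
```

Verifying after every hop is what makes repeated migration between disks and tapes survivable: an error introduced by any single copy is detected immediately instead of accumulating across generations of media.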

  Individuals also copy files to and from personal computers. It may well be the case that in the future, there is more chance of our descendants coming across a Michael Jackson album on a historical hard disk than the latest papers on general relativity or details about how to extract metals from the Earth or make important drugs.

  The other failsafe is an old one: store more information on paper. The oldest surviving book, found in a cave in China, dates back to the 9th century AD. If books are kept in stable conditions and away from hungry pests, there is no reason why they (and their information) cannot survive for a thousand years or more.

  A more systematic approach has been thought up by the Long Now Foundation, a California-based organization. Its alternative to books is the “Rosetta Disk,” which is made from nickel and holds descriptions of 1,000 languages. On one side it is etched with text that starts off at readable size and then goes down to the nanoscale (billionths of a meter across). On the reverse of the disk are up to 14,000 pages of text, viewable with an enclosed magnifying glass. The foundation reckons that the disk could remain legible for thousands of years.

  We revel in information today. The ease with which our lives are connected to each other and to the vast resources of the World Wide Web has given us a sense of security that perhaps it will always be like this. What we keep in our memories has changed in this electronic world, and not necessarily for the worse—we have access to much more knowledge today than our parents and grandparents ever had. But as we change how we remember, we need to be aware of (and prepare for) what it would be like to suddenly forget.

  Unknown Unknowns

  * * *

  We don’t know what we don’t know. The geniuses of the past would never have guessed at nuclear war or climate change. And the geniuses of today cannot know what knowledge (and the allied risks) will emerge in the future.

  * * *

  Five months after the terrorist attack on September 11, 2001, had claimed thousands of lives at the World Trade Center in New York and the Pentagon in Virginia, US secretary of defense Donald Rumsfeld was speaking to members of the press about the various threats facing America. He had been talking specifically about the regime in Iraq when a reporter put up their hand and the following exchange took place:

  Q: In regard to Iraq, weapons of mass destruction and terrorists, is there any evidence to indicate that Iraq has attempted to or is willing to supply terrorists with weapons of mass destruction? Because there are reports that there is no evidence of a direct link between Baghdad and some of these terrorist organizations.

  Rumsfeld: Reports that say that something hasn’t happened are always interesting to me, because as we know, there are known knowns; there are things we know we know. We also know there are known unknowns; that is to say we know there are some things we do not know. But there are also unknown unknowns—the ones we don’t know we don’t know. And if one looks throughout the history of our country and other free countries, it is the latter category that tend to be the difficult ones.

  Rumsfeld was given a lot of stick for his “unknown unknowns” remark. A tautology, perhaps? Management-speak bent to the worst degree? Not quite. When assessing danger and catastrophe, no conversation can be truly complete without considering the unknown. And within that unknown, it would be the height of arrogance and hubris not to admit that there are things you know you don’t know and, inevitably, any number of things that you don’t know you don’t know.

  “We need a catch-all category,” says philosopher Nick Bostrom. “It would be foolish to be confident that we have already imagined and anticipated all significant risks. Future technological or scientific developments may very well reveal novel ways of destroying the world.”

  Predicting the unpredictable

  Take the most learned people during the Enlightenment period in western Europe—Isaac Newton, say, Francis Bacon or Bishop George Berkeley—and imagine asking them how they thought the world might come to an end. There would be tales of divine intervention (Newton believed that doomsday would happen in the 21st century, calculated from clues in the Bible), and someone might offer the idea of a bloody war causing so many casualties that nations would suffer and wither away. There might be serious consideration of other fantastical theories too, but none of these clever people could have told you about the doomsday potential of a nuclear bomb. Or an asteroid strike. Or rising sea levels due to climate change.

  Three hundred years ago, we knew less science and much less about the world and universe around us. With knowledge comes power and, inevitably, the understanding that all around us there is new danger.

  With cosmic threats to our existence, the danger has been there all along; it just took us some time to notice it: the collision of our galaxy with Andromeda, for example, or the arrival of a black hole. Common to all cosmic threats is that there is very little we can do about them even when we know the danger exists, except to try and work out how to survive the aftermath.

  Other threats are a matter of self-control. These are dangers we have made for ourselves by applying our brains, working out how to manipulate the world and using that knowledge for our own ends—nuclear weapons and climate change are good examples. In both cases, any risks have been caused by, and can also be prevented by, the action of people.

  Applying our brains is also where new, unforeseen problems for the world could turn up. How do we start to think about this, though?

  One group used to working out and predicting the unthinkable is the military. At the Pentagon, commanders talk about “disruptive threats”—weapons or tactics that come from nowhere to tilt the balance of power during combat. These have included the earlier-than-expected detonation of the first Soviet nuclear bomb (in 1949) and the launch of Sputnik 1, the world’s first orbiting satellite. Pentagon and White House analysts have identified future disruptive threats as, for example, weapons that could use “biotechnology, cyber and space operations, or directed energy weapons” or ones that could reliably shoot down missiles and warplanes from an invading force.

  In the information age, predicting the next source of big problems becomes even more complex. The US might have worried about nuclear armageddon in the past, but it always knew that it would take the Soviets lots of manpower and years of testing to get a sizeable arsenal of warheads together. Today’s wars can happen online and they can happen fast; there’s no waiting around to build and test expensive new weapons. In the online doomsday scenario, the military might not find out they are being attacked until it is too late to do anything about it.

  Black Swans

  There is another way to think about unknown unknowns. In his book The Black Swan, the scholar of risk and former trader Nassim Nicholas Taleb wrote about the idea of big surprises that end up having major impacts. He highlighted the importance, through history, of events that were hard to predict, had a huge impact, were rare and which went beyond the normal expectations of history, science or economics. Included in this list of so-called Black Swan events are the rise of the internet, the First World War and the 9/11 terrorist attacks. Taleb also includes almost every scientific discovery and major artistic achievement.

  “Before the discovery of Australia, people in the old world were convinced that all swans were white, an unassailable belief as it seemed completely confirmed by empirical evidence,” he wrote. “The sighting of the first black swan might have been an interesting surprise for a few ornithologists (and others extremely concerned with the coloring of birds), but that is not where the significance of the story lies. It illustrates a severe limitation to our learning from observations or experience and the fragility of our knowledge. One single observation can invalidate a general statement derived from millennia of confirmatory sightings of millions of white swans. All you need is one single (and, I am told, quite ugly) black bird.”

  He generalizes the Black Swan event as something with three attributes. First, it is an outlier, something beyond our normal expectations and for which the past is not a reliable guide. Second, it has an extreme impact. And third, despite its outlier status, human nature concocts explanations for it after the fact, so that it seems explainable and predictable in hindsight.

  “A small number of Black Swans explain almost everything in our world, from the success of ideas and religions, to the dynamics of historical events, to elements of our own personal lives,” says Taleb. “Ever since we left the Pleistocene, some ten millennia ago, the effect of these Black Swans has been increasing. It started accelerating during the industrial revolution, as the world started getting more complicated, while ordinary events, the ones we study and discuss and try to predict from reading the newspapers, have become increasingly inconsequential.”

  * * *

  Isn’t it strange to see an event happening precisely because it was not supposed to happen? What kind of defense do we have against that?

  * * *

  Think of 9/11, he says. Had the risk of an atrocity been reasonably conceivable the day before, the whole thing would not have happened. “If such a possibility were deemed worthy of attention, fighter planes would have circled the sky above the twin towers, airplanes would have had locked bulletproof doors, and the attack would not have taken place, period. Something else might have taken place. What? I don’t know. Isn’t it strange to see an event happening precisely because it was not supposed to happen? What kind of defense do we have against that? Whatever you come to know (that New York is an easy terrorist target, for instance) may become inconsequential if your enemy knows that you know it. It may be odd to realize that, in such a strategic game, what you know can be truly inconsequential.”

  This also works in businesses and scientific ideas—any “secret recipe” for setting up a successful restaurant would spread like wildfire and everyone in the street would be using it; the real big idea would be something that most restaurateurs had not yet conceived of. “The same applies to the shoe and the book businesses—or any kind of entrepreneurship. The same applies to scientific theories—nobody has interest in listening to trivialities. The payoff of a human venture is, in general, inversely proportional to what it is expected to be.”

  If the Pacific tsunami of December 2004 had been expected, it would not have caused the damage it did, says Taleb. “The areas affected would have been less populated, an early warning system would have been put in place. What you know cannot really hurt you.”

  Despite that, we all behave as though we are somehow able to predict events based on our knowledge and history. “We produce thirty year projections of social security deficits and oil prices without realizing that we cannot even predict these for next summer—our cumulative prediction errors for political and economic events are so monstrous that every time I look at the empirical record I have to pinch myself to verify that I am not dreaming. What is surprising is not the magnitude of our forecast errors, but our absence of awareness of it.”

  One answer is to become aware of our inability to predict everything and be ready for something unpredictable to happen. The left-field Black Swan events of the past are not only shocking and impactful, they tell us a lot about human behavior, about how we learn from history. When asking the question about unforeseen things that might bring the world to its knees, perhaps we need to step back and think about the question in a more reflexive way.

  “We do not spontaneously learn that we don’t learn that we don’t learn. The problem lies in the structure of our minds: we don’t learn rules, just facts, and only facts. Metarules (such as the rule that we have a tendency to not learn rules) we don’t seem to be good at getting. We scorn the abstract; we scorn it with passion,” says Taleb.

  This behavior lies deep in our animal past—being thoughtful and introspective is no use to an animal on the savannah of early Africa if all he needs to do is notice and run away from lions. “Consider that thinking is time-consuming and generally a great waste of energy, that our predecessors spent more than a hundred million years as nonthinking mammals and that in the blip in our history during which we have used our brain we have used it on subjects too peripheral to matter. Evidence shows that we do much less thinking than we believe we do—except, of course, when we think about it.”

  There is no neat answer to expecting the unexpected, no single method. But taking a lesson from Taleb, it is better to know that and use the information. In thinking about the unexpected ways in which the world could end, therefore, the past is no guide at all.

  INDEX

  Page numbers in italics denote an illustration

  A

  ADHD drugs 37

  Afghanistan 98

  Africa

  desertification 93

  water stress 102–3

  agriculture 95

  advances in 41

  combating of desertification by 94

  and greenhouse gas emissions 60

  importance of bees to 80, 81–2

  Aichi targets 262

  al-Qaeda 34

  Aleksander, Igor 73

  Alexander, General Keith 54–5, 57

  algae 129

  aliens see extraterrestrials

  Allen Telescope Array 212

  Amflora 60

  Andromeda

  collision with Milky Way scenario 219, 221, 222

  animal viruses 16–17

  anoxic event 154–7

  Antarctica 113, 134

  anthrax 31, 33

  Antoniu, Michael 61

  Apophis (asteroid) 138

  Arctic 112–13

  Arsenault, Louise 38

  artificial hand 77

  artificial intelligence/super intelligence 69–74, 77

  Asian flu epidemics (1957/1968) 14

  Asian long-horned beetle 87

  asteroids 137–41, 181, 188

  deflection methods against 139–41

  effects of a collision 139

  likelihood of a collision 141

  and mass extinctions 11, 138, 161, 167, 180

  Aum Shinrikyo 32, 32, 33

  Avian flu (H5N1) (2005) 14, 16

  B

  Balick, Bruce 216

  Ball, Philip 68

  banana Xanthomonas wilt 60

  Barnes, Joshua 219, 220, 221

  Barnosky, Anthony D. 11–12

  Beachy, Roger 59–60

  Beddington, John 171, 172, 173

  bees 80–4

  causes of decline 82–3

  contribution to global economy 80

  importance of to agriculture 80, 81–2

  killing of by pesticides 82, 83, 130

  killing of by varroa mite 82

  ways to combat decline 83–4

  beetles 87

  Bell, Robin 113, 115

  Big Bang 195, 202, 219

  biodiversity

  destruction of by deserts 93

  impact of invasive species on 87–8

  loss of 258–9

  biofuels, use of grain crops for 96, 97–8

  biological dirty bomb 31–3

  biotech disaster 58–63

  birth control 49

  birth rates, decline in 48

  bisphenol A 128

  Black Death 15, 49

  black holes 184–8, 187, 189, 225–6

  creation of by scientists on earth 202–6

  formation 185

  impact of on Earth 186–8

  number of 186
