By this point, Turner was starting to get truly annoyed with his children and their entreaties. He’d already stopped watching the weather on TV. “I don’t think he even knew the name of the storm,” Williams says. It was around then that he took his phone off the hook.
Blind Spots
About 80 percent of New Orleans’s population got out before the storm—a huge success compared with previous evacuations there and around the country. The vast majority of people navigated through the denial and deliberation phases and took action. But what happened to the remaining 20 percent? The consensus in most media reports was that people were simply too poor to leave. And it’s true that the more resources you have, the more choices you have about how to evacuate and where to go. About 21 percent of New Orleans households were carless when Katrina hit, according to the Census Bureau.
But poverty does not explain what happened in New Orleans. An analysis of 486 Katrina victims by Knight Ridder Newspapers found that they were not disproportionately poor—or black. Michael Lindell, director of the Hazard Reduction and Recovery Center at Texas A&M University, has studied scores of evacuations, and he says people’s behavior defies simple explanations. “If you’re looking at 100% of the variance in evacuation behavior, income accounts for no more than 5–10 percentage points,” he says. “What really accounts for the differences are people’s beliefs.”
Why wouldn’t Patrick Turner leave? Turner had an old Chevrolet and a family full of people with cars headed out of town. In New Orleans, most people knew much of the city lay below sea level. In July 2002, the New Orleans Times-Picayune ran a five-part series on the inevitable. “It’s a matter of when, not if,” wrote reporters John McQuaid and Mark Schleifstein about a hurricane decimating the city. “It’s happened before and it’ll happen again.” They described a precarious levee system and flooding that could kill thousands.
In hindsight, it’s always easy to craft a narrative for any disaster: to see all the signs stacking up like dominoes, if only we’d been paying attention. But that’s not what happened with Hurricane Katrina. It was that most unusual of fiascoes: almost nothing was a surprise. “This was not a comet hitting us,” says Stephen Leatherman, director of the International Hurricane Research Center in Miami. “This is Hurricane Alley.” Leatherman has studied hurricanes for thirty years. In 2002, he wrote a paper warning that Louisiana had lost many of its natural defenses against storms and New Orleans was particularly vulnerable. When we spoke just days after landfall, while tens of thousands of people remained stuck at the Superdome in New Orleans, he sounded sick with vindication. “You do all these computer models, but [now] you have a human face on it,” he said quietly. “It’s something. It really hurts.”
We gauge risk literally hundreds of times per day, usually well and often subconsciously. For more predictable calamities, the first phase of disaster think actually begins with this calculus. We start assessing risk before the disaster even happens. We are doing it right now. We decide where to live and what kind of insurance to buy, just like we process all kinds of everyday risks: we wear bike helmets, or not. We buckle our seat belts, smoke a cigarette, and let our kids stay out until midnight. Or not.
To deconstruct how we place these bets, I called Nassim Nicholas Taleb, a man obsessed with risk. Taleb spent twenty years as a trader in New York and London, earning money off other people’s blind spots. While other traders indulged in big short-term risks in hopes of big short-term gains, Taleb set up his investments so that he could never win big—nor lose big. He was hedged every which way. “I never have blown up, and I never will,” he likes to say.
One autumn day, Taleb and I met for tea in Washington, D.C. Taleb, who has a balding head and a gray beard, is an author and a professor now, in addition to holding a large stake in a hedge fund. He likes to do many things at once, and he speaks so quickly that it is sometimes hard to keep up. That afternoon, he had come from the Pentagon, where he had briefed officials on his theories about uncertainty. The Pentagon was a strange place for him to be, since Taleb is a self-described pacifist. But he’s the kind of pacifist the Pentagon can tolerate—which is to say, the stoic kind. “I am a peace activist simply out of rationality,” he explains.
Taleb grew up in Lebanon, a country haunted by war’s unintended consequences. He has concluded that human beings are unable to handle war in the modern age. “We’re not really able to assess how long wars will take and what the net outcome will be.” The risk is too complex for our abilities. Once upon a time, we were better at war. “In a primitive environment, if someone is threatening me, I go kill him,” he says in his clipped, matter-of-fact way. “And I get good results most of the time.” He calls this environment “Mediocristan,” a place where it is hard to kill many people at once; a place where cause and effect are more closely connected. Homo sapiens spent hundreds of thousands of years living in Mediocristan. We rarely needed to understand probability because, most of the time, life was simpler, and the range of possible events was narrower.
But today, we live in a place Taleb calls “Extremistan,” subject to the “tyranny of the singular, the accidental, the unseen and the unpredicted.” Technology has allowed us to create weaponry that can strafe the planet in minutes. Lone individuals can alter the course of history. People kill each other every day without much physical exertion. And, at the same time, we have become ever more interdependent. What happens on one continent now has consequences for another. World War I, Taleb points out, was expected to be a rather small affair. So was Vietnam. In fact, the twentieth century was characterized by wars of unforeseen results, and the twenty-first is already following suit. America’s war in Iraq was certainly not intended to create more terrorists bent on attacking the United States. But that is what happened, as a U.S. National Intelligence Estimate, the consensus judgment of the government’s intelligence agencies, concluded in April 2006.
Risk is often counterintuitive in Extremistan. Our old tricks don’t work. For example, just like Turner, many of Louisiana’s oldest residents had survived Hurricane Betsy in 1965. They had also survived Hurricane Camille, a category 5 storm that struck in 1969. Turner rode out both storms without a problem. So he saw no reason to leave for Katrina. He hunkered down in denial.
As it turned out, the veteran Louisianans were half right: Katrina was indeed less powerful than Camille. Had the world stood still since then, they would have been just fine. In Mediocristan, they would have survived.
But since Camille, rapid development had destroyed much of the wetlands that had created a natural barrier against storm surge. The force field, in other words, was down. Humankind had literally changed the shape of the earth, and we had done it faster, thanks to technology, than we could have throughout most of history. This fact was well reported in popular media. But the firsthand experience of Camille was more powerful than any warning.
As it turned out, the victims of Katrina were not disproportionately poor; they were disproportionately old. Three-quarters of the dead were over sixty, according to the Knight Ridder analysis. Half were over seventy-five. They had been middle-aged when Hurricane Camille struck. “I think Camille killed more people during Katrina than it did in 1969,” says Max Mayfield, director of the National Hurricane Center. “Experience is not always a good teacher.”
After Katrina, a poll of 680 New Orleans residents asked why they had not evacuated before the storm. The respondents could give multiple explanations. A slim majority did indeed cite a lack of transportation. But that was not the biggest reason. The most popular explanation, given by 64 percent, was that they did not think the storm would be as bad as it was. In fact, in retrospect, half of those who hadn’t evacuated said that they could have found a way to leave if they had really wanted to, according to the study, conducted for the Henry J. Kaiser Family Foundation and the Washington Post. Motivation, in other words, mattered more than transportation.
A Baseball Bat and a Crucifix
At 7:00 A.M. on Monday, August 29, Katrina made landfall in Louisiana with winds of up to 140 mph. At 9:00 A.M., Turner’s children dialed his number again. Sometime before then, as the storm screamed by his window, he’d put his phone back on the hook.
Turner answered the phone. “It’s real windy,” he told his son. The electricity was out. And he was worried about the big tree in his backyard. Then he said something he rarely ever said: “I think I made a mistake.”
His son told him to hang in there. They’d drive out to get him as soon as they could. “My daddy was in very, very good health. No pacemakers, no surgery, nothing,” says Williams. “We figured as soon as they’d cleared the roadways, we could get him.” They hung up.
But then the floodwaters came, breaching the levees in half a dozen places and charging through the streets. Then the five-mile bridge that crosses Lake Pontchartrain broke into pieces, cutting Turner off from his children. And finally, the phones went out for good.
Turner’s neighborhood, like much of New Orleans, was in a bowl. Water poured in from the lake, rising to five feet in his one-story house. All of his possessions—the photographs, the Santa suit, all the reminders of his wife, who had died three years before—everything was sinking. Turner pulled the stairs down and went up into his attic. He brought up a gallon of water, a bucket, and two candles.
For nine days, the phones stayed down and the roads remained impassable. All of Turner’s children except Williams had lost their homes. They were desperate to get to their father, but they could not. Finally, the phones came back on and Williams made a frantic call to a radio station. She pleaded for someone, anyone, to go check on her father. Three hours later, she got a call from rescue officials. They had found her father in the attic, with a baseball bat and the crucifix he kept by his bed. He was dead at age eighty-five, apparently killed by a heart attack. Time of death was unknown.
In those early, chaotic days, rescue personnel were under orders to prioritize bodies that were in the water. Turner was not in the water, so it would be two weeks before they took his body away. About a month after the storm, Williams went to the house. She found the Santa suit hanging in her father’s bedroom closet, in its normal place. It had gotten wet, along with everything else, but her brother decided to hang it outside of the house as a reminder to those who passed that this had been the little holiday house. “We wanted people to see it,” Williams says. “I don’t know. When people passed by, maybe people who knew him as Santa Claus or whatever, would remember.”
In the confusion that followed the storm, the authorities lost Turner’s body. For five months, his family tried to find him. Morgue workers called Williams repeatedly to describe the bodies of dead men, none of whom were her father. “I kept telling them, ‘He doesn’t have a tattoo!’” Five months after he died, Turner’s body was found again and handed over to his family.
When we spoke a year and a half after the storm, Williams was having trouble forgiving her father. “It makes me so mad,” she said. “It didn’t have to happen. I took such good care of him for him to do something like that.” Since his death her family has not been nearly as close, she says. She wonders if they will ever reconnect. She agreed to be interviewed for this book because, she said, she wants other people to know how one decision can make all the difference.
Turner was nobody’s fool; he had accumulated a lot of wisdom in his long life. When Katrina came, he made a trade-off that is more complicated than it looks. As I came to know Turner through his daughter, I wanted to know more about his decision. Why had his risk calculus failed him this time—after working so well for so long? Could we predict these kinds of blind spots in our own risk equations? And if so, couldn’t we overcome them?
The Science of Risk
How are you most likely to die? Think for a moment: Given your own profile, what do you really think is most likely to kill you?
The facts depend upon your age, genetics, lifestyle, location, and a thousand other factors, of course. But overall, here are the leading causes of death in the United States:
1. Heart disease
2. Cancer
3. Stroke
Now ask yourself whether these most-likely scenarios are also the ones you worry about more than any other. Are these the risks you actively work hardest to avoid? Do you start each day with twenty minutes of meditation? Do you work out for at least thirty minutes a day? When you swim in the ocean, are you more terrified of getting sunburned than you are of getting bitten by a shark?
The human brain worries about many, many things before it worries about probability. If we really were concerned only with preventing the most likely causes of death, we would worry more about falling down than about homicide. The nightly news would feature back-to-back segments on tragic heart-attack deaths. And we might spend more money on therapists than on police (you are twice as likely to kill yourself as you are to be killed by someone else during your lifetime). It’s as if we don’t fear death itself so much as dying. We fear the how, not so much the what.
Curiously, we have only recently begun to understand how we process risk. For centuries, philosophers and especially economists assumed that people were rational creatures—if not individually, then certainly overall. To measure risk, it was thought, humans simply multiplied the probability of something happening by the consequences of it happening.
It took two psychologists to point out that this was simply not true. In the 1970s and 1980s, Daniel Kahneman and Amos Tversky published a series of revolutionary papers on human decision making. They explained that people rely on emotional shortcuts, called “heuristics,” to make choices. The more uncertainty, the more shortcuts. And the shortcuts, while very useful, lead to a slew of predictable errors. For example, in one study, they found that a majority of subjects judged a deadly flood triggered by a California earthquake to be more likely than an equally deadly flood occurring somewhere else in North America on its own. The notion of a California earthquake resonated more than the prospect of a flood—and so it was assigned a higher probability by the people in the study.
In fact, the chances of a flood occurring for some other reason are far greater. But that kind of workaday flood—the kind that kills people every year—does not trigger the same cascading series of emotional shortcuts. It is less scary for a reason, but the reason is not a rational one.
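Put in the formula language the researchers themselves favored, the subjects were in effect judging

Probability (earthquake-caused flood) > Probability (ordinary flood)

when the true ordering runs the other way: ordinary floods, from heavy rain and failed dams, kill people somewhere in North America nearly every year, while earthquake-triggered floods are vanishingly rare. The vivid scenario borrowed the probability of the dull one.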
At first, Kahneman and Tversky were labeled pessimists. At a time when most Americans were enchanted by technology, they had concluded that people were in fact irrational. They were attacked for exaggerating the flaws of the human brain. More than one critic pointed to the fact that man had walked on the moon. How could a species that had made it to the moon be plagued by irrationality? But their work forever altered the study of risk. In 2002, six years after Tversky’s death, Kahneman was awarded the Nobel Prize in Economics for their work.
Today, most people who study decision making agree that human beings are not rational. “We don’t go around like risk assessors—doing calculations, multiplying probabilities. That’s been disproved,” says Paul Slovic, a psychology professor at the University of Oregon and one of the world’s most respected experts on risk. Instead, people rely on two different systems: the intuitive and the analytical. The intuitive system is automatic, fast, emotional, and swayed heavily by experiences and images. The analytical system is the ego to the brain’s id: logical, contemplative, and pragmatic.
One system can override the other, depending on the situation. For example, consider this question:
A coffee and a donut cost $1.10 in total. The coffee costs $1 more than the donut. How much does the donut cost?
If your first answer was ten cents, that’s your intuitive system talking. If you then caught yourself and came to the correct answer (five cents), that’s your analytical system policing your intuition.
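To see why five cents is right, write the puzzle as two small equations, with c for the coffee and d for the donut:

c + d = $1.10
c = d + $1.00

Substituting the second equation into the first gives (d + $1.00) + d = $1.10, so 2d = $0.10 and d = $0.05. The coffee costs $1.05, and the pair together cost $1.10, just as the problem requires.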
Notice how deft the intuitive system is! It moved at lightning speed, and if the question were a mountain lion about to lunge at your throat, it might have saved your life—or at least distracted the lion for a minute.
But it was also wrong. And this is where we come to the truth-telling moment: we all make mistakes when we judge risk. Our risk formula, especially when it comes to disasters, almost never looks this rational:
Risk = Probability × Consequence
No, if we could reduce our risk calculation to a simple formula, it might look more like this:
Risk = Probability × Consequence × Dread/Optimism
Dread. Rarely does a label used by scientists so aptly fit the emotion it describes. Think of dread as humanity in a word. It represents all of our evolutionary fears, hopes, lessons, prejudices, and distortions wrapped up in one dark X factor.
After talking about dread with risk experts, I started to imagine it as a sum of many other, powerful factors. Dread had its own equation. Each factor in the equation could raise or reduce the sensation of dread, depending on the situation. It seemed important to break dread into its parts in order to understand its imperfections. So here, with apologies to those experts for reducing their findings to a formula, is what I think the equation for dread might look like:
Dread = Uncontrollability + Unfamiliarity + Imaginability + Suffering + Scale of Destruction + Unfairness
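To make the arithmetic of these two formulas concrete, here is a toy sketch in Python. Everything in it, the zero-to-one scales, the equal weighting of the six dread factors, and the sample scores, is my own illustrative assumption; no researcher I spoke with would endorse numbers this tidy.

```python
# Toy model only: the 0-to-1 scales, equal weights, and sample scores
# are illustrative assumptions, not findings from the risk literature.

def dread(uncontrollability, unfamiliarity, imaginability,
          suffering, scale_of_destruction, unfairness):
    """Sum the six dread factors, each scored from 0 (none) to 1 (extreme)."""
    return (uncontrollability + unfamiliarity + imaginability +
            suffering + scale_of_destruction + unfairness)

def felt_risk(probability, consequence, dread_score):
    """The 'rational' product of probability and consequence,
    inflated or deflated by the dread multiplier."""
    return probability * consequence * dread_score

# A shark attack: vanishingly improbable, but it scores high on
# uncontrollability, imaginability, and suffering.
shark = felt_risk(probability=0.0000001, consequence=1.0,
                  dread_score=dread(0.9, 0.8, 1.0, 0.9, 0.1, 0.3))

# Heart disease: the likeliest killer, but familiar, gradual, and
# seemingly within our control.
heart = felt_risk(probability=0.3, consequence=1.0,
                  dread_score=dread(0.3, 0.1, 0.2, 0.4, 0.1, 0.1))

# Dread quadruples the shark's felt risk relative to the bare odds
# (multiplier 4.0) while barely touching heart disease (1.2).
print(f"shark: {shark:.9f}  heart: {heart:.3f}")
```

Even in this cartoon version, the point survives: dread does not change the odds, it changes the feeling of the odds. An optimism term would work the same way in reverse, shrinking the multiplier below one. That gap is exactly what leaves us braced for sharks and unprepared for heart attacks.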