The Undoing Project


by Michael Lewis


  It wasn’t just sports announcers and political pundits who radically revised their narratives, or shifted focus, so that their stories seemed to fit whatever had just happened in a game or an election. Historians imposed false order upon random events, too, probably without even realizing what they were doing. Amos had a phrase for this. “Creeping determinism,” he called it—and jotted in his notes one of its many costs: “He who sees the past as surprise-free is bound to have a future full of surprises.”

  A false view of what has happened in the past makes it harder to see what might occur in the future. The historians in his audience of course prided themselves on their “ability” to construct, out of fragments of some past reality, explanatory narratives of events which made them seem, in retrospect, almost predictable. The only question that remained, once the historian had explained how and why some event had occurred, was why the people in his narrative had not seen what the historian could now see. “All the historians attended Amos’s talk,” recalled Biederman, “and they left ashen-faced.”

  After he had heard Amos explain how the mind arranged historical facts in ways that made past events feel a lot less uncertain, and a lot more predictable, than they actually were, Biederman felt certain that his and Danny’s work could infect any discipline in which experts were required to judge the odds of an uncertain situation—which is to say, great swaths of human activity. And yet the ideas that Danny and Amos were generating were still very much confined to academia. Some professors, most of them professors of psychology, had heard of them. And no one else. It was not at all clear how two guys working in relative obscurity at Hebrew University could spread the word of their discoveries to people outside their field.

  In the early months of 1973, after their return to Israel from Eugene, Amos and Danny set to work on a long article summarizing their findings. They wanted to gather in one place the chief insights of the four papers they had already written and allow readers to decide what to make of them. “We decided to present the work for what it was: a psychological investigation,” said Danny. “We’d leave the big implications to others.” He and Amos both agreed that the journal Science offered them the best hope of reaching people in fields outside of psychology.

  Their article was less written than it was constructed. (“A sentence was a good day,” said Danny). As they were building it, they stumbled upon what they saw as a clear path for their ideas to enter everyday human life. They had been gripped by “The Decision to Seed Hurricanes,” a paper coauthored by Stanford professor Ron Howard. Howard was one of the founders of a new field called decision analysis. Its idea was to force decision makers to assign probabilities to various outcomes: to make explicit the thinking that went into their decisions before they made them. How to deal with killer hurricanes was one example of a problem that policy makers might use decision analysts to help address. Hurricane Camille had just wiped out a large tract of the Mississippi Gulf Coast and obviously might have done a lot more damage—say, if it had hit New Orleans or Miami. Meteorologists thought they now had a technique—dumping silver iodide into the storm—to reduce the force of a hurricane, and possibly even alter its path. Seeding a hurricane wasn’t a simple matter, however. The moment the government intervened in the storm, it was implicated in whatever damage that storm inflicted. The public, and the courts of law, were unlikely to give the government credit for what had not happened, for who could say with certainty what would have happened if the government had not intervened? Instead the society would hold its leaders responsible for whatever damage the storm inflicted, wherever it hit. Howard’s paper explored how the government might decide what to do—and that involved estimating the odds of various outcomes.

  But the way the decision analysts elicited probabilities from the minds of the hurricane experts was, in Danny and Amos’s eyes, bizarre. The analysts would present the hurricane seeding experts inside government with a wheel of fortune on which, say, a third of the slots were painted red. They’d ask: “Would you rather bet on the red sector of this wheel or bet that the seeded hurricane will cause more than $30 billion of property damage?” If the hurricane authority said he would rather bet on red, he was saying that he thought the chance the hurricane would cause more than $30 billion of property damage was less than 33 percent. And so the decision analysts would show him another wheel, with, say, 20 percent of the slots painted red. They did this until the percentage of red slots matched up with the authority’s sense of the odds that the hurricane would cause more than $30 billion of property damage. They just assumed that the hurricane seeding experts had an ability to correctly assess the odds of highly uncertain events.
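  (For readers curious about the mechanics, the back-and-forth with the wheels amounts to a bisection search for the expert’s subjective probability. The sketch below is only a minimal illustration of that procedure in Python; the function name elicit_probability and the prefers_red callback are invented for the example and are not anything the decision analysts actually used.)

    # Minimal sketch of the wheel-of-fortune elicitation described above.
    # At each step the expert is asked: "Would you rather bet on the red
    # sector of this wheel, or bet that the event occurs?" Preferring red
    # implies the expert thinks the event is less likely than the red
    # fraction, so the fraction is lowered; otherwise it is raised. The
    # loop narrows in on the point of indifference, which is read off as
    # the expert's subjective probability.

    def elicit_probability(prefers_red, lo=0.0, hi=1.0, tolerance=0.01):
        """prefers_red(red_fraction) -> True if the expert would bet on red."""
        while hi - lo > tolerance:
            red_fraction = (lo + hi) / 2
            if prefers_red(red_fraction):
                hi = red_fraction   # event judged less likely than red_fraction
            else:
                lo = red_fraction   # event judged more likely than red_fraction
        return (lo + hi) / 2

    # A hypothetical expert who privately believes the chance of more than
    # $30 billion in damage is about 22 percent:
    print(elicit_probability(lambda red_fraction: red_fraction > 0.22))  # ~0.22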

  Danny and Amos had already shown that people’s ability to judge probabilities was queered by various mechanisms used by the mind when it faced uncertainty. They believed that they could use their new understanding of the systematic errors in people’s judgment to improve that judgment—and, thus, to improve people’s decision making. For instance, any person’s assessment of probabilities of a killer storm making landfall in 1973 was bound to be warped by the ease with which they recalled the fresh experience of Hurricane Camille. But how, exactly, was that judgment warped? “We thought decision analysis would conquer the world and we would help,” said Danny.

  The leading decision analysts were clustered around Ron Howard in Menlo Park, California, at a place called the Stanford Research Institute. In the fall of 1973 Danny and Amos flew to meet with them. But before they could figure out exactly how they were going to bring their ideas about uncertainty into the real world, uncertainty intervened. On October 6, the armies of Egypt and Syria—with troops and planes and money from as many as nine other Arab countries—launched an attack on Israel. Israeli intelligence analysts had dramatically misjudged the odds of an attack of any sort, much less a coordinated one. The army was caught off guard. On the Golan Heights, a hundred or so Israeli tanks faced fourteen hundred Syrian tanks. Along the Suez Canal, a garrison of five hundred Israeli troops and three tanks was quickly overrun by two thousand Egyptian tanks and one hundred thousand Egyptian soldiers. On a cool, cloudless, perfect morning in Menlo Park, Amos and Danny heard the news of the shocking Israeli losses. They raced to the airport for the first flight back home, so that they might fight in yet another war.

  * * *

  * By the time they were finished with the project, they had dreamed up an array of hysterically bland characters for people to evaluate and judge to be more likely lawyers or engineers. Paul, for example. “Paul is 36 years old, married, with 2 children. He is relaxed and comfortable with himself and with others. An excellent member of a team, he is constructive and not opinionated. He enjoys all aspects of his work, and in particular, the satisfaction of finding clean solutions to complex problems.”

  † In a brief memoir, Fischhoff later recalled how his idea had first come to him in Danny’s seminar: “We read Paul Meehl’s (1973) ‘Why I Do Not Attend Case Conferences.’ One of his many insights concerned clinicians’ exaggerated feeling of having known all along how cases were going to turn out.” The conversation about Meehl’s idea led Fischhoff to think about the way Israelis were always pretending to have foreseen essentially unforeseeable political events. Fischhoff thought, “If we’re so prescient, why aren’t we running the world?” Then he set out to see exactly how prescient people who thought themselves prescient actually were.

  8

  GOING VIRAL

  The young woman they called him to examine that summer day was still in a state of shock. As Don Redelmeier understood it, her car had smashed head-on into another car a few hours earlier, and the ambulance had rushed her straight to Sunnybrook Hospital. She’d suffered broken bones everywhere—some of which they had detected and others, it later became clear, they had not. They’d found the multiple fractures in her ankles, feet, hips, and face. (They’d missed the fractures in her ribs.) But it was only after she arrived in the Sunnybrook operating room that they realized there was something wrong with her heart.

  Sunnybrook was Canada’s first and largest regional trauma center, an eruption of red-brown bricks in a quiet Toronto suburb. It had started its life as a hospital for soldiers returning from the Second World War, but as the veterans died, its purpose shifted. In the 1960s the government finished building what would become at its widest a twenty-four-lane highway across Ontario. It would also become the most heavily used road in North America, and one of its busiest stretches passed close by the hospital. The carnage from Highway 401 gave the hospital a new life. Sunnybrook rapidly acquired a reputation for treating victims of automobile accidents; its ability to cope with one sort of medical trauma inevitably attracted other sorts of trauma. “Business begets business,” explained one of Sunnybrook’s administrators. By the turn of the twenty-first century, Sunnybrook was the go-to destination not only for victims of car crashes but for attempted suicides, wounded police officers, old people who had taken a fall, pregnant women with serious complications, construction workers who had been hurt on the job, and the survivors of gruesome snowmobile crashes—who were medevaced in with surprising frequency from the northern Canadian boondocks. Along with the trauma came complexity. A lot of the damaged people who turned up at Sunnybrook had more than one thing wrong with them.

  That’s where Redelmeier entered. By nature a generalist, and by training an internist, his job in the trauma center was, in part, to check the understanding of the specialists for mental errors. “It isn’t explicit but it’s acknowledged that he will serve as a check on other people’s thinking,” said Rob Fowler, an epidemiologist at Sunnybrook. “About how people do their thinking. He keeps people honest. The first time people interact with him they’ll be taken aback: Who the hell is this guy, and why is he giving me feedback? But he’s lovable, at least the second time you meet him.” That Sunnybrook’s doctors had come to appreciate the need for a person to serve as a check on their thinking, Redelmeier thought, was a sign of how much the profession had changed since he entered it in the mid-1980s. When he’d started out, doctors set themselves up as infallible experts; now there was a place in Canada’s leading regional trauma center for a connoisseur of medical error. A hospital was now viewed not just as a place to treat the unwell but also as a machine for coping with uncertainty. “Wherever there is uncertainty there has got to be judgment,” said Redelmeier, “and wherever there is judgment there is an opportunity for human fallibility.”

  Across North America, more people died every year as a result of preventable accidents in hospitals than died in car crashes—which was saying something. Bad things happened to patients, Redelmeier often pointed out, when they were moved without extreme care from one place in a hospital to another. Bad things happened when patients were treated by doctors and nurses who had forgotten to wash their hands. Bad things even happened to people when they pressed hospital elevator buttons. Redelmeier had actually co-written an article about that: “Elevator Buttons as Unrecognized Sources of Bacterial Colonization in Hospitals.” For one of his studies, he had swabbed 120 elevator buttons and 96 toilet seats at three big Toronto hospitals and produced evidence that the elevator buttons were far more likely to infect you with some disease.

  But of all the bad things that happened to people in hospitals, the one that most preoccupied Redelmeier was clinical misjudgment. Doctors and nurses were human, too. They sometimes failed to see that the information patients offered them was unreliable—for instance, patients often said that they were feeling better, and might indeed believe themselves to be improving, when they had experienced no real change in their condition. Doctors tended to pay attention mainly to what they were asked to pay attention to, and to miss some bigger picture. They sometimes failed to notice what they were not directly assigned to notice. “One of the things Don taught me was the value of observing the room when the patient isn’t there,” says Jon Zipursky, chief of residents at Sunnybrook. “Look at their meal tray. Did they eat? Did they pack for a long stay or a short one? Is the room messy or neat? Once we walked into the room and the patient was sleeping. I was about to wake him up and Don stops me and says, There is a lot you can learn about people from just watching.”

  Doctors tended to see only what they were trained to see: That was another big reason bad things might happen to a patient inside a hospital. A patient received treatment for something that was obviously wrong with him, from a specialist oblivious to the possibility that some less obvious thing might also be wrong with him. The less obvious thing, on occasion, could kill a person.

  The conditions of people mangled on the 401 were often so dire that the most obvious things wrong with them demanded the complete attention of the medical staff, and immediate treatment. But the dazed young woman who arrived in the Sunnybrook emergency room directly from her head-on car crash, with her many broken bones, presented her surgeons, as they treated her, with a disturbing problem. The rhythm of her heartbeat had become wildly irregular. It was either skipping beats or adding extra beats; in any case, she had more than one thing seriously wrong with her.

  Immediately after the trauma center staff called Redelmeier to come to the operating room, they diagnosed the heart problem on their own—or thought they had. The young woman remained alert enough to tell them that she had a past history of an overactive thyroid. An overactive thyroid can cause an irregular heartbeat. And so, when Redelmeier arrived, the staff no longer needed him to investigate the source of the irregular heartbeat but to treat it. No one in the operating room would have batted an eye if Redelmeier had simply administered the drugs for hyperthyroidism. Instead, Redelmeier asked everyone to slow down. To wait. Just a moment. Just to check their thinking—and to make sure they were not trying to force the facts into an easy, coherent, but ultimately false story.

  Something bothered him. As he said later, “Hyperthyroidism is a classic cause of an irregular heart rhythm, but hyperthyroidism is an infrequent cause of an irregular heart rhythm.” Hearing that the young woman had a history of excess thyroid hormone production, the emergency room medical staff had leaped, with seeming reason, to the assumption that her overactive thyroid had caused the dangerous beating of her heart. They hadn’t bothered to consider statistically far more likely causes of an irregular heartbeat. In Redelmeier’s experience, doctors did not think statistically. “Eighty percent of doctors don’t think probabilities apply to their patients,” he said. “Just like 95 percent of married couples don’t believe the 50 percent divorce rate applies to them, and 95 percent of drunk drivers don’t think the statistics that show that you are more likely to be killed if you are driving drunk than if you are driving sober applies to them.”

  Redelmeier asked the emergency room staff to search for other, more statistically likely causes of the woman’s irregular heartbeat. That’s when they found her collapsed lung. Like her fractured ribs, her collapsed lung had failed to turn up on the X-ray. Unlike the fractured ribs, it could kill her. Redelmeier ignored the thyroid and treated the collapsed lung. The young woman’s heartbeat returned to normal. The next day, her formal thyroid tests came back: Her thyroid hormone production was perfectly normal. Her thyroid never had been the issue. “It was a classic case of the representativeness heuristic,” said Redelmeier. “You need to be so careful when there is one simple diagnosis that instantly pops into your mind that beautifully explains everything all at once. That’s when you need to stop and check your thinking.”

  It wasn’t that what first came to mind was always wrong; it was that its existence in your mind led you to feel more certain than you should be that it was correct. “Beware of the delirious guy in the emergency unit with the long history of alcoholism,” said Redelmeier, “because you will say, ‘He’s just drunk,’ and you’ll miss the subdural hematoma.” The woman’s surgeons had leapt from her medical history to a diagnosis without considering the base rates. As Kahneman and Tversky long ago had pointed out, a person who is making a prediction—or a diagnosis—is allowed to ignore base rates only if he is completely certain he is correct. Inside a hospital, or really anyplace else, Redelmeier was never completely certain about anything, and he didn’t see why anybody else should be, either.
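  (The arithmetic behind that warning can be made concrete with Bayes’ rule. The numbers below are purely hypothetical, chosen only to show how a rare cause can “beautifully explain” a symptom and still be far less probable than a common one once base rates are counted; they are not Redelmeier’s figures or real clinical rates.)

    # Hypothetical illustration of the base-rate point, via Bayes' rule.
    # Suppose (made-up numbers) that among trauma patients like this one,
    # 1 in 1,000 has a thyroid problem that almost always (90%) disturbs the
    # heartbeat, while 50 in 1,000 have a collapsed lung that disturbs the
    # heartbeat only 30% of the time.

    p_thyroid, p_lung = 0.001, 0.050               # base rates (hypothetical)
    p_arrhythmia_given_thyroid = 0.90              # likelihoods (hypothetical)
    p_arrhythmia_given_lung = 0.30

    # Unnormalized posterior weight of each cause, given an irregular heartbeat:
    w_thyroid = p_thyroid * p_arrhythmia_given_thyroid   # 0.0009
    w_lung = p_lung * p_arrhythmia_given_lung            # 0.0150

    print(w_lung / w_thyroid)   # ~17: the common cause is far more probable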

  * * *

  Redelmeier had grown up in Toronto, in the same house in which his stockbroker father had been raised. The youngest of three boys, he often felt a little stupid; his older brothers always seemed to know more than he did and were keen to let him know it. Redelmeier also had a speech impediment—a maddening stammer he would never cease to work hard, and painfully, to compensate for. (When he called for restaurant reservations, he just told them his name was “Don Red.”) His stammer slowed him down when he spoke; his weakness as a speller slowed him down when he wrote. His body was not terribly well coordinated, and by the fifth grade he required glasses to correct his eyesight. His two great strengths were his mind and his temperament. He was always extremely good at math; he loved math. He could explain it, too, and other kids came to him when they couldn’t understand what the teacher had said. That is where his temperament entered. He was almost peculiarly considerate of others. From the time he was a small child, grown-ups had noticed that about him: His first instinct upon meeting someone else was to take care of the person.

  Still, even from math class, where he often wound up helping all the other students, what he took away was a sense of his own fallibility. In math there was a right answer and a wrong answer, and you couldn’t fudge it. “And the errors are sometimes predictable,” he said. “You see them coming a mile away and you still make them.” His experience of life as an error-filled sequence of events, he later thought, might be what had made him so receptive to an obscure article, in the journal Science, that his favorite high school teacher, Mr. Fleming, had given him to read in late 1977. He took the article home with him and read it that night at his desk.

 
