The Undoing Project


by Michael Lewis


  The first to take their work personally were the psychologists whose work it had trumped. Amos’s former teacher Ward Edwards had written the original journal article in 1954 inviting psychologists to investigate the assumptions of economics. Still, he’d never imagined this—two Israelis walking into the room and making a mockery of the entire conversation. In late 1970, after reading early drafts of Amos and Danny’s papers on human judgment, Edwards wrote to complain. In what would be the first of many agitated letters, he adopted the tone of a wise and indulgent master speaking to his naive pupils. How could Amos and Danny possibly believe that there was anything to learn from putting silly questions to undergraduates? “I think your data collection methods are such that I don’t take seriously a single ‘experimental’ finding you present,” wrote Edwards. These students they had turned into their lab rats were “careless and inattentive. And if they are confused and inattentive, they are much less likely to behave more like competent intuitive statisticians.” For every supposed limitation of the human mind Danny and Amos had uncovered, Edwards had an explanation. The gambler’s fallacy, for instance. If people thought that a coin, after landing on heads five times in a row, was more likely, on the sixth toss, to land on tails, it wasn’t because they misunderstood randomness. It was because “people get bored doing the same thing all the time.”

  Amos took the trouble to answer, almost politely, that first letter from his former professor. “It was certainly a pleasure to read your detailed comments on our papers and to see that, right or wrong, you have not lost any of your old fighting spirit,” he began, before describing his former professor as “not cogent.” “In particular,” Amos continued, “the objections you raised against our experimental method are simply unsupported. In essence, you engage in the practice of criticizing a procedural departure without showing how the departure might account for the results obtained. You do not present either contradictory data or a plausible alternative interpretation of our findings. Instead, you express a strong bias against our method of data collection and in favor of yours. This position is certainly understandable, yet it is hardly convincing.”

  Edwards was not pleased, but he kept his anger to himself for a few years. “No one wanted to get in a fight with Amos,” said the psychologist Irv Biederman. “Not in public! I only once ever saw anyone ever do it. There was this philosopher. At a conference. He gets up to give his talk. He’s going to challenge heuristics. Amos was there. When he finished talking Amos got up to rebut. It was like an ISIS beheading. But with humor.” Edwards must have sensed, in any open conflict with Amos, the possibility of being on the painful end of an ISIS beheading, with humor. And yet Amos had championed the idea that man was a good intuitive statistician. He needed to say something.

  In the late 1970s he finally found a principle on which to take a stand: The masses were not equipped to grasp Amos and Danny’s message. The subtleties were beyond them. People needed to be protected from misleading themselves into thinking that their minds were less trustworthy than they actually were. “I do not know whether you realize just how far that message has spread, or how devastating its effects have been,” Edwards wrote to Amos in September of 1979. “I attended the organizational meeting of the Society for Medical Decision Making one and a half weeks ago. I would estimate that every third paper mentioned your work in passing, mostly as justification for avoiding human intuition, judgment, decision making, and other intellectual processes.” Even sophisticated doctors were getting from Danny and Amos only the crude, simplified message that their minds could never be trusted. What would become of medicine? Of intellectual authority? Of experts?

  Edwards sent Amos a working draft of his assault on Danny and Amos’s work and hoped that Amos would leave him with his dignity. Amos didn’t. “The tone is snide, the evaluation of evidence is unfair and there are too many technical difficulties to begin to discuss,” Amos wrote, in a curt note to Edwards. “We are in sympathy with your attempt to redress what you regard as a distorted view of man. But we regret that you chose to do so by presenting a distorted view of our work.” In his reply, Edwards did a fair impression of a man who has just realized that his fly is unzipped, as he backpedals off a cliff. He offered up his personal problems—they ranged from serious jet lag to “a decade’s worth of personal frustrations”—as excuses for his failed paper, and then went on to more or less concede that he wished he’d never written it. “What especially embarrasses me is that after working so long as I did on trying to put this thing together I should have been as blind to its many flaws as I was,” he wrote to both Amos and Danny, before saying how he intended to entirely rewrite his paper and hoped very much to avoid any public controversy with them.

  Not everyone knew enough to be afraid of Amos. An Oxford philosopher named L. Jonathan Cohen raised a small philosophy-sized ruckus with a series of attacks in books and journals. He found alien the idea that you might learn something about the human mind by putting questions to people. He argued that as man had created the concept of rationality he must, by definition, be rational. “Rational” was whatever most people did. Or, as Danny put it in a letter that he reluctantly sent in response to one of Cohen’s articles, “Any error that attracts a sufficient number of votes is not an error at all.” Cohen labored to demonstrate that the mistakes discovered by Amos and Danny either were not mistakes or were the result of “mathematical or scientific ignorance” in people, easily remedied by a bit of exposure to college professors. “We both make a living by teaching probability and statistics,” Stanford’s Persi Diaconis and David Freedman, of the University of California at Berkeley, wrote to the journal Behavioral and Brain Sciences, which had published one of Cohen’s attacks. “Over and over again we see students and colleagues (and ourselves) making certain kinds of mistakes. Even the same mistake may be repeated by the same person many times. Cohen is wrong in dismissing this as the result of ‘mathematical or scientific ignorance.’” But by then it was clear that no matter how often people trained in statistics affirmed the truth of Danny and Amos’s work, people who weren’t would insist that they knew better.

  * * *

  Upon their arrival in North America, Amos and Danny had published a flurry of papers together. Mostly it was stuff they’d had in the works when they’d left Israel. But in the early 1980s what they wrote together was not done in the same way as before. Amos wrote a piece on loss aversion under both their names, to which Danny added a few stray paragraphs. Danny wrote up on his own what Amos had called “The Undoing Project,” titled it “The Simulation Heuristic,” and published it with both their names on top, in a book that collected their articles, along with others by students and colleagues. (And then set out to explore the rules of the imagination not with Amos but with his younger colleague at the University of British Columbia, Dale Miller.) Amos wrote an article, addressed directly to economists, to repair technical flaws in prospect theory. “Advances in Prospect Theory,” it was called, and though Amos did much of the work on it with his graduate student Rich Gonzalez, it ran as a journal article by Danny and Amos. “Amos said that it had always been Kahneman and Tversky and that this had to be Kahneman and Tversky, and that it would be really strange to add a third person to it,” said Gonzalez.

  Thus they maintained the illusion that they were still working together, much as before, even as the forces pulling them apart gathered strength. The growing crowd of common enemies failed to unite them. Danny was increasingly uneasy with the attitude Amos took toward their opponents. Amos was built to fight. Danny was built to survive. He shied from conflict. Now that their work was under attack, Danny adopted a new policy: to never review a paper that made him angry. It served as an excuse to ignore any act of hostility. Amos accused Danny of “identifying with the enemy,” and he wasn’t far off. Danny almost found it easier to imagine himself in his opponent’s shoes than in his own. In some strange way Danny contained within himself his own opponent. He didn’t need another.

  Amos, to be Amos, needed opposition. Without it he had nothing to triumph over. And Amos, like his homeland, lived in a state of readiness for battle. “Amos didn’t have Danny’s feeling that we should all think together and work together,” said Walter Mischel, who had been the chair of Stanford’s Psychology Department when it hired Amos. “He thought, ‘Fuck You.’”

  That sentiment must have been passing through Amos’s mind in the early 1980s even more often than it usually did. The critics publishing attacks on his work with Danny were the least of it. At conferences and in conversations, Amos heard over and over from economists and decision theorists that he and Danny had exaggerated human fallibility. Or that the kinks in the mind that they had observed were artificial. Or only present in the minds of college undergraduates. Or . . . something. A lot of people with whom Amos interacted had big investments in the idea that people were rational. Amos was perplexed by their inability to admit defeat in an argument he had plainly won. “Amos wanted to crush the opposition,” said Danny. “It just got under his skin more than it did mine. He wanted to find something to shut people up. Which of course you can never do.” Toward the end of 1980, or maybe it was early 1981, Amos came to Danny with a plan to write an article that would end the discussion. Their opponents might never admit defeat—intellectuals seldom did—but they might at least decide to change the subject. “Winning by embarrassment,” Amos called it.

  Amos wanted to demonstrate the raw power of the mind’s rules of thumb to mislead. He and Danny had stumbled upon some bizarre phenomena back in Israel and never fully explored their implications. Now they did. As ever, they crafted careful vignettes, to reveal the inner workings of the minds of the people they asked to judge them. Amos’s favorite was about Linda.

  Linda is 31 years old, single, outspoken and very bright. She majored in philosophy. As a student, she was deeply concerned with issues of discrimination and social justice, and also participated in anti-nuclear demonstrations.

  Linda was designed to be the stereotype of a feminist. Danny and Amos asked: To what degree does Linda resemble the typical member of each of the following classes?

  1) Linda is a teacher in elementary school.

  2) Linda works in a bookstore and takes yoga classes.

  3) Linda is active in the feminist movement.

  4) Linda is a psychiatric social worker.

  5) Linda is a member of the League of Women Voters.

  6) Linda is a bank teller.

  7) Linda is an insurance salesperson.

  8) Linda is a bank teller and is active in the feminist movement.

  Danny passed out the Linda vignette to students at the University of British Columbia. In this first experiment, two different groups of students were given four of the eight descriptions and asked to judge the odds that they were true. One of the groups had “Linda is a bank teller” on its list; the other got “Linda is a bank teller and is active in the feminist movement.” Those were the only two descriptions that mattered, though of course the students didn’t know that. The group given “Linda is a bank teller and is active in the feminist movement” judged it more likely than the group assigned “Linda is a bank teller.”

  That result was all that Danny and Amos needed to make their big point: The rules of thumb people used to evaluate probability led to misjudgments. “Linda is a bank teller and is active in the feminist movement” could never be more probable than “Linda is a bank teller.” “Linda is a bank teller and is active in the feminist movement” was just a special case of “Linda is a bank teller.” “Linda is a bank teller” included “Linda is a bank teller and is active in the feminist movement” along with “Linda is a bank teller and likes to walk naked through Serbian forests” and all other bank-telling Lindas. One description was entirely contained by the other.
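  The containment argument can be checked mechanically. As a minimal sketch in Python (the trait probabilities below are invented for illustration, not drawn from the study): however the traits are assigned across a simulated population, the count of bank tellers who are also feminists can never exceed the count of bank tellers, which is the conjunction rule in miniature.

```python
import random

random.seed(0)

# Hypothetical population of "Lindas"; each person gets two independent traits.
# The probabilities are made-up demo values, not data from Kahneman and Tversky.
population = [
    (random.random() < 0.1,   # is a bank teller
     random.random() < 0.6)   # is active in the feminist movement
    for _ in range(100_000)
]

tellers = sum(1 for teller, _ in population if teller)
feminist_tellers = sum(1 for teller, feminist in population if teller and feminist)

print(f"bank tellers:          {tellers}")
print(f"feminist bank tellers: {feminist_tellers}")

# The conjunction picks out a subset of the tellers, so it can never be larger.
assert feminist_tellers <= tellers
```

  The assertion holds for any choice of probabilities, which is exactly why the students’ ranking could never be right.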

  People were blind to logic when it was embedded in a story. Describe a very sick old man and ask people: Which is more probable, that he will die within a week or die within a year? More often than not, they will say, “He’ll die within a week.” Their mind latches onto a story of imminent death and the story masks the logic of the situation. Amos created a lovely example. He asked people: Which is more likely to happen in the next year, that a thousand Americans will die in a flood, or that an earthquake in California will trigger a massive flood that will drown a thousand Americans? People went with the earthquake.

  The force that led human judgment astray in this case was what Danny and Amos had called “representativeness,” or the similarity between whatever people were judging and some model they had in their mind of that thing. The minds of the students in the first Linda experiment, latching onto the description of Linda and matching its details to their mental model of “feminist,” judged the special case to be more likely than the general one.

  Amos wasn’t satisfied with stopping there. He wanted to hand the entire list of Lindas to groups of people and have them rank the odds of each line item. He wanted to see if a person who rated “Linda is a bank teller and is active in the feminist movement” also thought it was more probable than “Linda is a bank teller.” He wanted to show people making that glaring mistake. “Amos really loved to do that,” said Danny. “To win the argument, you want people to actually make mistakes.”

  Danny was of two minds about this new project, and about Amos. From the moment they had left Israel, they’d been like a pair of swimmers caught in different currents, losing the energy to swim against them. Amos felt the pull of logic, Danny the tug of psychology. Danny wasn’t nearly as interested as Amos in demonstrations of human irrationality. His interest in decision theory ended with the psychological insight he brought to it. “There is an underlying debate,” said Danny later. “Are we doing psychology or decision theory?” Danny wanted to return to psychology. Plus Danny didn’t believe that people would actually make this particular mistake. Seeing the descriptions side by side, they’d realize that it was illogical to say that anyone was more likely to be a bank teller active in the feminist movement than simply a bank teller.

  With something of a heavy heart, Danny put what would come to be known as the Linda problem to a class of a dozen students at the University of British Columbia. “Twelve out of twelve fell for it,” he said. “I remember I gasped. Then I called Amos from my secretary’s phone.” They ran many further experiments, with different vignettes, on hundreds of subjects. “We just wanted to look at the boundaries of the phenomenon,” said Danny. To explore those boundaries, they finally shoved their subjects’ noses right up against logic. They gave subjects the same description of Linda and asked, simply: “Which of the two alternatives is more probable?”

  Linda is a bank teller.

  Linda is a bank teller and is active in the feminist movement.

  Eighty-five percent still insisted that Linda was more likely to be a bank teller active in the feminist movement than she was to be a bank teller. The Linda problem resembled a Venn diagram of two circles, but with one of the circles wholly contained by the other. But people didn’t see the circles. Danny was actually stunned. “At every step we thought, now that’s not going to work,” he said. And whatever was going on inside people’s minds was terrifyingly stubborn. Danny gathered an auditorium full of UBC students and explained their mistake to them. “Do you realize you have violated a fundamental rule of logic?” he asked. “So what!” a young woman shouted from the back of the room. “You just asked for my opinion!”

  They put the Linda problem in different ways, to make sure that the students who served as their lab rats weren’t misreading its first line as saying “Linda is a bank teller NOT active in the feminist movement.” They put it to graduate students with training in logic and statistics. They put it to doctors, in a complicated medical story, in which lay embedded the opportunity to make a fatal error of logic. In overwhelming numbers doctors made the same mistake as undergraduates. “Most participants appeared surprised and dismayed to have made an elementary error of reasoning,” wrote Amos and Danny. “Because the conjunction fallacy is easy to expose, people who committed it are left with the feeling that they should have known better.”

  The paper Amos and Danny set out to write about what they were now calling “the conjunction fallacy” must have felt to Amos like an argument ender—that is, if the argument was about whether the human mind reasoned probabilistically, instead of the ways that Danny and Amos had suggested. They walked the reader through how and why people violated “perhaps the simplest and the most basic qualitative law of probability.” They explained that people chose the more detailed description, even though it was less probable, because it was more “representative.” They pointed out some places in the real world where this kink in the mind might have serious consequences. Any prediction, for instance, could be made to seem more believable, even as it became less likely, if it was filled with internally consistent details. And any lawyer could at once make a case seem more persuasive, even as he made the truth of it less likely, by adding “representative” details to his description of people and events.

  And they showed all over again the power of the mental rules of thumb—these curious forces that they had curiously named “heuristics.” To the Linda problem Danny and Amos added another, from work they had done in the early 1970s in Jerusalem.

  In four pages of a novel (about 2,000 words), how many words would you expect to find that have the form _ _ _ _ ing (seven-letter words that end with “ing”)? Indicate your best estimate by circling one of the values below:

 
