The Undoing Project


by Michael Lewis


  2. The sample consists of 6 persons whose average height is 5 ft. 8 in.?

  The odds most commonly assigned by their subjects were, in the first case, 8:1 in favor and, in the second case, 2.5:1 in favor. The correct odds were 16:1 in favor in the first case, and 29:1 in favor in the second case. The sample of six people gave you a lot more information than the sample of one person. And yet people believed, incorrectly, that if they picked a single person who was five foot ten, they were more likely to have picked from the population of men than had they picked six people with an average height of five foot eight. People didn’t just miscalculate the true odds of a situation: They treated the less likely proposition as if it were the more likely one. And they did this, Amos and Danny surmised, because they saw “5 ft. 10 in.” and thought: That’s the typical guy! The stereotype of the man blinded them to the likelihood that they were in the presence of a tall woman.
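  The first half of the question, with the height distributions being sampled, falls on the previous page, so the exact 16:1 and 29:1 figures cannot be rederived from this excerpt alone. The logic behind them can still be sketched. Assuming, purely for illustration, that men’s heights are normal around five foot ten with the 2.5-inch standard deviation mentioned in the footnote below, that women’s heights are normal around an assumed five foot four with the same spread, and that either population was equally likely to be sampled, a few lines of Python show the direction of the effect:

```python
import math

# Assumed parameters, for illustration only: men ~ N(70, 2.5) inches
# (five foot ten, per the standard-deviation footnote), women ~ N(64, 2.5)
# (an assumption), and equal prior odds of sampling either population.
MU_MEN, MU_WOMEN, SIGMA = 70.0, 64.0, 2.5

def odds_in_favor_of_men(sample_mean, n):
    """Likelihood ratio P(sample mean | men) / P(sample mean | women).
    With equal priors this is also the posterior odds that the sample
    was drawn from the population of men."""
    var_of_mean = SIGMA ** 2 / n  # variance of a mean of n heights
    log_ratio = ((sample_mean - MU_WOMEN) ** 2
                 - (sample_mean - MU_MEN) ** 2) / (2 * var_of_mean)
    return math.exp(log_ratio)

print(odds_in_favor_of_men(70.0, 1))  # one person, 5 ft. 10 in.
print(odds_in_favor_of_men(68.0, 6))  # six people averaging 5 ft. 8 in.
```

  Under these assumed parameters the single five-foot-ten person gives odds of roughly 18:1 and the six-person sample roughly 300:1; the precise numbers depend on the distributions stated in the original question, but the lesson is the one the subjects missed: because the variance of a sample mean shrinks in proportion to the sample size, six averaged heights carry far more evidential weight than a single observed one.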

  A certain town is served by two hospitals. In the larger hospital about 45 babies are born each day, and in the smaller hospital about 15 babies are born each day. As you know, about 50 percent of all babies are boys. The exact percentage of baby boys, however, varies from day to day. Sometimes it may be higher than 50 percent, sometimes lower.

  For a period of 1 year, each hospital recorded the days on which more than 60 percent of the babies born were boys. Which hospital do you think recorded more such days? Check one:

  — The larger hospital

  — The smaller hospital

  — About the same (that is, within 5 percent of each other)

  People got that one wrong, too. Their typical answer was “same.” The correct answer is “the smaller hospital.” The smaller the sample size, the more likely it is to be unrepresentative of the wider population. “We surely do not mean to imply that man is incapable of appreciating the impact of sample size on sampling variance,” wrote Danny and Amos. “People can be taught the correct rule, perhaps even with little difficulty. The point remains that people do not follow the correct rule, when left to their own devices.”
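  The arithmetic behind the correct answer is easy to check. Treating each birth as an independent 50/50 event (an idealization, since the passage notes the true rate only hovers around 50 percent), the exact binomial probabilities take a few lines of Python; this is a sketch for checking the claim, not anything Danny and Amos ran:

```python
from math import comb

def p_day_with_over_60pct_boys(n_births):
    """Chance that strictly more than 60 percent of n_births babies
    are boys, with each birth an independent 50/50 event."""
    # Integer test 5*k > 3*n is exactly "k/n > 60%", avoiding float error.
    favorable = sum(comb(n_births, k)
                    for k in range(n_births + 1) if 5 * k > 3 * n_births)
    return favorable / 2 ** n_births

print(p_day_with_over_60pct_boys(15))  # small hospital: about 0.15
print(p_day_with_over_60pct_boys(45))  # large hospital: about 0.07
```

  Under that assumption the small hospital sees such a day about 15 percent of the time and the large one about 7 percent: over a year, roughly 55 days against 25.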

  To which a bewildered American college student might reply: All these strange questions! What do they have to do with my life? A great deal, Danny and Amos clearly believed. “In their daily lives,” they wrote, “people ask themselves and others questions such as: What are the chances that this 12-year-old boy will grow up to be a scientist? What is the probability that this candidate will be elected to office? What is the likelihood that this company will go out of business?” They confessed that they had confined their questions to situations in which the odds could be objectively calculated. But they felt fairly certain that people made the same mistakes when the odds were harder, or even impossible, to know. When, say, they guessed what a little boy would do for a living when he grew up, they thought in stereotypes. If he matched their mental picture of a scientist, they guessed he’d be a scientist—and neglected the prior odds of any kid becoming a scientist.

  Of course, you couldn’t prove that people misjudged the odds of a situation when the odds were extremely difficult or even impossible to know. How could you prove that people came to the wrong answer when a right answer didn’t exist? But if people’s judgments were distorted by representativeness when the odds were knowable, how likely was it that their judgments were any better when the odds were a total mystery?

  * * *

  Danny and Amos had their first big general idea—the mind had these mechanisms for making judgments and decisions that were usually useful but also capable of generating serious error. The next paper they produced inside the Oregon Research Institute described a second mechanism, an idea that had come to them just a couple of weeks after the first. “It wasn’t all representativeness,” said Danny. “There was something else going on. It wasn’t just similarity.” The new paper’s title was once again more mystifying than helpful: “Availability: A Heuristic for Judging Frequency and Probability.” Once again, the authors came with news of the results of questions that they had posed to students, mostly at the University of Oregon, where they now had an endless supply of lab rats. They’d gathered a lot more kids in classrooms and asked them, absent a dictionary or any text, to answer these bizarre questions:

  The frequency of appearance of letters in the English language was studied. A typical text was selected, and the relative frequency with which various letters of the alphabet appeared in the first and third positions of the words was recorded. Words of less than three letters were excluded from the count.

  You will be given several letters of the alphabet, and you will be asked to judge whether these letters appear more often in the first or in the third position, and to estimate the ratio of the frequency with which they appear in these positions. . . .

  Consider the letter K

  Is K more likely to appear in

  ____the first position?

  ____the third position?

  (check one)

  My estimate for the ratio of these two values is:________:1

  If you thought that K was, say, twice as likely to appear as the first letter of an English word as the third letter, you checked the first box and wrote your estimate as 2:1. This was what the typical person did, as it happens. Danny and Amos replicated the demonstration with other letters—R, L, N, and V. Those letters all appeared more frequently as the third letter in an English word than as the first letter—by a ratio of two to one. Once again, people’s judgment was, systematically, very wrong. And it was wrong, Danny and Amos now proposed, because it was distorted by memory. It was simply easier to recall words that start with K than to recall words with K as their third letter.
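  Anyone with a word list at hand can rerun a version of the count. The sketch below tallies first- against third-position appearances for the five letters; the dictionary path is an assumption, and a dictionary counts each word once where the original study counted letters in running text, so the ratios it produces are only suggestive:

```python
from collections import Counter

# The dictionary path is an assumption; any sizable word list will do.
WORD_LIST = "/usr/share/dict/words"

first, third = Counter(), Counter()
with open(WORD_LIST) as f:
    for word in f:
        word = word.strip().upper()
        if len(word) >= 3 and word.isalpha():  # the study dropped short words
            first[word[0]] += 1
            third[word[2]] += 1

for letter in "KRLNV":
    print(f"{letter}: first={first[letter]} third={third[letter]}")
```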

  The more easily people can call some scenario to mind—the more available it is to them—the more probable they find it to be. Any fact or incident that was especially vivid, or recent, or common—or anything that happened to preoccupy a person—was likely to be recalled with special ease, and so be disproportionately weighted in any judgment. Danny and Amos had noticed how oddly, and often unreliably, their own minds recalculated the odds, in light of some recent or memorable experience. For instance, after they drove past a gruesome car crash on the highway, they slowed down: Their sense of the odds of being in a crash had changed. After seeing a movie that dramatizes nuclear war, they worried more about nuclear war; indeed, they felt that it was more likely to happen. The sheer volatility of people’s judgment of the odds—their sense of the odds could be changed by two hours in a movie theater—told you something about the reliability of the mechanism that judged those odds.

  They went on to describe nine other equally odd mini-experiments that got at various tricks that memory might play on judgment. Danny thought of them as very much like the optical illusions the Gestalt psychologists he had loved in his youth planted in their texts. You saw them and were fooled by them and wanted to know why. He and Amos were dramatizing tricks of the mind rather than tricks of the eye, but the effect was similar, and the material available to them appeared to be even more abundant. They read lists of people’s names to Oregon students, for instance. Thirty-nine names, read at a rate of two seconds per name. The names were all easily identifiable as male or female. A few were the names of famous people—Elizabeth Taylor, Richard Nixon. A few were names of slightly less famous people—Lana Turner, William Fulbright. One list consisted of nineteen male names and twenty female names, the other of twenty male names and nineteen female names. The list that had more female names on it had more names of famous men, and the list that had more male names on it contained the names of more famous women. The unsuspecting Oregon students, having listened to a list, were then asked to judge if it contained the names of more men or more women.

  They almost always got it backward: If the list had more male names on it, but the women’s names were famous, they thought the list contained more female names, and vice versa. “Each of the problems had an objectively correct answer,” Amos and Danny wrote, after they were done with their strange mini-experiments. “This is not the case in many real-life situations where probabilities are judged. Each occurrence of an economic recession, a successful medical operation, or a divorce, is essentially unique, and its probability cannot be evaluated by a simple tally of instances. Nevertheless, the availability heuristic may be applied to evaluate the likelihood of such events. In judging the likelihood that a particular couple will be divorced, for example, one may scan one’s memory for similar couples which this question brings to mind. Divorces will appear probable if divorces are prevalent among the instances that are retrieved in this manner.”

  The point, once again, wasn’t that people were stupid. This particular rule they used to judge probabilities (the easier it is for me to retrieve from my memory, the more likely it is) often worked well. But if you presented people with situations in which the evidence they needed to judge them accurately was hard for them to retrieve from their memories, and misleading evidence came easily to mind, they made mistakes. “Consequently,” Amos and Danny wrote, “the use of the availability heuristic leads to systematic biases.” Human judgment was distorted by . . . the memorable.

  Having identified what they took to be two of the mind’s mechanisms for coping with uncertainty, they naturally asked: Are there others? Apparently they were unsure. Before they left Eugene, they jotted down some notes about other possibilities. “The conditionality heuristic,” they called one of these. In judging the degree of uncertainty in any situation, they noted, people made “unstated assumptions.” “In assessing the profit of a given company, for example, people tend to assume normal operating conditions and make their estimates contingent upon that assumption,” they wrote in their notes. “They do not incorporate into their estimates the possibility that these conditions may be drastically changed because of a war, sabotage, depressions, or a major competitor being forced out of business.” Here, clearly, was another source of error: not just that people don’t know what they don’t know, but that they don’t bother to factor their ignorance into their judgments.

  Another possible heuristic they called “anchoring and adjustment.” They first dramatized its effects by giving a bunch of high school students five seconds to guess the answer to a math question. The first group was asked to estimate this product:

  8 × 7 × 6 × 5 × 4 × 3 × 2 × 1

  The second group to estimate this product:

  1 × 2 × 3 × 4 × 5 × 6 × 7 × 8

  Five seconds wasn’t long enough to actually do the math: The kids had to guess. The two groups’ answers should have been at least roughly the same, but they weren’t, even roughly. The first group’s median answer was 2,250. The second group’s median answer was 512. (The right answer is 40,320.) The reason the kids in the first group guessed a higher number for the first sequence was that they had used 8 as a starting point, while the kids in the second group had used 1.

  It was almost too easy to dramatize this weird trick of the mind. People could be anchored with information that was totally irrelevant to the problem they were being asked to solve. For instance, Danny and Amos asked their subjects to spin a wheel of fortune with slots on it that were numbered 0 through 100. Then they asked the subjects to estimate the percentage of African countries in the United Nations. The people who spun a higher number on the wheel tended to guess that a higher percentage of the United Nations consisted of African countries than did those for whom the needle landed on a lower number. What was going on here? Was anchoring a heuristic, the way that representativeness and availability were heuristics? Was it a shortcut that people used, in effect, to answer to their own satisfaction a question to which they could not divine the true answer? Amos thought it was; Danny thought it wasn’t. They never came to sufficient agreement to write a paper on the subject. Instead they dropped it into summaries of their work. “We had to stick anchoring in, because the result was so spectacular,” said Danny. “But as a result we wound up with a vague notion of what a heuristic is.”

  Danny would later say that it was hard to explain what he and Amos were doing in the beginning: “How can you explain a conceptual fog?” he said. “We didn’t have the intellectual tools to understand what we were finding.” Were they investigating the biases or the heuristics? The errors, or the mechanisms that produced the errors? The errors enabled you to offer at least a partial description of the mechanism: The bias was the footprint of the heuristic. The biases, too, would soon have their own names, like the “recency bias” and the “vividness bias.” But in hunting for errors that they themselves had made, and then tracking them back to their source in the human mind, they had stumbled upon errors without a visible trail. What were they to make of systematic errors for which there was no apparent mechanism? “We really couldn’t think of others,” said Danny. “There seemed to be very few mechanisms.”

  Just as they never tried to explain how the mind forms the models that underpinned the representativeness heuristic, they left mostly to one side the question of why human memory worked in such a way that the availability heuristic had such power to mislead us. They focused entirely on the various tricks it could play. The more complicated and lifelike the situation a person was asked to judge, they suggested, the more insidious the role of availability. What people did in many complicated real-life problems—when trying to decide if Egypt might invade Israel, say, or their husband might leave them for another woman—was to construct scenarios. The stories we make up, rooted in our memories, effectively replace probability judgments. “The production of a compelling scenario is likely to constrain future thinking,” wrote Danny and Amos. “There is much evidence showing that, once an uncertain situation has been perceived or interpreted in a particular fashion, it is quite difficult to view it in any other way.”

  But these stories people told themselves were biased by the availability of the material used to construct them. “Images of the future are shaped by experience of the past,” they wrote, turning on its head Santayana’s famous lines about the importance of history: Those who cannot remember the past are condemned to repeat it. What people remember about the past, they suggested, is likely to warp their judgment of the future. “We often decide that an outcome is extremely unlikely or impossible, because we are unable to imagine any chain of events that could cause it to occur. The defect, often, is in our imagination.”¶

  The stories people told themselves, when the odds were either unknown or unknowable, were naturally too simple. “This tendency to consider only relatively simple scenarios,” they concluded, “may have particularly salient effects in situations of conflict. There, one’s own moods and plans are more available to one than those of the opponent. It is not easy to adopt the opponent’s view of the chessboard or of the battlefield.” The imagination appeared to be governed by rules. The rules confined people’s thinking. It’s far easier for a Jew living in Paris in 1939 to construct a story about how the German army will behave much as it had in 1919, for instance, than to invent a story in which it behaves as it did in 1941, no matter how persuasive the evidence might be that, this time, things are different.

  * * *

  * I owe some of this to a spectacular article about the construction and destruction of the World Trade Center towers by James Glanz and Eric Lipton, published in the New York Times Magazine a few days before the first anniversary of the attacks. William Poundstone’s book Priceless offers a more detailed account of the sway room.

  † In 1986, thirty-two years after the publication of his book, Meehl wrote an essay called “Causes and Effects of My Disturbing Little Book,” in which he discussed the by then overwhelming evidence that expert judgment had its issues. “When you are pushing 90 investigations,” wrote Meehl, “predicting everything from the outcome of football games to the diagnosis of liver disease[,] and when you can hardly come up with a half dozen studies showing even a weak tendency in favor of the clinician, it is time to draw a practical conclusion. . . . Not to argue ad hominem but to explain after the fact, I think this is just one more of the numerous examples of the ubiquity and recalcitrance of irrationality in the conduct of human affairs.”

  ‡ Having realized at the start of their collaboration that they would never be able to work out who had contributed more to any given paper, they alternated lead authorship. Because Amos had won the coin flip to be lead author on “Belief in the Law of Small Numbers,” Danny was lead author on this new paper.

  § Standard deviation is a measurement of the dispersal of any population. The bigger the standard deviation, the more varied the population. A standard deviation of 2.5 inches in a world in which the average man is five foot ten means that roughly 68 percent of men are between 5 feet 7½ inches and 6 feet ½ inch tall. If the standard deviation were zero, all men would be exactly five foot ten.

  ¶ Those lines come not from their published paper but from a summary of their work that they produced a year after the paper’s publication.

  7

  THE RULES OF PREDICTION

  Amos liked to say that if you are asked to do anything—go to a party, give a speech, lift a finger—you should never answer right away, even if you are sure that you want to do it. Wait a day, Amos said, and you’ll be amazed how many of those invitations you would have accepted yesterday you’ll refuse after you have had a day to think it over. A corollary to his rule for dealing with demands upon his time was his approach to situations from which he wished to extract himself. A human being who finds himself stuck at some boring meeting or cocktail party often finds it difficult to invent an excuse to flee. Amos’s rule, whenever he wanted to leave any gathering, was to just get up and leave. Just start walking and you’ll be surprised how creative you will become and how fast you’ll find the words for your excuse, he said. His attitude to the clutter of daily life was of a piece with his strategy for dealing with social demands. Unless you are kicking yourself once a month for throwing something away, you are not throwing enough away, he said. Everything that didn’t seem to Amos obviously important he chucked, and thus what he saved acquired the interest of objects that have survived a pitiless culling. One unlikely survivor is a single scrap of paper with a few badly typed words on it, drawn from conversations he had with Danny in the spring of 1972 as they neared the end of their time in Eugene. For some reason Amos saved it:

 
