The Invisible Gorilla: And Other Ways Our Intuitions Deceive Us


by Christopher Chabris and Daniel Simons


  One night, he and Amy [his sister] watched “The Karen Carpenter Story,” a maudlin TV movie about the singer, who died of a heart attack brought on by anorexia. When it was over, Wallace’s sister, who was working on her own M.F.A., at the University of Virginia, told David that she had to drive back to Virginia. David asked her not to go. After she went, he tried to commit suicide with pills.

  What do you make of this passage about Wallace’s earlier suicide attempt? To us, the most natural interpretation is that the movie upset Wallace, that he wanted his sister to stay with him but she refused, and that in despair over losing her companionship, he overdosed. But if you read the passage again, you will see that none of these facts are stated explicitly. Strictly speaking, even the idea that he wanted her to stay is only implied by the sentence, “David asked her not to go.” Max is almost clinically sparing in his just-the-facts approach. But the interpretation we attach to these facts seems obvious; we come to it automatically and without conscious thought, indeed without even realizing that we are adding in information that is not present in the source. This is the illusion of cause at work. When a series of facts is narrated, we fill in the gaps to create a causal sequence: Event 1 caused Event 2, which caused Event 3, and so on. The movie made Wallace sad, which made him ask Amy to stay; she went, so she must have refused him, causing him to attempt suicide.

  In addition to automatically inferring cause when it is only implied by a sequence, we also tend to remember a narrative better when we have to draw such inferences than when we don’t. Consider the following pairs of sentences, taken from a study by University of Denver psychologist Janice Keenan and her colleagues:12

  Joey’s big brother punched him again and again. The next day his body was covered by bruises.

  Joey’s crazy mother became furiously angry with him. The next day his body was covered by bruises.

  In the first case, no inference is needed—the cause of Joey’s bruising is stated explicitly in the first sentence. In the second case, the cause of the bruises is implied but not stated. For this reason, understanding the second pair of sentences turns out to be slightly harder (and takes slightly longer) than understanding the first. But what you’re doing as you read the sentences is crucial. To understand the second pair of sentences, you must make an extra logical inference that you don’t need in order to make sense of the first pair. And in making this inference, you form a richer and more elaborate memory for what you’ve read. Readers of the New Yorker story likely will remember the implied cause of Wallace’s early suicide attempt, even though it never was stated in the story itself. They will do so because they drew the inference themselves rather than having it handed to them.

  “Tell me a story,” children beg their parents. “And then what happened?” they ask if they hear a pause. Adults spend billions of dollars on movies, television, novels, short stories, works of biography and history, and other forms of narrative. One appeal of spectator sports is their chronology; every play, every shot, every home run is a new event in a story whose ending is in doubt. Teachers—and authors of books on science—are learning that stories are effective ways to grab and control an audience’s attention.13 But there is a paradox here: Stories—that is, sequences of events—are by themselves entertaining, but not directly useful. It’s hard to see why evolution would have designed our brains to prefer receiving facts in chronological order unless there was some other benefit to be gained from that type of presentation. Unlike a specific story, a general rule about what causes what can be extremely valuable. Knowing that your brother ate a piece of fruit with dark spots on it and then vomited encourages you to infer causation (by food poisoning), a piece of knowledge that can help you in a wide variety of future situations. So we may delight in narrative precisely because we compulsively assume causation when all we have is chronological order, and it’s the causation, not the sequence of events, that our brains are really designed to crave and use.

  In the next paragraph of his David Foster Wallace profile, D. T. Max tells us that after recovering from his suicide attempt, “Wallace had decided that writing was not worth the risk to his mental health. He applied and was accepted as a graduate student in philosophy at Harvard.” Again, the causation is implied: It was Wallace’s fear of depression and suicide that drove him—perhaps ironically—to graduate study in philosophy. But what are we to conclude about how he went about it? One possibility is that he applied to Harvard, and only to Harvard. A much more common practice is to apply to a wide variety of graduate programs and to see which ones admit you. Applying just to Harvard is the act of someone who is either expressing supreme confidence or setting himself up to fail (or both); applying broadly is the act of someone who just wants to pursue his interests at the best school he can get into. The different actions signal different personalities and approaches to life.

  It seems to us that Max is implying that Wallace applied only to Harvard, because if he had applied to other schools, that fact would have been relevant for our interpretation of Wallace’s behavior, so the author would have mentioned it. We automatically make the assumption, when reading statements like this one, that we have been given all of the information we need, and that the most straightforward causal interpretation is also the correct one. Max’s words don’t explicitly say that Wallace applied only to Harvard; they just lead us, without our awareness, into concluding that he did.

  The mind apparently prefers to make these extra leaps of logic over being explicitly told the reasons for everything. This may be one reason why the timeworn advice “show, don’t tell” is so valuable to creative writers seeking to make their prose more compelling. The illusion of narrative can indeed be a powerful tool for authors and speakers. By arranging purely factual statements in different orders, or by omitting or inserting relevant information, they can control what inferences their audiences will make, without explicitly arguing for and defending those inferences themselves. D. T. Max, whether deliberately or not, creates the impression that Wallace’s suicide attempt was precipitated by his sister’s possibly callous refusal to stay with him, and that Wallace chose to apply only to Harvard for graduate school. When you know about the contribution of narrative to the illusion of cause, you can read his words differently, and see that none of these conclusions are necessarily correct. (Tip: Listen carefully for when politicians and advertisers use this technique!)

  “I Want to Buy Your Rock”

  A conversation between Homer and Lisa in an episode of The Simpsons provides one of the best illustrations of the dangers of turning a temporal association into a causal explanation.14 After a bear is spotted in Springfield, the town initiates an official Bear Patrol, complete with helicopters and trucks with sirens, to make sure no bears are in town.

  HOMER: Ahhh … not a bear in sight. The bear patrol must be working like a charm.

  LISA: That’s specious reasoning, Dad.

  HOMER: Thank you, honey.

  LISA (picking up a rock from the ground): By your logic, I could claim that this rock keeps tigers away.

  HOMER: Ooooh … how does it work?

  LISA: It doesn’t work—it’s just a stupid rock. But I don’t see any tigers around here, do you?

  HOMER: Lisa, I want to buy your rock.

  Homer assumes that the bear patrol kept away bears, but it really did nothing at all—the first bear sighting was an anomaly that would not have recurred in any case. The scene is funny because the causal relationship is so outlandish. Rocks don’t keep tigers away, but Homer draws the inference anyway because the chronology of events induced an illusion of cause. In other cases, when the causal relationship seems plausible, people naturally accept it rather than think about alternatives, and the consequences can be much greater than overpaying for an anti-tiger rock.

  In April 2009, the Supreme Court of the United States heard oral arguments in the case of Northwest Austin Municipal Utility District No. 1 v. Holder. At issue was the Voting Rights Act, one of the federal civil rights laws enacted during the 1960s. Among other things, the law sought to prevent political jurisdictions (utility districts, cities, school boards, counties, etc.) in southern states from drawing boundaries and setting up election rules so as to favor the interests of white over black voters. Section 5 of the law required these states to obtain “preclearance” from the federal government before changing any election procedures. The Texas utility district argued that since the law imposed these requirements only on some of the states in the union (mostly those that had been—a hundred years earlier—part of the Confederacy), it unconstitutionally discriminated against them.

  Chief Justice John Roberts asked Neal Katyal, the government’s lawyer, about the import of the fact that just one out of every two thousand applications for an election rule change is rejected. Katyal answered, “I think what that represents is that Section 5 is actually working very well; that it provides a deterrent.” Roberts might have had the bear patrol episode in the back of his mind when he replied: “Well, that’s like the old elephant whistle—you know, I have this whistle to keep away the elephants. You know, well, that’s silly. Well, there are no elephants, so it must work.”15

  Roberts’s point, though he expressed it in the language of The Simpsons rather than that of cognitive psychology, is that the illusion of cause can make us assume that one event (the passage of the law) caused another event (the virtual end of discriminatory election rules), when the available data don’t logically establish such a relationship. The fact that the government grants preclearance almost every time says nothing about whether the law caused compliance. Something other than the law—such as a gradual reduction of racism, or at least overtly racist practices, over time—might have caused the change.

  We are taking no position on whether this part of the Voting Rights Act is necessary today; it may be or it may not be. But this is precisely the point: We have no way to know how useful the law is if the only information we have is that virtually nobody is violating it. It’s possible that the covered jurisdictions would behave consistently with its proscriptions even if it were no longer on the books.

  The problem illustrated by the arguments over the Voting Rights Act is endemic in public policy. How many laws are passed, renewed, or repealed on the basis of a truly causal understanding of their effects on behavior? We often speak of the clichéd danger of unintended consequences, but we rarely think about how little we can actually say about the intended consequences of government action. We know what was happening before the law or regulation went into effect, and we may know that something different happened afterward, but that alone does not prove that the law caused the difference. The only way to measure the causal effect of a law would be to conduct an experiment. In the case of the Voting Rights Act, the closest one could come would be to repeal Section 5 for a randomly selected group of jurisdictions and compare those with the rest over time, examining how many discriminatory electoral rules are enacted in each case. If the rate of discrimination differs between the two groups, then we could infer that the law has an effect.16 Of course, the law might still violate the Constitution, but there are some questions that even clever experimentation and data analysis can’t answer!
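  To make the logic of that hypothetical experiment concrete, here is a minimal simulation sketch in Python. Everything in it is assumed for illustration: the number of jurisdictions, the baseline rate of discriminatory rules, and the size of the deterrent effect are invented values, not data from the case or from this book.

```python
import random

random.seed(42)  # fixed seed so the illustration is reproducible

# All numbers here are invented for illustration; they come from neither
# the book nor any real voting-rights data.
N = 10_000                 # hypothetical number of covered jurisdictions
BASE_RATE = 0.002          # chance of enacting a discriminatory rule anyway
DETERRENCE_EFFECT = 0.004  # extra chance if Section 5 no longer applies

# Randomly assign half the jurisdictions to the "repealed" condition.
ids = list(range(N))
random.shuffle(ids)
repealed = set(ids[: N // 2])

def enacts_discriminatory_rule(j: int) -> bool:
    """Simulate one jurisdiction's behavior under its assigned condition."""
    p = BASE_RATE + (DETERRENCE_EFFECT if j in repealed else 0.0)
    return random.random() < p

outcomes = {j: enacts_discriminatory_rule(j) for j in ids}
repealed_rate = sum(outcomes[j] for j in repealed) / (N // 2)
covered_rate = sum(outcomes[j] for j in ids if j not in repealed) / (N - N // 2)

print(f"rate with Section 5 repealed: {repealed_rate:.4f}")
print(f"rate with Section 5 in force: {covered_rate:.4f}")
# Because assignment was random, a reliably higher rate in the repealed
# group would point to the law itself as the cause of compliance, rather
# than to some other factor such as declining racism.
```

  The random assignment is what does the work here: it makes the two groups alike in everything except the law, so any difference in outcomes can be attributed to the law rather than to one of the many other possible causes.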

  This tendency to neglect alternative paths to the same outcome in favor of a single narrative pervades many of the bestselling business books.17 Almost every report claiming to identify the key factors that lead companies to succeed, from In Search of Excellence to Good to Great, errs by considering only companies that succeeded and then analyzing what they did. They don’t look at whether other companies did those same things and failed. Malcolm Gladwell’s bestseller The Tipping Point describes the remarkable reversal of fortune for the maker of unfashionable Hush Puppies after their shoes suddenly became trendy. Gladwell argues that Hush Puppies succeeded because they were adopted by a trendy subculture, which made them appealing and generated buzz. And he’s right that Hush Puppies generated buzz. But the conclusion that the buzz caused their success follows only from a retrospective narrative bias and not from an experiment. In fact, it’s not even clear that there’s an association between buzz and success in the data. To establish even a noncausal association we would need to know how many other similar companies took off without first generating a buzz, and how many other companies generated similar buzz but remained grounded. Only then could we start worrying about whether the buzz caused the success—or whether the causation really ran in the other direction (success leading to buzz), or even in both directions simultaneously (a virtuous cycle).
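  To see what the missing base rates would buy us, consider a small illustrative calculation. All four counts below are invented for the sake of the example; none come from Gladwell or from real market data.

```python
# Hypothetical 2x2 tally of companies, with every count invented for
# illustration. A retrospective story supplies only the top-left cell;
# testing for an association requires all four.
buzz_success = 20       # buzzed and took off (e.g., the Hush Puppies story)
buzz_failure = 180      # buzzed but remained grounded
no_buzz_success = 50    # took off without first generating buzz
no_buzz_failure = 750   # neither buzz nor success

p_success_given_buzz = buzz_success / (buzz_success + buzz_failure)
p_success_given_no_buzz = no_buzz_success / (no_buzz_success + no_buzz_failure)

print(f"P(success | buzz)    = {p_success_given_buzz:.3f}")
print(f"P(success | no buzz) = {p_success_given_no_buzz:.3f}")
# Comparing these two rates tests for a (noncausal) association. Even if
# they differ, the causation could still run from success to buzz, or in
# both directions at once.
```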

  There is one final pitfall inherent in turning chronology into causality. Because we perceive sequences of events as part of a timeline, with one leading to the next, it is hard to see that there are almost always many interrelated reasons or causes for a single outcome. The sequential nature of time leads people to act as though a complex decision or event must have only a single cause. We make fun of the enthusiasts of conspiracy theories for thinking this way, but they are just operating under a more extreme form of the illusion of cause that affects us all. Here are some statements made by Chris Matthews, host of the MSNBC news program Hardball, about the origins of the 2003 U.S. invasion of Iraq:

  “What is the motive for this war?” (February 4, 2003)

  “I wanted to know whether 9/11 is the reason, because a lot of people think it’s payback.” (February 6, 2003)

  “Do you believe the weapons of mass destruction was the reason for this war?” (October 24, 2003)

  “… the reason we went to war with Iraq was not to make a better Iraq. It was to kill the bad guys.” (October 31, 2003)

  “President Bush says he wants democracy to spread throughout the Middle East. Was that the real reason behind the war in Iraq?” (November 7, 2003)

  “Why do you think we went to Iraq? The real reason, not the sales pitch.” (October 9, 2006)

  “Their reason for this war, which they don’t regret, was never the reason they used to sell us on the war.” (January 29, 2009)

  Each of these statements presupposes that the war must have had a single motive, reason, or cause. In the mind of a decision maker (or perhaps a “decider,” in this case), there might seem to be just one reason for a decision. But of course nearly every complex decision has multiple, complex causes. In this case, even as he searched for the one true reason, Matthews identified a wide variety of possibilities: weapons of mass destruction, Iraq’s support of terrorism, Saddam Hussein’s despotism, and the strategic goal of establishing democracy in Arab countries, to name only the most prominent. And they all arose against the backdrop of a new post-9/11 sensitivity to the possibility of enemies launching attacks on the U.S. homeland. Had one or some of these preconditions not been in place, the war might not have been launched. But it is not possible to isolate just one of them after the fact and say it was the reason for the invasion.18

  This kind of faulty reasoning about cause and effect is just as common in business as in politics. Sherry Lansing, long described as the most powerful woman in Hollywood, was CEO of Paramount Pictures from 1992 to 2004. She oversaw megahits like Forrest Gump and Titanic, and films from her studio received three Academy Awards for Best Picture. According to an article in the Los Angeles Times, after a series of failed projects and declines in Paramount’s share of box-office revenues, Lansing’s contract was not renewed. She resigned a year early, and it was widely believed that she had effectively been fired for poor performance. But just as the hits weren’t due solely to her genius, the duds couldn’t have been due solely to her screwups—hundreds of other people have creative influence on each movie, and hundreds of factors determine whether a movie captures the imagination (and cash) of audiences.

  Lansing’s successor, Brad Grey, was lauded for turning the studio around; two of the first films released under his leadership, War of the Worlds and The Longest Yard, were top grossers in 2005. However, both movies were conceived and produced during Lansing’s tenure. If she had just hung on for a few more months, she would have received the credit and might have remained in charge.19 There’s no doubt that a CEO is officially responsible for the performance of her company, but attributing all of the company’s successes or failures to the one person at the top is a classic illustration of the illusion of cause.

  The Vaccination Hypothesis

  Let’s return to the story that began this chapter, about the six-year-old girl who contracted measles at a church meeting in Indiana after an unvaccinated missionary returned from Romania and spread the disease. We asked why parents would forgo a vaccine that helped to eliminate a serious and extremely contagious childhood disease. Now that we have discussed the three biases underlying the illusion of cause—overzealous pattern detection mechanisms, the unjustified leap from correlation to causation, and the inherent appeal of chronological narratives—we can begin to explain why some people voluntarily choose not to vaccinate their children against measles. The answer is that these parents, the media, some high-profile celebrities, and even some doctors have fallen prey to the illusion of cause. More precisely, they perceive a pattern where none actually exists and mistake a coincidence of timing for a causal relationship.

 
