
The Invisible Gorilla: And Other Ways Our Intuitions Deceive Us


by Christopher Chabris and Daniel Simons


  These examples represent just the tip of the iceberg that is the mind’s hyperactive tendency to spot patterns. Even trained professionals are biased to see patterns they expect to see and not ones that seem inconsistent with their beliefs. Recall Brian Hunter, the hedge fund manager who lost it all (more than once) by betting on the future price of natural gas. He thought he understood the reasons for the movements of the energy markets, and his inference of a causal pattern in the markets led to his company’s downfall. When pattern recognition works well, we can find the face of our lost child in the middle of a huge crowd at the mall. When it works too well, we spot deities in pastries, trends in stock prices, and other relationships that aren’t really there or don’t mean what we think they do.

  Causes and Symptoms

  Unlike the parade of unusual patients appearing on television dramas like Grey’s Anatomy and House, or coming to Dr. Keating’s St. Louis diagnostic clinic, the vast majority of the patients whom doctors see on a daily basis have run-of-the-mill problems. Experts quickly recognize common sets of symptoms; they’re sensitized to the most probable diagnoses, learning quite reasonably to expect to encounter the common cold more often than an exotic Asian flu, and ordinary sadness more often than clinical depression.

  Intuitively, most people think that experts consider more alternatives and more possible diagnoses rather than fewer. Yet the mark of true expertise is not the ability to consider more options, but the ability to filter out irrelevant ones. Imagine that a child arrives in the emergency room wheezing and short of breath. The most likely explanation might be asthma, in which case treating with a bronchodilator like albuterol should fix the problem. Of course, it’s also possible that the wheezing is caused by something the child swallowed that became lodged in his throat. Such a foreign body could cause all sorts of other symptoms, including secondary infections. On shows like House, that rare explanation would of course turn out to be the cause of the child’s symptoms. In reality, though, asthma or pneumonia is a far more likely explanation. An expert doctor, having seen many patients with asthma, recognizes the pattern and makes a quick and almost always accurate diagnosis. Unless your job is like Dr. Keating’s, and you know that you’re dealing with exceptional cases, focusing too much on the rare causes would be counterproductive. Expert doctors consider first those few diagnoses that are the most probable explanations for a pattern of symptoms.

  Experts are, in a sense, primed to see patterns that fit their well-established expectations, but perceiving the world through a lens of expectations, however reasonable, can backfire. Just as people counting basketball passes often fail to notice an unexpected gorilla, experts can miss a “gorilla” if it is an unusual, unexpected, or rare underlying cause of a pattern. This can be an issue when doctors move from practicing in hospitals during their residencies and fellowships to practicing privately, especially if they go into family practice or internal medicine in a more suburban area. The frequencies of diseases doctors encounter in urban teaching hospitals differ greatly from those in suburban medical offices, so doctors must retune their pattern recognizers to the new environment in order to maintain an expert level of diagnostic skill.

  Expectations can cause anyone to sometimes see things that don’t exist. Chris’s mother has suffered from arthritis pain in her hands and knees for several years, and she feels that her joints hurt more on days when it is cold and raining. She’s not alone. A 1972 study found that 80–90 percent of arthritis patients reported greater pain when the temperature went down, the barometric pressure went down, and the humidity went up—in other words, when a cold rain was on the way. Medical textbooks used to devote entire chapters to the relationship between weather and arthritis. Some experts have even advised chronic pain patients to move across the country to warmer, drier areas. But does the weather actually exacerbate arthritis pain?

  Researchers Donald Redelmeier, a medical doctor, and Amos Tversky, a cognitive psychologist, tracked eighteen arthritis patients over fifteen months, asking them to rate their pain level twice each month. Then they matched these data up with local weather reports from the same time period. All but one of the patients believed that weather changes had affected their pain levels. But when Redelmeier and Tversky mapped the reports of pain to the weather the same day, or the day before, or two days before, there was no association at all. Despite the strong beliefs of the subjects who participated in their experiment, changes in the weather were entirely unrelated to reports of pain.

  Chris told his mother about this study. She said she was sure it was right, but she still felt what she felt. It’s not surprising that pain doesn’t necessarily respond to statistics. So why do arthritis sufferers believe in a pattern that doesn’t exist? What would lead people to think there was an association even when the weather was completely unpredictive? Redelmeier and Tversky conducted a second experiment. They recruited undergraduates for a study and showed them pairs of numbers, one giving a patient’s pain level and the other giving the barometric pressure for that day. Keep in mind that in actuality, pain and weather conditions are unrelated—knowing the barometric pressure is of no use in predicting how much pain a patient experienced that day, because pain is just as likely when it’s warm and sunny as when it’s cold and rainy. In the fabricated experimental data, there was likewise no relationship. Yet just like the actual patients, more than half of the undergraduates thought there was a link between the weather and arthritis pain in the data set. In one case, 87 percent saw a positive relationship.

  Through a process of “selective matching,” the subjects in this experiment focused on patterns that existed only in subsets of the data, such as a few days when low pressure and pain happened to coincide, and neglected the rest. Arthritis sufferers likely do the same: They remember those days when arthritis pain coincided with cold, rainy weather better than those days when they had pain but it was warm and sunny, and much better than pain-free days, which don’t stand out in memory at all. Putative links between the weather and symptoms are part of our everyday language; we speak of “feeling under the weather” and we think that wearing hats in winter lessens our chances of “catching a cold.” The subjects and the patients perceived an association where none existed because they interpreted the weather and pain data in a way that was consistent with their preexisting beliefs. In essence, they saw the gorilla they expected to see even when it was nowhere in sight.7
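
  The arithmetic behind selective matching is easy to see in a small simulation. The sketch below is purely illustrative (it is not Redelmeier and Tversky’s procedure, and every number in it is invented): it generates pain ratings and barometric pressures that are statistically independent, confirms that the full-sample correlation is near zero, and then counts the handful of “confirming” days that chance alone supplies for a selective matcher to remember.

```python
import random

random.seed(42)

# Invented data: 200 days of pain ratings (0-10) and barometric
# pressures (around 1013 hPa), generated independently, so there is
# no real relationship between them by construction.
days = 200
pain = [random.uniform(0, 10) for _ in range(days)]
pressure = [random.gauss(1013, 8) for _ in range(days)]

def correlation(xs, ys):
    """Pearson correlation coefficient, computed from scratch."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Across all 200 days, the correlation hovers near zero.
print(f"full-sample correlation: {correlation(pain, pressure):+.3f}")

# Selective matching: count only the memorable days, when low pressure
# and high pain happened to coincide, and ignore everything else.
confirming = sum(1 for p, b in zip(pain, pressure) if p > 7 and b < 1008)
print(f"days that seem to confirm the weather-pain link: {confirming}")
```

  Chance alone produces a steady trickle of such coincidences; a memory that files away only the hits experiences them as a pattern.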

  Beware of Belief Becoming “Because”

  Many introductory psychology textbooks ask students to think about possible reasons why ice cream consumption should be positively associated with drowning rates. More people drown on days when a lot of ice cream is consumed, and fewer people drown on days when only a little ice cream is consumed. Eating ice cream presumably doesn’t cause drowning, and news of drownings shouldn’t inspire people to eat ice cream. Rather, a third factor—the summer heat—likely causes both. Less ice cream is consumed in winter, and fewer people drown then because fewer people go swimming.8
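
  The textbook example is simple enough to simulate. In the sketch below, every quantity is invented for illustration: a hidden third factor, daily temperature, drives both ice cream sales and the number of swimmers; drownings depend only on swimmers; and ice cream never influences drowning at all. The two still come out strongly correlated.

```python
import random

random.seed(1)

# The hidden third factor: daily temperature (degrees Celsius, invented).
days = 365
temps = [random.gauss(15, 10) for _ in range(days)]

# Temperature drives both ice cream sales and swimming (plus noise);
# drownings depend only on how many people are swimming. There is no
# causal arrow between ice cream and drowning anywhere in this model.
ice_cream = [max(0.0, 10 * t + random.gauss(0, 30)) for t in temps]
swimmers = [max(0.0, 50 * t + random.gauss(0, 150)) for t in temps]
drownings = [s / 500 + random.random() for s in swimmers]

def correlation(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# A strong positive correlation appears despite zero direct causation.
print(f"ice cream vs. drownings: r = {correlation(ice_cream, drownings):+.2f}")
```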

  This example draws attention to the second major bias underlying the illusion of cause—when two events tend to happen together, we infer that one must have caused the other. Textbooks use the ice cream–drowning correlation precisely because it’s hard to see how either one could cause the other, but easy to see how a third, unmentioned factor could cause both. Unfortunately, seeing through the illusion of cause is rarely so simple in the real world.

  Most conspiracy theories are based on detecting patterns in events that, when viewed with the theory in mind, seem to help us understand why they happened. In essence, conspiracy theories infer cause from coincidence. The more you believe the theory, the more likely you are to fall prey to the illusion of cause.

  Conspiracy theories result from a pattern perception mechanism gone awry—they are cognitive versions of the Virgin Mary Grilled Cheese. Those conspiracy theorists who already believed that President Bush would stage 9/11 to justify a preconceived plan to invade Iraq were quick to see his false memory of seeing the first plane hit the towers as evidence that he knew about the attack in advance. People who already thought that Hillary Clinton would say anything to get elected were quick to jump on her false memory of Bosnian snipers as evidence that she was lying to benefit her campaign. In both cases, people used their understanding of the person to fit the event into a pattern. They inferred an underlying cause, and they were so confident that they had the right cause that they failed to notice more plausible alternative explanations.

  Illustrations of this illusion of cause are so pervasive that undergraduates in our research methods classes have no problem completing our assignment to find a recent media report that mistakenly infers a causal relationship from a mere association. One BBC article, provocatively titled “Sex Keeps You Young,” reported a study by Dr. David Weeks of the Royal Edinburgh Hospital showing that “couples who have sex at least three times a week look more than 10 years younger than the average adult who makes love twice a week.”9 The caption to an attached photo read, “Regular sex ‘can take years off your looks.’” Although having sex could somehow cause a youthful appearance, it is at least as plausible that having a youthful appearance leads to more sexual encounters, or that a youthful appearance is a sign of physical fitness, which makes frequent sex easier, or that people who appear more youthful are more likely to maintain an ongoing sexual relationship, or … the possible explanations are endless. The statistical association between youthful appearance and sexual activity does not imply that one causes the other. Had the title been phrased in the opposite way, “Looking Young Gets You More Sex,” it would have been equally conclusory, but less surprising and therefore less newsworthy.

  Of course, some correlations are more likely to reflect an actual causal relationship than others. Higher summer temperatures are more likely to cause people to eat ice cream than are reports of drownings. Statisticians and social scientists have developed clever ways to gather and analyze correlational data that increase the odds of finding a true causal effect. But the only way—let us repeat, the only way—to definitively test whether an association is causal is to run an experiment. Without an experiment, observing an association may just be the scientific equivalent of noticing a coincidence. Many medical studies adopt an epidemiological approach, measuring rates of illness and comparing them among groups of people or among societies. For example, an epidemiological study might measure and compare the overall health of people who eat lots of vegetables with that of people who eat few vegetables. Such a study could show that people who eat vegetables throughout their lives tend to be healthier than those who don’t. This study would provide scientific evidence for an association between vegetable-eating and health, but it would not support a claim that eating vegetables causes health (or that being healthy causes people to eat vegetables, for that matter). Both vegetable-eating and health could be caused by a third factor—for instance, wealth may enable people to afford both tasty, fresh produce and superior health care. Epidemiological studies are not experiments, but in many cases—such as smoking and lung cancer in humans—they are the best way to determine whether two factors are associated, and therefore have at least a potential causal connection.

  Unlike an observed association, though, an experiment systematically varies one factor, known as the independent variable, to see its effect on another factor, the dependent variable. For example, if you were interested in learning whether people are better able to focus on a difficult task when listening to background music than when sitting in silence, you would randomly assign some people to listen to music and others to work in silence, and you would measure how well they do on some cognitive test. You have introduced a cause (listening to music or not listening to music) and then observed an effect (differences in performance on the cognitive test). Just measuring two factors and showing that they co-occur does not imply that one causes the other. That is, if you just measure whether people listen to music and then measure how they do on cognitive tasks, you cannot demonstrate a causal link between music listening and cognitive performance. Why not?
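
  Before turning to that question, here is a minimal sketch of the randomized design just described. Everything in it is hypothetical: the sample size, the scores, and the assumed three-point benefit of silence are invented purely to show the mechanics of random assignment.

```python
import random

random.seed(7)

# Hypothetical experiment: 100 people, randomly assigned to work with
# background music or in silence, then given a cognitive test.
n = 100
assignments = ["music"] * (n // 2) + ["silence"] * (n // 2)
random.shuffle(assignments)  # the random assignment step

def test_score(condition):
    # Invented scoring model: a noisy baseline, plus an assumed
    # three-point benefit for working in silence.
    return random.gauss(70, 10) + (3 if condition == "silence" else 0)

scores = {"music": [], "silence": []}
for condition in assignments:
    scores[condition].append(test_score(condition))

mean = lambda xs: sum(xs) / len(xs)
print(f"music:   {mean(scores['music']):.1f}")
print(f"silence: {mean(scores['silence']):.1f}")
# Because assignment was random, the groups differ systematically only
# in the independent variable (music vs. silence), so a reliable gap in
# the dependent variable (test score) can be attributed to it.
```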

  Paradoxically, properly inferring causation depends on an element of randomness. Each person must be assigned randomly to one of the two groups—otherwise, any differences between the groups could be due to other systematic biases. Let’s say you just asked people to report whether they listen to music while working and you found that people who worked in silence tended to be more productive. Many factors could cause this difference. Perhaps people who are better educated prefer working in silence, or perhaps people with attention deficits are more likely to listen to music.
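
  The sketch below extends the previous one to show what goes wrong without random assignment. The hidden trait here is invented for illustration: people with attention difficulties are both more likely to choose music and likely to score lower, while music itself has no effect at all.

```python
import random

random.seed(7)

# Observational version: people self-select into music or silence.
# A hidden trait (attention difficulties) drives BOTH the choice of
# music and lower test scores; music itself does nothing in this model.
def simulate_person():
    distractible = random.random() < 0.5
    listens_to_music = random.random() < (0.8 if distractible else 0.2)
    score = random.gauss(70, 10) - (8 if distractible else 0)
    return listens_to_music, score

people = [simulate_person() for _ in range(1000)]
music_scores = [s for m, s in people if m]
silence_scores = [s for m, s in people if not m]

mean = lambda xs: sum(xs) / len(xs)
print(f"music listeners: {mean(music_scores):.1f}")    # lower on average
print(f"silence workers: {mean(silence_scores):.1f}")  # higher on average
# The gap is real, but it reflects the hidden trait, not any effect of
# music -- exactly the kind of confound random assignment rules out.
```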

  A standard principle taught in introductory psychology classes is that correlation does not imply causation. This principle needs to be taught because it runs counter to the illusion of cause. It is particularly hard to internalize, and knowing the principle in the abstract does little to immunize us against the error. Fortunately, we have a simple trick to help you spot the illusion in action: When you hear or read about an association between two factors, think about whether people could have been assigned randomly to conditions for one of them. If it would have been impossible, too expensive, or ethically dubious to randomly assign people to those groups, then the study could not have been an experiment and the causal inference is not supported. To illustrate this idea, here are some examples taken from actual news headlines:10

  “Drop That BlackBerry! Multitasking May Be Harmful”—Could researchers randomly assign some people to lead a multitasking, BlackBerry-addicted life and others to just focus on one thing at a time all day long? Probably not. The study actually used a questionnaire to find people who already tended to watch TV, text-message, and use their computers simultaneously, and compared them with people who tended to do just one of these things at a time. Then the researchers gave a set of cognitive tests to both groups and found that the multitaskers did worse on some of the tests. The original article describes the study’s method clearly, but the headline added an unwarranted causal interpretation. It’s also possible that people who do badly on the cognitive tests think they can multitask just fine, and therefore tend to do it more than they should.

  “Bullying Harms Kids’ Mental Health”—Could a researcher randomly assign some kids to be bullied and others not to be bullied? No—not ethically, anyway. So the study must have measured an association between being bullied and suffering mental health problems. The causal relationship could well be reversed—children who have mental health issues might be more likely to get bullied. Or some other factors, perhaps in their family background, could cause them both to be bullied and to have mental health issues.

  “Does Your Neighborhood Cause Schizophrenia?”—This study showed that rates of schizophrenia were greater in some neighborhoods than others. Could the researchers have randomly assigned people to live in different neighborhoods? In our experience people generally like to participate in psychology experiments, but requiring them to pack up and move might be asking too much.

  “Housework Cuts Breast Cancer Risk”—We doubt experimenters would have much luck randomly assigning some women to a “more housework” condition and others to a “less housework” condition (though some of the subjects might be happy with their luck).

  “Sexual Lyrics Prompt Teens to Have Sex”—Were some teens randomly assigned to listen to sexually explicit lyrics and others to listen to more innocuous lyrics, and then observed to see how much sex they had? Perhaps an adventurous experimenter could do this in the lab, but that’s not what these researchers did. And it’s doubtful that exposing teens to the music of Eminem and Prince in a lab would cause a measurable change in their sexual behavior even if such an experiment were conducted.

  Once you apply this trick, you can see the humor in most of these misleading headlines. In most of these cases, the researchers likely knew the limits of their studies, understood that correlation does not imply causation, and used the right logic and terminology in their scientific papers. But when their research was “translated” for popular consumption, the illusion of cause took over and these subtleties were lost. News reporting often gets the causation wrong in an attempt to make the claim more interesting or the narrative more convincing. It’s far less exciting to say that those teens who listen to sexually explicit lyrics also happen to have sex at earlier ages. That more precise phrasing leaves open the plausible alternatives—that having sex or being interested in sex makes teens more receptive to sexually explicit lyrics, or that some other factor contributes to both sexual precocity and a preference for sexually explicit lyrics.

  And Then What Happened?

  The illusory perception of causes from correlations is closely tied to the appeal of stories. When we hear that teens are listening to sexually explicit music or playing violent games, we expect there to be consequences, and when we hear that those same teens are subsequently more likely to have sex or to be violent, we perceive a causal link. We immediately believe we understand how these behaviors are causally linked, but our understanding is based on a logical fallacy. The third major mechanism driving the illusion of cause comes from the way in which we interpret narratives. In chronologies or mere sequences of happenings, we assume that the earlier events must have caused the later ones.

  David Foster Wallace, the celebrated author of the novel Infinite Jest, committed suicide by hanging himself in the late summer of 2008. Like many famous creative writers, he suffered for a long time from depression and substance abuse, and he had attempted suicide before. Wallace was something of a literary prodigy, publishing his first novel, The Broom of the System, at the age of twenty-five while he was still studying for his master of fine arts (MFA) degree. The book was praised by the New York Times, but received mixed reviews elsewhere. Wallace worked on a follow-up short story collection, but could not help feeling like a failure. His mother brought him back to live at home. According to a profile in the New Yorker by D. T. Max,11 things went downhill quickly:

 
