This doesn’t mean that people seek out surprises. If you are lucky enough to find yourself in a nice environment where you know what to expect, deviations from these expectations—surprises—are likely to be bad news (hence the fake Chinese curse, “May you live in interesting times”). However, even if you don’t like surprises, when the environment has something surprising in store for you, you’re better off learning about it as early as possible. Being fired is a very bad surprise; it is better to be surprised by indications that you might be fired before it actually happens. As a result, people favor sources of surprising information over sources of repetitive information (provided they are equally reliable): they buy the newspaper with the proverbial “Man Bites Dog” headline, not “Dog Bites Man.”
If reason is an instrument of individual cognition aimed at better beliefs and decisions, it should be biased toward information that violates our expectations. Reason should look for counterarguments to our generalizations, reasons to question our decisions, memories that clash with our current beliefs. Failing to pay attention to negative evidence and arguments is generally more costly than failing to pay attention to their positive counterparts.
This insight is so essential that it is one of the few themes that we find again and again in scholars’ admonitions about the best way to reach sound beliefs. In the thirteenth century, Robert Grosseteste suggested that to eliminate faulty hypotheses, one should take a hypothesis and see if anything follows from it that is illogical or contrary to fact.19 Four centuries later, Francis Bacon overturned the scholastic tradition to which Grosseteste belonged by putting observation at the center of his philosophy. Still, we again find the idea of looking for counterexamples. Observation should not be the “childish” enumeration of cases that fit the researcher’s hypothesis. Scholars are to look specifically for instances that can prove their hypothesis wrong: one must “analyse nature by proper rejection and exclusion.”20
Another three centuries later, it is Karl Popper’s turn to challenge dominant views of the day, distant descendants of Bacon’s philosophy. Popper does so by stressing more than ever before the importance of falsification. For Popper, what demarcates science from other forms of knowledge is not that scientific theories are verifiable but that they are falsifiable: in principle, a single observation of an event that, according to a given theory, could never occur would falsify the theory. In practice, of course, as Popper well recognized, a single observation is not enough to cause scientists to abandon a theory—after all, the observation itself might be mistaken. Still, when falsifying observations are numerous and reliable enough, the theory must at least be revised if not abandoned. Scientists produce theories that are at risk of being falsified. They improve their theories by looking for falsifying evidence, by rejecting falsified theories, and by holding on only to theories that have withstood repeated attempts at falsifying them.
For all their differences, these scholars agree that counterexamples and other violations of expectations, by allowing us to discard misguided beliefs, play a crucial role in the accumulation of knowledge. As Popper put it, “In searching for the truth, it may be our best plan to start by criticizing our most cherished beliefs.”21
The Confirmation Bias
Peter Wason devised the clever experimental tasks that were to have such an impact on the psychology of reasoning while working at University College London, within walking distance of the London School of Economics, where Popper was teaching. The proximity was more than geographic. Wason extended Popper’s insight about scientific theories to everyday reasoning, asking: Do people rely on falsification to arrive at better beliefs?
Figure 17. The standard Wason selection task.
The selection task, which was discussed in Chapter 2 (see Figure 17 for a reminder), was precisely meant to simulate aspects of scientific thinking. Participants, Wason thought, must select the cards that provide the right kind of evidence to test the rule, just as scientists must identify the right kind of evidence to test their hypotheses. As we saw, the right selection in this example is the card with the vowel E and the card with the odd number 7—these are the only two cards that might falsify the rule (if there is an odd number on the other side of the E, or a vowel on the other side of the 7). Most participants, however, fail to select the 7 card. Many select the card with the even number 2, presumably to see whether it has a vowel on the other side. But since the rule does not say that it should, this card is irrelevant to deciding whether the rule is true or false.
On the basis of this and other experiments, Wason suggested that participants suffered from a bias that would later be called “confirmation bias”:22 “The results do suggest … that even intelligent adults do not readily adopt a scientific attitude to a novel problem. They adhere to their own explanation with remarkable tenacity when they can produce confirming evidence for them.”23
Actually, the experiment that convinced so many people that there is a confirmation bias is not such a straightforward example of this bias. Wason and many of his followers made a logical mistake in claiming that participants try to “verify” or “confirm” the rule instead of falsifying it: it is, in fact, exactly the same cards that can falsify the rule if it is false or verify it if it is true. In the preceding example, the E and the 7 cards could each, when turned over, reveal that the rule is false. If neither falsifies the rule, then these same two cards jointly prove that the rule is true. The quite common selection of the card with the even number 2 not only fails to falsify the rule, it also fails to confirm it. Selecting it is not a confirmation strategy at all. So people who succeed on the task do not show a greater disposition to falsify than to confirm, and people who fail do not show a greater disposition to confirm than to falsify.
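To make the logic concrete, here is a minimal sketch in Python (our illustration, not anything drawn from Wason’s experiments) that simply enumerates which cards could, given some possible hidden side, falsify the rule. It shows that the same two cards do double duty: if neither falsifies the rule when turned over, they jointly verify it.

```python
# Illustrative sketch of the standard Wason selection task.
# Rule under test: "If a card has a vowel on one side, it has an even number on the other."
# Visible faces: E, K, 2, 7. A card is worth turning only if some possible hidden side
# would make it violate the rule.

VOWELS = set("AEIOU")

def violates(letter, number):
    # A letter/number pairing violates the rule iff the letter is a vowel and the number is odd.
    return letter in VOWELS and number % 2 == 1

letter_cards = ["E", "K"]   # cards showing a letter (number hidden)
number_cards = [2, 7]       # cards showing a number (letter hidden)

informative = []
for letter in letter_cards:
    # Could some hidden number make this letter card falsify the rule?
    if any(violates(letter, n) for n in range(10)):
        informative.append(letter)
for number in number_cards:
    # Could some hidden letter make this number card falsify the rule?
    if any(violates(l, number) for l in "ABCDEFGHIJKLMNOPQRSTUVWXYZ"):
        informative.append(number)

print(informative)  # ['E', 7]: only these cards can falsify the rule;
                    # if neither does, the same two cards establish that it holds.
```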
If the choice of any given card cannot reveal whether people have a confirmation bias, the way they choose their cards can. People’s selection of cards, we saw, is rapidly determined not by reason but by intuitions about the relevance of the different cards. After this initial, intuitive reaction, participants tend to reason long and hard about their choice. However, they do not reason as much about each of the four cards. By tracking participants’ gaze, researchers established that participants spend all their time thinking just about the cards made relevant by the rule, the cards that they have already selected.24 Moreover, when participants are asked to think aloud, it becomes clear that they mostly think of reasons supporting their intuitive choice.25 Here’s the real confirmation bias: instead of finding reasons for and against each card, participants find plenty of reasons supporting their initial card choice, neglecting reasons to pick other cards or reasons not to pick the cards initially chosen.
We will soon argue that this confirmation bias is in fact best understood as a “myside bias,” but let’s first look at a small sample of the rich evidence demonstrating its existence.26
Deanna Kuhn, a pioneering scholar of argumentation and cognition, asked participants to take a stand on various social issues—unemployment, school failure, and recidivism. Once the participants had given their opinion, they were asked to justify it. Nearly all participants obliged, readily producing reasons to support their point of view. But when they were asked to produce counterarguments to their own view, only 14 percent were consistently able to do so, most drawing a blank instead.27
Ziva Kunda led participants to believe that extroverts are more likely to be successful than introverts. On a later memory task, these participants found it much easier to recall memories of their own extroverted rather than introverted behavior. Another group of participants was led to believe that it is introverts who are more likely to be successful than extroverts. They found it easier to recall memories of introverted rather than extroverted behavior. Both groups were simply seeking reasons to believe that they had the qualities that would make them successful.28
Charles Taber and Milton Lodge gave participants a variety of arguments on controversial issues, such as gun control or affirmative action, and asked them to list their thoughts relative to the arguments.29 They divided the participants into two groups: those with low and those with high knowledge of political issues. The low-knowledge group exhibited a solid confirmation bias: they listed twice as many thoughts supporting their side of the issue as thoughts going the other way. But knowledge did not protect the participants from bias. The participants in the high-knowledge group found so many thoughts supporting their favorite position that they gave none going the other way. Greater political knowledge only amplified their confirmation bias.
The list could go on for pages (indeed for chapters or books, even). Moreover, as the example of Pauling suggests, it is not only ordinary participants who fall prey to the confirmation bias. Being gifted, focused, motivated, or open minded is no protection against the confirmation bias.30 A small industry of experiments has busily demonstrated the prevalence and robustness of what is “perhaps the best known and most widely accepted notion of inferential error to come out of the literature on human reasoning.”31 As the journalist Jon Ronson quipped, “Ever since I learnt about confirmation bias I’ve started seeing it everywhere.”32
A Challenge for the Intellectualist Approach
Psychologists agree that the confirmation bias is prevalent. They also agree that it is a bad thing. The confirmation bias is “irrational,”33 and it “thwart[s] the ability of the individual to maximize utility.”34 It is the “bias most pivotal to ideological extremism and inter- and intragroup conflict.”35 Raymond Nickerson aptly summarizes the common view:
Most commentators, by far, have seen the confirmation bias as a human failing, a tendency that is at once pervasive and irrational. It is not difficult to make a case for this position. The bias can contribute to delusions of many sorts, to the development and survival of superstitions, and to a variety of undesirable states of mind, including paranoia and depression. It can be exploited to great advantage by seers, soothsayers, fortune tellers, and indeed anyone with an inclination to press unsubstantiated claims. One can also imagine it playing a significant role in the perpetuation of animosities and strife between people with conflicting views of the world.36
A damning assessment indeed. Moreover, this bad thing, far from being hidden in the recesses of human psychology, is well in view. It doesn’t take a very shrewd or cynical observer of human nature to realize that humans have a confirmation bias. Why did Bacon, for instance, take such care to warn against the dangers of the “childish” enumeration of instances? Because he was well aware of people’s tendency to confirm their beliefs:
The human understanding when it has once adopted an opinion … draws all things else to support and agree with it. And though there be a greater number and weight of instances to be found on the other side, yet these it either neglects and despises, or else by some distinction sets aside and rejects; in order that by this great and pernicious predetermination the authority of its former conclusions may remain inviolate.37
Bacon, writing more than two centuries before Darwin’s discovery of natural selection, could merely observe that people have a confirmation bias and that this bias hinders the acquisition of sound beliefs. For contemporary defenders of the intellectualist approach to reason who take it for granted that reason is an outcome of Darwinian natural selection, the very existence of the confirmation bias presents a radical challenge.
In particular, a defender of the intellectualist approach should accept the following three claims: (1) reason has a confirmation bias, (2) the confirmation bias makes it harder for reason to help the lone reasoner arrive at better beliefs and better decisions, and (3) the main function of reason is to arrive at better beliefs and better decisions. This makes as much sense as accepting the following three claims: (1) the elk’s antlers are enormous, (2) the size of these antlers makes it harder for the elk to avoid predators, and (3) the function of the enormous antlers is to help the elk avoid predators.
The only ways out of this conundrum that come to mind are nonstarters. A first way out would be to argue that the confirmation bias is an unavoidable feature of reason. For instance, the extra weight that the bones, muscles, and feathers of avian wings add to the body of a bird makes it harder, not easier, for that body to fly. Yet since neither weightless wings nor wingless flight is an option, the weight of wings does not in itself raise a problem for evolutionary theory. But why would having a confirmation bias be necessary for reason to function at all?
A second escape route would be to claim that the confirmation bias serves a secondary function. Some features of an adaptation may hinder its main function but be well explained by the fact that they serve a secondary function. Razorbills, for instance, have wings that are too small relative to their body mass (a high “wing loading”) for optimal flight. This feature, however, is explained by the fact that their wings are also adapted for underwater propulsion.38 Unlike razorbills’ wing loading, the confirmation bias doesn’t just make reason’s alleged main function a bit harder to achieve; it works directly against it. Moreover, there is no particularly plausible secondary function of reason that would explain the confirmation bias the way underwater propulsion explains the high wing loading of razorbills. For the intellectualist approach, the confirmation bias should be a deep puzzle.
Can Intuitions Be Blamed for Confirmation Bias?
Supporters of the intellectualist view who are aware of how problematic the confirmation bias is still have one option: deny that the confirmation bias is a feature of reason proper, shifting the blame from reason to intuition. Indeed, when the dual process approach, briefly discussed in Chapter 2, became popular, one of its main attractions was that it seemed to help solve the enigma of reason: weaknesses of reasoning could now be blamed on type 1 intuitive processes, while type 2 reasoning proper could be exonerated. Intuitions, it was claimed, make mistakes that it is one function of reason to correct.39 Stanovich, for instance, in his 2004 book The Robot’s Rebellion, listed the confirmation bias among other alleged cognitive weaknesses of intuition.40 Similarly, Evans suggested that the confirmation bias resulted from a more general “bias to think about positive rather than negative information.”41
This solution—blaming the intuitions—doesn’t make much evolutionary sense. Mechanisms of intuitive inference guide our thoughts and actions; natural selection has honed some of these mechanisms for hundreds of millions of years. Our survival and reproduction very much depend on the quality of the information provided by intuitions. As we saw, some specific biases may on the whole be advantageous when they lower the costs of cognition or make particularly costly kinds of mistakes less likely. The confirmation bias carries none of these advantages.
Unsurprisingly, then, no confirmation bias emerges from studies of animal behavior. A mouse bent on confirming its belief that there are no cats around and a rat focusing its attention on breadcrumbs and ignoring other foods to confirm its belief that breadcrumbs are the best food would not pass on their genes to many descendants. Foraging behaviors adapt to changing environments. Animals abandon food patches as soon as they expect to find better elsewhere.42 Human intuitions are no worse than the inferences of other animals. Our ancestors passed on to us abilities for foraging and avoiding predators, as well as a great variety of other inferential devices that do not suffer from a confirmation bias.43
If anything, as we argued earlier, we should expect intuitions to be biased toward disconfirming, surprising information. An experiment conducted by psychologist Thomas Allen and his colleagues offers a nice demonstration of the respective biases of intuitions and reason.44 The participants were shown a picture of an individual and two statements describing his behavior, and they were then asked to form an impression of this individual. Some participants saw the picture of “a young adult Black male wearing a black headband and dark sunglasses”45 followed by two statements: “Swore at the salesgirl” and “Gave up his seat on the crowded subway to the elderly man.” If you are familiar with the stereotypes of young black males in the United States, you will have figured out that the first statement was designed to comport with most participants’ expectations, and the second to be surprising.
In order to test the role of intuition in impression formation, half the participants were stopped from using reason—by having to hold in mind a long string of digits, a task that monopolizes resources necessary for sustained reasoning. These participants, then, were guided by their intuitions. As we would have predicted, these participants paid more attention to the surprising statement. By contrast, participants who could reason paid more attention to the unsurprising statement. Intuitions aimed at gathering the most useful information while reasoning aimed at confirming the participants’ stereotypes.
The Myside Bias—and What Is It For?
So far, we have taken for granted that the bias described was a confirmation bias, a bias to confirm whatever view one happens to be entertaining. However, some experiments reveal clearly that this is not a good description of what reasoning does. For instance, we saw earlier that participants have trouble finding counterarguments to their favorite theories. But when participants are asked to reason about ideas they disagree with, they easily find counterarguments.46
What these results—and many others47—show is that people have no general preference for confirmation. What they find difficult is not looking for counterevidence or counterarguments in general, but doing so when the view being challenged is their own. Reasoning does not blindly confirm any belief it bears on. Instead, reasoning systematically works to find reasons for our ideas and against ideas we oppose. It always takes our side. As a result, it is preferable to speak of a myside bias rather than of a confirmation bias.48