The Enigma of Reason: A New Theory of Human Understanding


by Hugo Mercier and Dan Sperber


  We construct arguments when we are trying to convince others or, proactively, when we think we might have to. We evaluate the arguments given by others as a means—imperfect but uniquely useful all the same—of recognizing good ideas and rejecting bad ones. Being sometimes communicators, sometimes audience, we benefit both from producing arguments to present to others and from evaluating the arguments others present to us. Reasoning involves two capacities, that of producing arguments and that of evaluating them. These two capacities are mutually adapted and must have evolved together. Jointly they constitute, we claim, one of the two main functions of reason and the main function of reasoning: the argumentative function.21

  Main Functions, Secondary Functions, and Sea Turtles

  The central thesis of the book is that human reason has two main functions corresponding to two main challenges of human interaction: the attribution of reasons serves primarily a justificatory function, and reasoning serves primarily an argumentative function.

  Why do we say the two main functions rather than just the two functions? Frankly, out of prudence. We are not convinced that the reason module has any other function than these two, but we want to leave the possibility open.

  We have not denied, for instance, that attributing reasons to an individual may be done for explanatory rather than justificatory purposes. One may attribute reasons without assessing their justificatory value. This is, after all, what historians do when they seek to explain the actions of people from the past in a nonjudgmental way. Couldn’t such explanatory use be beneficial? Couldn’t it be at least a secondary function of the attribution of reasons?

  We have not denied that reasoning can also be pursued individually, not to produce arguments aimed at convincing others or to evaluate others’ arguments, but as a means of inquiry in its own right. Couldn’t such individual inquisitive reasoning be beneficial? Couldn’t producing such benefits be, if not the main function, at least a secondary function of reasoning?

  Well, possibly, but to better understand what it would take to establish such claims, consider sea turtles.

  Although sea turtles are descendants of land reptiles, they spend all, or almost all, their time in the water. Females, however, lay their eggs out of the water, in nests they dig on the beach, where all sea turtles are therefore born (and then try to get into the water as fast as possible).

  The limbs of sea turtles have become exquisitely adapted to life in the sea: their forelimbs have evolved into flippers and their hind limbs into paddles or rudders. What could be better for swimming, which is the ordinary mode of locomotion of sea turtles? On repeated occasions, however, females must use these same limbs to crawl out of the water and to dig nests in the sand. The use of limbs in such a manner, even if rare and clumsy, plays a direct role in reproduction and is clearly adaptive. To argue conclusively, however, that digging nests is a secondary function of sea turtle limbs, one would have to find features of these limbs that are best explained by this use. Otherwise, this might be best described as a beneficial side effect.22

  Female turtles’ nest digging is obviously beneficial, even if the limbs they use are not specifically adapted to the task and if their performance is notably clumsy. Similarly, if reason has beneficial effects different from its two main functions, these might correspond to secondary functions. If so, we would expect to find features of reason tailored to the achievement of these benefits. Otherwise, we might suspect that these are mere side effects of reason.

  It is plausible that the capacity to explain people’s ideas and actions by attributing to them reasons is on the whole beneficial (even if, as we saw, it is at best a distortion of the psychological processes involved). It is less obvious but not inconceivable that genuine individual inquisitive reasoning (as opposed to mentally simulated argumentation) does more good than harm in guiding beliefs and decisions. Still, even assuming that these two uses of reason are each, on average, beneficial, it does not follow that they are, properly speaking, functions of reason. To argue that they are, one would have to find some specific features of the attribution of reasons on the one hand and of reasoning on the other hand that are geared to the production of these particular benefits. We are not aware of any such evidence. In the case of reasoning, what is more, it is not just that features specifically tailored to the fulfillment of its alleged inquisitive function are lacking; it is that several well-evidenced features of reasoning would actually undermine this function.

  We should keep an open mind regarding possible secondary functions of reason, of course, but at present, the challenge is to establish what beneficial effects would explain why reason evolved at all—in other terms, it is to identify reason’s primary function or functions. In trying to answer this challenge, we denied that anything like classical Reason, with the capacity to procure better knowledge and decisions in all domains, has ever evolved. What has evolved rather is a more modest reason module—one intuitive inference module among many—specialized in producing intuitions about reasons in the service of two functions, justificatory and argumentative.

  In Parts IV and V of this book, we will demonstrate, with evidence rich and varied, that reason is precisely adapted for fulfilling these two functions. We will show, moreover, how this new interactionist approach illuminates the role reason plays in human affairs.

  IV

  * * *

  WHAT REASON CAN AND CANNOT DO

  In Chapters 7 through 10, we have developed a novel interactionist approach to reason. According to this approach, the function of reason is to produce and evaluate justifications and arguments in dialogue with others. For the standard intellectualist approach, the function of reason is to reach better beliefs and make better decisions on one’s own. As we’ll see, the two approaches yield sharply contrasting predictions that we are now in a position to test. Is reason objective or biased? Is it demanding or lazy? Does it help the lone reasoner? Does it yield better or worse results in interactive contexts? Looking at the evidence with these questions in mind throws new light on the promises and dangers of reason.

  11

  Why Is Reasoning Biased?

  There is much misunderstanding about the way to test adaptive hypotheses. It might seem that in order to understand what reason is for, why it evolved, we must be able to find out how our ancestors reasoned, using archeological or genetic data. Fortunately, million-year-old skulls and gene sequencing are not indispensable to test hypotheses about the function of an evolved mechanism. Think about it: we can tell what human eyes are for without knowing anything about when and how they evolved. What matters is the existence of a match between the function of an organ, or a cognitive mechanism, and its structure and effects. Do the features of the eye serve its function well? By and large, yes. Do eyes achieve their function well? By and large, yes.

  We can use the same logic to guide our examination of the data on human reason. Do the features of human reason best serve the functions posited by the intellectualist approach or those posited by the interactionist approach? Which functions does reason achieve best? In many cases reason couldn’t serve both functions well at the same time, so there will be plenty of evidence to help us decide between the two approaches. We’ll start our tour of what reason can and cannot do with a historical case. How does a certified scientific genius reason? Does reason help him discard misguided beliefs and reach sounder conclusions?

  When Linus Pauling received the American Chemical Society’s Award in Pure Chemistry—he was only thirty years old—a senior colleague predicted that he would win a Nobel Prize.1 Actually, Pauling won two Nobel Prizes—Chemistry and Peace—joining Marie Curie in this exclusive club. As a serious contender in the race to discover the structure of DNA, he narrowly missed a third Nobel Prize. Indeed, James Watson long feared being beaten by Pauling’s “prodigious mind.”2 A former student who would also become a Nobel laureate described him as a “god-like, superhuman, great figure.”3 Undoubtedly, Linus Pauling had a great mind, and he was steeped in the most rigorous scientific tradition.

  When he was not busy winning Nobel Prizes, Pauling sometimes pondered the powers of vitamin C. At first, he advocated heavy daily doses as a prophylactic against colds and other minor ailments, stopping short of recommending vitamin C for the treatment of serious diseases. This changed when he met Ewan Cameron, a surgeon who had conducted a small study demonstrating the positive effects of vitamin C on cancer patients—or so it seemed. Pauling and Cameron joined forces to defend the potential of vitamin C as a treatment for cancer. Thanks to Pauling’s clout and to his continuous efforts, the prestigious Mayo Clinic agreed to conduct a large-scale, tightly controlled trial.4 Unfortunately for Pauling and Cameron—and for the cancer patients—the results were negative. Vitamin C had no effect whatsoever.5

  At this point, Pauling could have objectively reviewed the available evidence: on the one hand, a fringe theory and a small, poorly controlled study; on the other hand, the medical consensus and a large, well-controlled clinical trial. On the basis of this evidence, the vast majority of researchers and doctors concluded that vitamin C had no proven effects on cancer. But Pauling did not reason objectively. He built a partisan case. The first Mayo Clinic study was dismissed because its participants had, according to Pauling, not been selected properly. When a second study was performed that addressed this issue,6 Pauling made up another problem: the new patients had not received vitamin C for an “indefinite time.”7 Pauling’s requirements did not match any standard cancer research; they fit only the small trial Cameron had performed.

  The most egregious example of biased reasoning on Pauling’s part is found in an article, published a few years later, in which he advanced three criteria to evaluate clinical trials of cancer treatments.8 While “most of the reported results of clinical trials of cohorts of cancer patients satisfy these criteria of validity,”9 there was one black sheep, “a reported clinical trial that fails on each of the three criteria for validity.”10 Can you guess what this outlier was? A study “described as a randomized double-blind comparison of vitamin C (10 g per day) and a lactose placebo” (emphasis added).11 The study singled out as the only flawed cancer study in a sample of several hundred is the Mayo Clinic study that embarrassed Pauling in front of the whole scientific community.

  There is no reason to accuse Pauling of conscious intellectual dishonesty. He took high doses of vitamin C daily, and his wife used the same regimen to fight—unsuccessfully—her stomach cancer. Still, for most observers, Pauling’s evaluation of the therapeutic efficacy of vitamin C is a display of selective picking of evidence and partisan arguments. Even when he was diagnosed with cancer despite having taken high doses of vitamin C for many years, he did not admit defeat, claiming that the disease would have struck earlier without it.12

  Pauling may have erred further than most respected scientists in his unorthodox beliefs, but his way of reasoning is hardly exceptional—as anyone who knows scientists can testify, they are not paragons of objectivity (more on this in Chapter 18). Undoubtedly, even the greatest minds can reason in the most biased way.

  Is Bias Always Bad?

  How should cognitive mechanisms in general go about producing sound beliefs? Part of the answer, it seems, is that they should be free of bias. Biases have a bad press, in part because a common definition is “inclination or prejudice for or against one person or group, especially in a way considered to be unfair.”13 However, psychologists often use the term in a different manner, to mean a systematic tendency to commit some specific kind of mistake. These mistakes need not have any moral, social, or political overtones.

  A first kind of bias simply stems from the processing costs of cognition. Costs can be reduced by using heuristics, that is, cognitive shortcuts that are generally reliable but that in some cases lead to error. A good example of this is the “availability heuristic” studied by Tversky and Kahneman.14 It consists in using the ease with which an event comes to mind to guess its actual frequency. For instance, in one experiment participants were asked whether the letter R occurs more frequently in first or third position in English words. Most people answered that R occurs more often in first position, when actually it occurs more often in third position. The heuristic the participants used was to try to recall words beginning with R (like “river”) and words with R in the third place (like “bored”) and assume that the ease with which the two kinds of words came to mind reflected their actual frequency. In the particular case of the letter R (and of seven other consonants of the English alphabet), the availability heuristic happens to be misleading. In the case of the other thirteen consonants of the English alphabet, the same heuristic would give the right answer.
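  (A minimal sketch, in Python, of the painstaking alternative the heuristic spares us: an exhaustive count of words with R in first versus third position. The sketch is an illustration added here, not part of Tversky and Kahneman’s experiment; the word-list path is an assumption, and counts over a raw dictionary only approximate the frequency of words in actual English usage.)

def r_position_counts(words, letter="r"):
    # Count words with `letter` in first versus third position.
    first = sum(1 for w in words if w[:1] == letter)
    third = sum(1 for w in words if len(w) >= 3 and w[2] == letter)
    return first, third

# Assumed word list, one word per line (the path is an assumption).
with open("/usr/share/dict/words") as f:
    words = [w.strip().lower() for w in f]

first, third = r_position_counts(words)
# The claim in the text concerns word frequency in actual English usage,
# which a raw dictionary of word types only approximates.
print("R first:", first, "R third:", third)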

  Although the availability heuristic can be described as biased—it does lead to systematic errors—its usefulness is clear when one considers the alternative: trying to count all the words one knows that have R as the first or third letter. While the heuristic can be made to look bad, we would be much more worried about a participant who would engage in this painstaking process just to answer a psychology quiz. Moreover, the psychologist Gerd Gigerenzer and his colleagues have shown that in many cases using such heuristics not only is less effortful but also gives better results than using more complex strategies.15 Heuristics, Gigerenzer argues, are not “quick-and-dirty”; they are “fast-and-frugal” ways of thinking that are remarkably reliable.

  The second type of bias arises because not all errors are created equal.16 More effort should be made to avoid severely detrimental errors—and less effort to avoid relatively innocuous mistakes.

  Here is a simple example illustrating how an imbalance in the cost of mistakes can give rise to adaptive biases. Bumblebees have cognitive mechanisms aimed at avoiding predators. Among their predators are crab spiders, small arachnids that catch the bees when they forage for nectar. Some crab spiders camouflage themselves by adopting the color of the flowers they rest on: they are cryptic. To learn more about the way bumblebees avoid cryptic predators, Thomas Ings and Lars Chittka created little robot spiders.17 All the robots rested on yellow flowers, but some of them were white (noncryptic) while others were yellow (cryptic). To simulate the predation risk, Ings and Chittka built little pincers that held the bees captive for two seconds when they landed on a flower with a “spider.”

  In the first phase of the experiments, two groups of bumblebees, one facing cryptic spiders and the other facing noncryptic spiders, had multiple opportunities to visit the flowers and to learn which kind of predators they were dealing with. Surprisingly, both groups of bumblebees very quickly learned to avoid the flowers with the spiders—even when the spiders were cryptic. Yet the camouflage wasn’t ineffective: to achieve the same ability to detect spiders, the bumblebees facing camouflaged spiders spent nearly twice as long inspecting each flower. This illustrates the cost of forming accurate representations of one’s environment: the time spent inspecting could not be spent harvesting.

  But there is also an asymmetry in the costs of mistakenly landing on a flower with a spider (high cost) versus needlessly avoiding a spider-free flower (low cost). This second asymmetry also affected the bumblebees’ behavior. On the second day of the experiment, the bees had learned about the predation risks in their environment. Instead of spending forever (well, 0.85 seconds) inspecting every flower to make sure it carried no spider, the bumblebees facing the cryptic spiders settled for a higher rate of “false alarms”: they were more likely than the other bees to avoid flowers on which, in fact, there were no spiders.

  This experiment illustrates the exquisite ways in which even relatively simple cognitive systems adjust the time and energy they spend on a cognitive task (ascertaining the presence of a spider on a flower) to the difficulty of the task on the one hand, and to the relative cost of a false negative (assuming there is no spider when there is) and of a false positive (assuming there is a spider when there is not) on the other hand. This difference in cost results in a bias: making more false positive than false negative errors. This bias, however, is beneficial.
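  (A back-of-the-envelope sketch, with made-up numbers rather than Ings and Chittka’s data, of why unequal error costs make such a bias pay off: when being caught is far costlier than skipping a safe flower, the policy that tolerates more false alarms has the lower expected cost.)

def expected_cost(p_spider, p_miss, p_false_alarm,
                  cost_caught=100.0, cost_skipped=1.0):
    # Expected cost of one flower visit: a miss risks capture by the spider,
    # a false alarm merely forfeits the nectar of a safe flower.
    # All probabilities and costs here are hypothetical.
    return (p_spider * p_miss * cost_caught
            + (1 - p_spider) * p_false_alarm * cost_skipped)

# A cautious policy (few misses, many false alarms) versus a lenient one.
cautious = expected_cost(p_spider=0.25, p_miss=0.05, p_false_alarm=0.30)
lenient = expected_cost(p_spider=0.25, p_miss=0.30, p_false_alarm=0.05)
print(cautious, lenient)  # 1.475 versus 7.5375: the "biased" cautious policy wins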

  From Surprise to Falsification

  While the example of the bumblebees illustrates that bias may be beneficial, it speaks of very specific costs: the costs of predation versus the costs of missing a feeding opportunity. Are there cognitive biases that would be useful more generally?

  A main goal of cognitive mechanisms is to maintain an accurate representation of the organism’s environment, or at least of relevant aspects of it. It could be argued that only the future, more particularly the immediate future, of the environment is directly relevant: it determines both what may happen to the organism and what the organism can do to modify the environment in its favor. The past and present of the environment may be relevant too, but only indirectly, by providing the only evidence available regarding the environment’s future. Another way of making the same point is to say that a main goal of cognition is to provide the organism with accurate expectations regarding what may happen next. Cognitive mechanisms should pay extra attention to any information that goes against their current expectations and use this information to revise them appropriately. Information that violates expectations causes, at least in humans, a typical reaction: surprise. The experience of surprise corresponds to the sudden mobilization of cognitive resources to readjust expectations that have just been challenged.

  Indeed, paying due attention to the unexpected is so ingrained that we are surprised by the lack of surprise in others. If your friend Joe, upon encountering two cats that are similar except that one meows and the other talks, fails to pay more attention to the talking cat, you’ll suspect there’s something wrong with him. Even one-year-old babies expect others to share their surprise. When they see something surprising, they point toward it to share their surprise with nearby adults. And they keep pointing until they obtain the proper reaction or are discouraged by the adults’ lack of reactivity.18

 
