
The Enigma of Reason: A New Theory of Human Understanding


by Hugo Mercier and Dan Sperber


  In Defense of Juries

  So far we have seen that participants provide better answers to a variety of tasks when they are allowed to discuss them in small groups. We have seen that groups of experts can also take advantage of argumentation to improve their forecasts. But this chapter opened in a jury room—albeit a fictitious one—not a well-controlled experiment or an online chat among experts. The jury room is a typical face-to-face situation: tempers may flare, the search for consensus may become paramount, and errors may never be dispelled. Jurors are the archetypal nonexperts, with only a tenuous understanding of the law. In 12 Angry Men, the discussion is filled with prejudice, and many other biases cloud jurors’ decisions.

  All too often, jurors form the wrong intuition regarding the verdict after hearing the evidence. With its myside bias, solitary reasoning is unlikely to help correct this initial intuition. Deliberation could turn into group polarization, amplifying rather than correcting shared prejudices. In light of these limitations, 12 Angry Men seems optimistic indeed, an edifying and wishful ode to the power of argumentation. Perhaps we should follow the advice of Cass Sunstein and his colleagues: take some decisions out of juries’ hands and give them to “specialists in the subject matter.”36

  Before damning juries, we should consider the alternative: judges and experts. Sir Edward Coke was undoubtedly such an expert. This English jurist of the late sixteenth and early seventeenth centuries was “possibly the most learned common lawyer of all time.”37 He was able to base his opinions “on innumerable medieval texts, most of them manuscript rolls, which he had perused with indefatigable zeal.” All this perusing, however, was not antiquarian fancy. Coke “was clearly hoping to find precedents that would suit his legal and political convictions,” and he “sometimes misinterpreted precedents to support his case.” It may have been Coke that Sir William Blackstone had in mind when he warned, in his hugely influential Commentaries on the Laws of England of 1766, that a judge’s knowledge and intelligence are no guarantee of fair opinion: “in settling and adjusting a question of fact, when entrusted to any single magistrate, partiality and injustice have an ample field to range in; either by boldly asserting that to be proved which is not so, or more artfully by suppressing some circumstances, stretching and warping others, and distinguishing away the remainder.”38

  Readers should not be surprised that judges, however competent, have a myside bias, using their erudition to defend preconceived opinions rather than arrive at an impartial verdict. Jurors are obviously not exempt from the myside bias, but, as Blackstone realized, deliberation has the potential to compensate for each juror’s biases. Having berated judges, he continued: “Here therefore a competent number of sensible and upright jurymen … will be found the best investigators of truth, and the surest guardians of public justice.” Two centuries after Blackstone, Justice Harry Blackmun of the U.S. Supreme Court would defend juries in similar terms: “the counterbalancing of various biases is critical to the accurate application of the common sense of the community to the facts of any given case.”39

  Blackstone may have been right in his pessimistic assessment of judges,40 but was he also right that jury deliberation could balance out jurors’ biases? This is not an easy question to answer. We do not have access to the arguments exchanged by jurors. At best, postdeliberation interviews can show that a view represented by a minority at the outset sometimes becomes the final verdict—we can tell at least that deliberation can change jurors’ minds.41 To know more about the effects of deliberation, we must rely on studies of mock juries.

  In the early 1980s, Reid Hastie, Steven Penrod, and Nancy Pennington conducted a very important study of mock juries.42 In order to make their experiment as realistic as possible, they recruited people who had been called for jury duty and showed them a three-hour video reenactment of a real trial before sending them in to deliberate. In this trial, the defendant stood accused of having stabbed a man to death during a fistfight that escalated. While the killing was well established, the verdict could plausibly range from not guilty—if the defendant were found to have acted in self-defense—to first-degree murder—if the defendant were found to have premeditated his crime. While it is impossible to know what the correct answer is for sure, according to the opinion of many legal experts, the appropriate verdict was second-degree murder; it was also the verdict delivered in the real trial that inspired the study.

  Right after the jurors in the experiment had seen the three-hour video, they had to say which verdict they favored. The most common answer was manslaughter. Only a quarter of the jurors favored second-degree murder. In other words, most jurors initially got it wrong. Most juries, however, reached the best verdict. Even though the verdict of second-degree murder must have been defended by only a few jurors in some groups, these jurors often managed to convince the whole jury.43 Deliberation had allowed many juries to reach a better verdict. Just as Blackstone expected, deliberation allowed jurors to counterbalance their respective biases.

  Indeed, this is exactly what Phoebe Ellsworth, a scholar of law and psychology, observed while replicating the experiment of Hastie and his colleagues:

  Individual jurors tended to focus on testimony that favored their initial verdict preference: Testimony about the previous confrontation between the two men was generally raised by jurors who favored a murder verdict, whereas testimony that the victim punched the defendant immediately before the killing was generally raised by jurors who favored manslaughter or self-defense. This tendency is not a weakness, but rather a benefit of the deliberation process—the opportunity it affords for comparing several different interpretations of the events along with the supporting factual evidence.44

  12 Angry Men is not that fanciful. As in the movie, individual jurors in real trials may be biased, they can make mistakes, and they certainly defend dubious interpretations. Yet deliberation makes jurors review the evidence more thoroughly and more objectively, compensating for individual biases and allowing juries to reach better verdicts. Ellsworth concluded her review of the movie with these optimistic words: “12 Angry Men is an Ideal, but it is an achievable Ideal.”45

  Argumentation Works

  The interactionist approach to reason predicts that people should be good at evaluating others’ reasons, rejecting weak ones and changing their mind when the reasons are good enough. Although this might seem like a trivial prediction, it runs against a general pessimism regarding the power of argumentation. For instance, people asked to estimate how easily participants would solve a logical task either on their own or in small groups don’t think that groups would do much better than individuals.46 Even psychologists of reasoning—who should know better—underestimate how well groups perform.

  The results reviewed here belie this pessimistic but common view of argumentation. Again and again, we see people changing their minds when confronted with good arguments. Whether they solve logical tasks or look for new solutions to open-ended problems, whether they are experts or laypeople, whether they reflect on geopolitics or ponder what verdict to deliver, people reach better conclusions after debating the issue with their peers.

  In Chapters 16 through 18, we will discover still more settings in which argumentation allows good ideas to spread and groups to outperform individuals, showing that Thomas Jefferson was not unduly optimistic when he wrote that

  Truth is great and will prevail if left to herself, that she is the proper and sufficient antagonist to error, and has nothing to fear from the conflict, unless by human interposition disarmed of her natural weapons, free argument and debate, errors ceasing to be dangerous when it is permitted freely to contradict them.47

  V

  * * *

  REASON IN THE WILD

  Chapters 11 through 15 have painted a picture of reason that unambiguously supports the novel interactionist approach over the standard intellectualist approach. But haven’t we focused on situations in which we were most likely to observe results that fit our pet theory? Much of the evidence we used came from laboratory experiments using psychology students in American universities as subjects. Isn’t it a bit risky to draw conclusions about how human reason works and why it evolved from this tiny and arguably rather unrepresentative sample of humanity? In Chapters 16 through 18, we expand the range of our inquiry and look for evidence that the fundamentals of reason can be found in a wide variety of contexts: remote Mayan communities in Guatemala, kindergarten playgrounds, citizens’ forums, laboratory meetings, and more.

  16

  Is Human Reason Universal?

  With huge temperature swings—from minus 40° to plus 40° Celsius—and scarcely any rain, Uzbekistan is not a very fertile land. Its unforgiving climate and its remoteness conspired to maintain a feudal system until the early twentieth century.1 Modernization only started with integration into the Soviet Union in the 1920s, as Moscow decided to open hundreds of schools throughout the country.2 This offered Alexander Luria, of the Moscow Institute of Experimental Psychology, the perfect opportunity to test his ideas. Following his master Lev Vygotsky, Luria thought that humans acquire most of their cognitive skills through learning, including school learning. Uzbekistan provided him with people who had just begun the schooling process but were otherwise identical to the illiterate peasants from nearby villages. By comparing these two populations, Luria could pinpoint precisely the effect that even a modicum of schooling had on cognition.

  One of the objectives of the “psychological expedition to Central Asia”3 Luria launched in 1931 was to investigate logical reasoning. The Russian psychologist had no doubt that illiterate peasants were capable of reasoning with familiar materials. Indeed, they could probably win an argument with any outsider about cotton growing. What he was looking for was different, an ability to draw the conclusion of an argument for its own sake, irrespective of whether its premises are true or false—a skill he thought lay beyond the abilities of unschooled populations. Luria used problems that were logically trivial but whose content was unfamiliar to the participants, so they would have to evaluate the logic of the argument itself:

  In the Far North, where there is snow, all bears are white. Novaya Zemlya is in the Far North. What color are bears there?4

  After one or two years of formal education, the Uzbeks found this problem trivial. But when unschooled peasants were interviewed, the vast majority seemed at a loss, providing answers such as, “There are different sorts of bears” or “I don’t know; I’ve seen a black bear—I’ve never seen any others.”5

  How WEIRD Is Argumentation?

  Evolution thrives on diversity. It is only because individuals of the same species vary in their heritable features that species evolve. Natural selection, however, swamps the diversity it feeds on. When a heritable trait allows its bearer to out-reproduce its conspecifics, it spreads through the population until, some generations later, everyone carries it. There are exceptions. Some traits are sex specific, and others are more advantageous when they are not universally shared within a species, but we do not see why reason would be one of these exceptions. If we are right that reason is an adaptation that helps solve problems of coordination, reputation management, and communication encountered by all, it should be shared by normally developing humans and not just by a minority of them, or just by men or women.

  In every individual, reason needs some input to develop normally: conversation, arguments. Different cultures and milieus may provide different input in this respect, both in terms of quantity—argumentation is strongly encouraged in some cultures, somewhat inhibited in others—and in terms of quality—various forms of argumentation may be favored or disfavored. All human societies, however, rely on a richness of communication not found in other species, and this reliance provided the selection pressures for the emergence of reason. Hence, if our approach is right, reason could not be the cultural product of institutions that spread only in the last few centuries, such as schooling.

  The very idea that reason is a historically situated cultural invention has been a commonplace in the social sciences. Before Luria’s expedition, the French theoretical anthropologist Lucien Lévy-Bruhl had painted a picture of a “primitive mentality” “uncultivated in following a chain of reasoning which is in the slightest degree abstract.”6 He and others had argued that people in other cultures may reason, but on the basis of an altogether different logic. Both views—that reason is a relatively recent historical development and that it takes radically different forms across cultures—are incompatible with the evolutionary approach we defend.

  Historical, anthropological, and linguistic evidence points to a potentially damning flaw in our argument so far: the focus on examples and experiments from Western cultures. As a group of cross-cultural psychologists and anthropologists recently put it, these are WEIRD people—people coming from Western, Educated, Industrialized, Rich, Democratic countries. The acronym is well deserved, for this sample often sits at the extreme range of the variability observed in human populations. For instance, American undergraduates—by far the largest pool of participants in psychology experiments—are more individualistic7 than their noncollege peers, who are more individualistic than Americans from the previous generation, who were already more individualistic than just about any other people on earth.8

  The importance given to argumentation among WEIRD people could be another freak trait inspired by the ancient Greeks’ reliance on argumentation in science, politics, and legal institutions. In most Western cultures, the existence of disagreements is seen as a normal aspect of human interaction, one that should be organized rather than suppressed and that can have positive effects. Universities in particular are supposed to encourage the practice of debate. Couldn’t it be, then, that seeing argumentation as beneficial is a cultural bias, and that reason—at least reason as we have described it with its justificatory and argumentative functions—is a culturally acquired skill rather than an evolved, universal trait? Could, for instance, people in other cultures be excellent solitary reasoners but terrible arguers? Or not be reasoners at all?

  How to Avoid Looking Like a Fool

  The conclusion “the bears in Novaya Zemlya are white” seems so inescapable that it strains credulity to believe some people incapable of drawing it. Still, more recent research validates Luria’s results. His experiments were successfully replicated with several unschooled populations,9 and other experiments remind us that even the most taken-for-granted skills may need to be culturally acquired.

  Imagine someone putting three coins, one by one, into an opaque container. Your task is to retrieve the coins. Can’t everybody do this? As a matter of fact, no. Only people who have learned to count can. Had you been born a Pirahã, a member of a small Amazonian tribe with no words for numbers, you would be at a loss to retrieve the coins. When Peter Gordon performed this simple experiment with Pirahã participants, only two-thirds stopped exactly at three coins—and the performance quickly deteriorated as the number of coins increased.10 The Pirahã were not simply wary of this foreigner asking them to play weird games. They did very well on other tasks. They genuinely lacked the ability to count to three.

  Is it the case, then, that some people are really unable to produce and evaluate arguments simply because they have unfamiliar premises? No, in the case of reasoning, the problem, it turns out, is merely one of motivation and, more specifically, of social propriety. For one thing, in all the populations tested, some people—a third of the participants, perhaps—easily provided the right answer. They hadn’t developed on their own a new cognitive ability. Some people were just more willing to play the experimenter’s game.

  Why would anyone be reluctant to answer such a simple question as “What color are bears there?” In small-scale populations, people are very cautious with their assertions, only stating a position when they have a good reason to (unlike, say, pundits).11 Clearly, these conditions are not met when a stranger tells a weird story about some absurdly colored bears in a far-off place.
Only a fool would dare make such a statement, a statement she could not appropriately defend. As we saw in Chapter 14, people try to avoid doing things they cannot justify. This is exactly what happened in this exchange, captured as the experimenter asked the white bear question to an unschooled adult Uzbek: “If a man was sixty or eighty and had seen a white bear and had told about it, he could be believed, but I’ve never seen one and hence I can’t say. That’s my last word. Those who saw can tell, and those who didn’t see can’t say anything!” At this point, a young Uzbek volunteered: “From your words it means that bears there are white.” But the older man concluded, “What the cock knows how to do, he does. What I know, I say, and nothing beyond that!”12 They both knew what the experimenter wanted, but only the young man was willing to wager his credit in this weird game.

  In order to make people more comfortable engaging with unfamiliar premises, psychologist Paul Harris and his collaborators gave unschooled participants a richer context. Instead of happening in a far-off but real place, the problems—otherwise similar to those used by Luria—were set on a distant planet. In these conditions, people had fewer scruples about engaging in playful suppositions, and they gave the logical answers more easily.13 If we learn in school to play reasoning games, drawing weird conclusions from arbitrary premises,14 basic reasoning skills require no such formal education. All normally developing humans can produce and evaluate arguments. But do they argue, or do they keep these skills for private use?

 
