The Enigma of Reason: A New Theory of Human Understanding


by Hugo Mercier and Dan Sperber


  On the one hand, even the most sincere people cannot be trusted to always be truthful. On the other hand, people who have no qualms lying still end up telling the truth rather than lying on most occasions, not out of honesty but because it is in their interest to do so. Hence, standard ways of controlling or excluding cheaters might not work so well in the special case of liars. An ad hoc version of a tit-for-tat strategy15—you lie to me, I lie to you; or you lie to me, I stop listening to you—might harm us as much as or more than it would punish liars. An ad hoc version of a “partner choice” strategy16—you lie to me, I cease listening to you and listen to others—would, if applied systematically, deprive us of much useful information.

  More generally, to benefit as much as possible from communication while minimizing the risk of being deceived requires filtering communicated information in sophisticated ways. Systematically disbelieving people who have been deceitful about some topic, for instance, ignores the fact that they may nevertheless be uniquely well informed and often reliable about other topics. Automatically believing people who have been reliable in the past ignores the fact that, on some new topic where it is in their interest to deceive us, they might well do so. Well-adjusted trust in communicated information must take into account in a fine-grained way not just the past record of the communicator but also the circumstances and contents of communication.

  Epistemic Vigilance

  Communication is, on the one hand, so advantageous to human beings and, on the other hand, makes them so vulnerable to misinformation that there must have been, we have suggested in earlier work,17 strong and ongoing pressure for developing a suite of mechanisms for “epistemic vigilance” geared at constantly adjusting trust. There is no failsafe formula to calibrate trust exactly, but there are many relevant factors that may be exploited to try to do so. Reaping the benefits of communication without becoming its victim is so important that any mechanism capable of making a cost-effective contribution to the way we allocate trust is likely to have been harnessed to the task.

  The mechanisms involved in epistemic vigilance are of two kinds. Some focus on the source of information and help answer the question: Whom to believe? Other mechanisms focus on content and help answer the question: What to believe?

  Whom should we believe, when, and on what topic and issue? We accept different advice from a doctor or a plumber. We believe more easily witnesses who have no personal interests at stake. We are on alert when people hesitate or, on the contrary, insist too much. We take into account people’s reputation for honesty and competence. And so on. Much recent experimental work shows, moreover, that children develop, from an early age, the ability to take into account evidence of the competence and benevolence of communicators in deciding whom to trust.18

  Trust in the source is not everything. Not all contents are equally believable. No one could convince you that 2 + 2 = 10, that the moon is a piece of cheese, or that you are not yet born. If you were told such things by the person you trust most in the world, rather than taking them literally and at face value, you would wait for an explanation (for example, that 2 + 2 = 10 in base four, that the moon looks like a piece of cheese, and that you are not yet “born again” in a religious sense). If, on the other hand, the person you trust least in the world told you that a square has more sides than a triangle, that Paris is the capital of France, or that to reach your current age you must have eaten a lot of food, you would agree.

  Whatever their source, absurdities are unlikely to be literally believed and truisms unlikely to be disbelieved. Most information, however, falls between these two extremes; it is neither obviously true nor obviously false. What makes most claims more or less believable is how well they fit with what we already believe. If we realize that a claim is incoherent with what we already believe, we are likely to reject it. Still, rejecting a claim that challenges our beliefs may be missing an opportunity to appropriately revise beliefs of ours that may have been wrong from the start or that now need updating.

  Vigilance toward the source and vigilance toward the content may point in different directions. If we trust a person who makes a claim that contradicts our beliefs, then some belief revision is unavoidable: If we accept the claim, we must revise the beliefs it contradicts. If we reject the claim, we must revise our belief that the source is trustworthy.

  While it is far from failsafe, epistemic vigilance allows us, as audience, to sort communicated information according to its reliability and, on average, to benefit from it. For communicators, on the other hand, the vigilance of their audience diminishes the benefits they might expect from deceiving others—it lowers the chances of success—and, if they are caught, increases the costs they might have to pay in the form of lost credibility and reputation.

  Shouldn’t we be vigilant not just toward information communicated by others but also toward the output of our own belief-formation mechanisms? When it comes to beliefs that result from our own cognitive processes, we are the source. Our cognitive mechanisms have evolved to serve us. Unlike other people, these mechanisms do not really have interests of their own. Moreover, there already are, we saw, metacognitive devices that modulate the confidence we have in our own memories, perceptions, and inferences. So, there is little reason to assume that it would be advantageous to apply further vigilance toward the source when we ourselves are the source.

  So be it for vigilance toward the source, but what about vigilance toward the content? Wouldn’t it be useful to check the coherence of new perceptions and inferences with old beliefs so as better to decide what new beliefs to accept or reject and, possibly, what old beliefs to revise? Actually, coherence checking is not a simple and cheap procedure, nor is it error-proof. Moreover, if it were worth doing, shouldn’t its output itself be double- and triple-checked, and so on? If you are willing to distrust yourself, shouldn’t you distrust your own distrust? We know of no evidence showing that humans exercise vigilance toward the content of their own perceptions and inferences (except in special cases where they have positive grounds to think that their senses or intuitions aren’t functioning well). We know of no good argument, either, to the effect that doing so would be truly beneficial. More generally, people seem to be unconcerned by their own incoherencies unless something—generally someone—comes up to make them salient.

  The Argumentative Function of Reason

  Precautions have a price. Even when they are on the whole beneficial, they result in missed opportunities. You were wise, for instance, not to pick these mushrooms in the woods since you were not sure they were edible. You were wise even though it so happens that they were delicious chanterelles. Epistemic vigilance is a form of precaution, and it has a price in the same way. A valuable message may be rejected because we do not sufficiently trust the messenger, a clear case of missed opportunity.

  Epistemic vigilance, however useful or even necessary it may be, creates a bottleneck in the flow of information. Still, to receivers of information, the benefits of well-calibrated epistemic vigilance are greater than the costs. To communicators, on the other hand, the vigilance of their audience seems just costly. This vigilance stands in the way not only of dishonest communicators but also of honest ones. An honest communicator may be eager to communicate true and relevant information, but she may not have sufficient authority in the eyes of her interlocutor for him to accept it, in which case they both lose—she in influence, he in relevant knowledge.

  The more potentially relevant the message you receive from a given source, the more vigilant you should be, but the more costly this vigilance may turn out to be. Here is a dramatic historical illustration. Richard Sorge was a Soviet spy working undercover in Japan during World War II and providing the Russians with much valued intelligence. When, however, he informed them of the imminent invasion of Russia by Nazi Germany in June 1941, Stalin just dismissed his report: “There’s this bastard who … deigned to report the date of the German attack as 22 June. Are you suggesting I should believe him?”19 With hindsight, what a mistake! But precisely because the information was both so important if true and so damaging if false but taken to be true, it made sense to be particularly vigilant and indeed skeptical. There are comparable situations in our personal lives where, for instance, we remain skeptical of a warning or a promise precisely because it is too relevant to be accepted on trust.

  This is where reasoning has a role to play. The argumentative use of reasons helps genuine information cross the bottleneck that epistemic vigilance creates in the social flow of information. It is beneficial to addressees by allowing them to better evaluate possibly valuable information that they would not accept on trust. It is beneficial to communicators by allowing them to convince a cautious audience.

  To understand how argumentation helps overcome the limits of trust, consider what happens when communicators make claims without ever arguing for them. People in a position of power, for instance, typically expect what they say to be accepted just because they say it. They are often wrong in this expectation: they may have the power to force compliance but not to force conviction. Statements of opinion unsupported by any argument are also common in some cultural milieus where arguing, and generally being “too clever,” is discouraged. Intelligence, in all the senses of the term, doesn’t flourish in such conditions. In both situations, the flow of information is hampered. What argumentation can do, when it is allowed, is ease this flow.

  As communicators, we are addressing people who, if they don’t just believe us on trust, check the degree to which what we tell them coheres with what they already believe on the issue. Since we all are at times addressees, we are in a position to understand how, when we communicate, our audience evaluates what we tell them. We benefit from this understanding and adjust our messages accordingly. Unless we are trying to deceive our audience, we needn’t see their vigilance as just an obstacle to our communicative goals. On the contrary, we may be in a position to use their vigilance in a way that will be beneficial both to them and to us.

  To begin with, even if our audience is reluctant to accept our main point, there may be relevant background information that they will accept from us on trust. By providing such information, we may extend the common ground on the basis of which our less evident claims will be assessed. For instance,

  (The doorbell is ringing)

  Enrico: It must be Alicia or Sylvain.

  Michelle: Actually, Sylvain is out of town. I am sure it is Alicia.

  Enrico: Right!

  Here Michelle produces a piece of information that Enrico believes on trust: Michelle wouldn’t say that Sylvain is out of town if she didn’t know it for a fact. This piece of information, however, is a reason for Enrico to revise his conjecture about who might be ringing the doorbell.

  A good way to convince addressees is to actively help them check the coherence of your claims with what they already believe (including what they have just unproblematically accepted from you) or, even better if possible, to help them realize that given their beliefs, it would be less coherent for them to reject your claims than to accept them. In other words, as a communicator addressing a vigilant audience, your chances of being believed may be increased by making an honest display of the very coherence your audience will anyhow be checking. A good argument consists precisely in displaying coherence relationships that the audience can evaluate on their own.

  As an addressee, when you are provided not just with a claim but also with an argument in its favor, you may (intuitively or reflectively) evaluate the argument, and if you judge it to be good, you may end up accepting both the argument and the claim. Your interlocutor’s arguments may be advantageous to you in two ways: by displaying the very coherence that you would otherwise have to assess on your own, they make the claim easier to evaluate, and if this assessment results in your accepting relevant information, they make communication more beneficial.

  How does a communicator display coherence? She searches among the very beliefs her addressee already holds (or will accept from her on trust) for reasons that support her claim. In simple cases, this may involve a single argumentative step, as in this dialogue between Ben and Krisha, two neighbors of Tang, Julia, and Mary:

  Krisha: Tang just called. He is inviting us to come over to their place for dinner with him, Julia, and Mary.

  Ben: Mary is there? Then she must have finished the essay she had to write.

  Krisha: I would be surprised if she had.

  Ben: Surely, if she hadn’t finished her essay, she would be working late in the library.

  Krisha: But you told me yourself this morning that the library would close early today.

  Ben: Ah, yes, I had forgotten. So, you’re right—Mary might still not have finished her essay after all but be at home all the same.

  Here Krisha uses a piece of information that Ben himself had provided (that the library is closed that evening) as a reason to cast doubt on his claim that Mary must have finished her essay. Krisha’s argument shows to Ben that he holds mutually incoherent views, one of which at least he should revise.

  Making the addressee see that it would be more coherent for him to agree than to disagree with the communicator’s claim may involve several argumentative steps, as in this illustration based on the pigeonhole problem we encountered in Chapter 1 (and which, for once, involves genuine deductive reasoning):

  Myra: Since you like puzzles, Boris, here is one. Let me read: “In the village of Denton, there are twenty-two farmers. All of the farmers have at least one cow. None of the farmers have more than seventeen cows. How likely is it that at least two farmers in Denton have the exact same number of cows?” So, Boris, what do you say?

  Boris: Well, I say it is likely.

  Myra: I say it is certain!

  Myra and Boris have come to different conclusions, and neither has the authority to persuade the other just by saying, “It is so.” Still, where authority fails, argument may succeed.

  Myra: Well, imagine you go to the village, gather the twenty-two farmers, and ask them to stand in groups, each group made of farmers who have exactly the same number of cows.

  Boris: How does that help? It could be that there are no farmers who have exactly the same number of cows, and then, in each of your “groups,” there would be only one farmer.

  Boris’s reply has an entailment that he is not yet aware of but that Myra highlights:

  Myra: This is impossible. Since none of the farmers have more than seventeen cows, there couldn’t be more than seventeen groups: a group for farmers with one cow, a group for farmers with two cows, and so on up to a group for farmers with seventeen cows. But then, given that there are twenty-two farmers and at most seventeen groups, there would be at least one group with several farmers, right?

  Boris: Yes, and?

  Myra: And since farmers in the same group would have the same number of cows, then it must be the case that at least two farmers have exactly the same number of cows. This is certain, as I said, not merely likely.

  Boris: You are right. I see it now.

  What Myra does is present Boris with intuitive arguments that spell out some clear implications of the situation described in the puzzle. The conclusion of the last intuitive argument, however, directly contradicts Boris’s initial conclusion. When he realizes this, it is more coherent for him to agree with Myra and change his mind.
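  Myra’s pigeonhole argument can also be sketched in a few lines of Python (a minimal illustration; the names and trial count are mine, not the book’s): since only seventeen distinct cow counts are available, no assignment of counts to twenty-two farmers can avoid a duplicate.

```python
import random

FARMERS = 22   # farmers in Denton
MAX_COWS = 17  # each farmer owns between 1 and 17 cows

def has_shared_count(counts):
    """True if at least two farmers have the same number of cows."""
    return len(set(counts)) < len(counts)

# Myra's argument in one line: at most 17 distinct counts exist,
# so 22 farmers cannot all differ -- a duplicate is forced, not merely likely.
assert FARMERS > MAX_COWS

# Random sampling confirms it: every assignment whatsoever has a duplicate.
for _ in range(10_000):
    counts = [random.randint(1, MAX_COWS) for _ in range(FARMERS)]
    assert has_shared_count(counts)

print("In every trial, at least two farmers had the same number of cows.")
```

  The sampling loop is only a sanity check, of course; the certainty Myra insists on comes from the counting argument, not from the trials.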

  The dialogue between Myra and Boris is a trivialized version of the Socratic method: help your interlocutors see in their own beliefs reasons to change their views. When reflection on reasoning began in the Western tradition in the work of Plato and Aristotle, the argumentative use of reasons to try to convince an interlocutor, a court, or an assembly was seen as quite central. The social aspect of reasoning was well in evidence. Socratic reasoning could be seen as reasoning par excellence. Then, however, starting already in the work of Aristotle, the study of reasoning took a different turn.

  Important normative questions about what makes an
argument valid led to a more abstract approach to reasoning and to the development of logic proper. The use of reasons in argumentation could now be seen as just one practical application among others of more general reasoning principles. A new image of the typical reasoner emerged. Rather than Socrates trying to convince his interlocutor and the interlocutor understanding the force of Socrates’s argument, the paradigm of a reasoner became the scientist reasoning on his own (more rarely on her own) to arrive at a better understanding of the world. From the point of view of the psychology of reasoning, this has been unfortunate: it has obscured the degree to which reasoning (including scientific reasoning) is a social activity, and the degree to which it is based on intuitions.20

  Reasons are not arbitrary rhetorical devices. If they were, how would they have any force? Reasons are supported by intuitions that are themselves based on genuine cognitive competencies. Intuitively good reasons are more likely to support true conclusions. Imagine, for instance, that instead of Myra trying to convince Boris, Boris had tried to convince Myra that his answer was the right one. How likely would he have been to succeed? Not very. His reasons would have been intuitively wanting. In fact, in trying to formulate them, he might himself have become aware of their inadequacy. In this case, what Myra and Boris are disagreeing about is a logical problem with a demonstrable solution. Even when the question on which two people disagree does not have a demonstrably true answer, there may be intuitively much better arguments for one answer than for another. Then, of course, there are cases where two mutually incompatible conclusions can each be supported by plausible but inconclusive reasons. In such cases, argumentation by itself typically fails to change people’s minds.

 
