by Dan Sperber
In standard cases of argumentation, that is, in the production of reasons to convince others, the same reasons have both retrospective and prospective relevance. The arguer presents herself as trying to convince the addressee of an opinion she already holds. Olav and Livia, for example, are about to order raw oysters, and Olav wonders which wine they should have. He asks Livia, and she answers: “A Muscadet! It has just the right acidity and minerality to go with oysters.” Olav has heard bad things about Muscadet, but given that Livia is more knowledgeable than he is about wine, her argument convinces him that they should have Muscadet and that he should revise his negative opinion. Livia’s arguments to convince Olav are at the same time reasons that justify her own opinions. A sincere arguer uses as arguments to convince her audience reasons that she thinks provide good, retrospective justifications of her own views.
Not only do retrospective justification and prospective reasoning overlap in many ways, not only do they draw on the same pool of reasons; they also rely, we want to argue, on one and the same mechanism, a module that delivers intuitions about reasons.
Reasons Themselves Must Be Inferred
It would be quite surprising (and interesting) to find animals other than humans that think about reasons. Reasons occupy an important place in human thinking because, we have suggested, of the unique role they play in humans’ very rich and complex social interactions. Reasons help establish personal accountability, mutual expectations, and norms. Saying this, however, doesn’t tell us how humans are capable of knowing their reasons (even if this is a quite imperfect knowledge, as we have seen).
As we pointed out in Chapter 7, it takes more to have a reason than to just recognize some fact. You can walk out and see that the pavement is wet, but you cannot just see that this is a reason to believe that it has been raining. That the pavement is wet may be an objective reason for concluding that the pavement is slippery, that the outside temperature is not below freezing point, that one’s shoes will get dirty, and so on. To think, “The pavement is wet” is not by itself to entertain a reason. You may, moreover, intuitively infer that it has been raining from the fact that the pavement is wet without this relationship between premise and conclusion being mentally represented in the process. Only if you were to entertain a thought like “From the fact that the pavement is wet it follows that it must have been raining” would you be recognizing the reason for your conclusion.
Suppose you do entertain a reason for inferring that it has been raining. The question still arises: How did you come to know that the fact that the pavement is wet is a reason for your conclusion? Reasons do not appear in our head by magic. Recognizing that some fact is a reason for inferring a given conclusion can only be achieved through—what else?—another, higher-order inference.
So, how are reasons inferred? By finding further reasons for our reasons? Sometimes, yes; most of the time, no. Assuming that the recognition of a reason must itself always be based on a higher-order reason would lead to an infinite regress: to infer a conclusion A, you would need some reason B; to infer that B is a reason to infer A, you would need a reason C; to infer that C is a reason to infer that B is a reason to infer A, you would need a reason D; and so on, without end. Hence, the recognition of a reason must ultimately be grounded not in further reasoning but in intuitive inference. This infinite regress argument is an old one: in 1895 Lewis Carroll (of Alice in Wonderland fame) published an early version of it in a short and witty note entitled "What the Tortoise Said to Achilles."2 But one question had not been addressed: What implications, if any, does all this have for psychology?
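To make the shape of the regress, and of its resolution, a little more concrete, here is a minimal sketch in Python. It is purely illustrative: the function names and the toy lookup table are hypothetical devices, not a model proposed in the text. The point is only that a justification procedure which always demands a further reason never terminates, whereas one that bottoms out in an intuitive step does.

```python
# Purely illustrative sketch of the regress and of its intuitive stopping point.

def justify_regress(conclusion, depth=0):
    """Naive picture: recognizing a reason always requires a further reason.
    Calling this would recurse without end (in practice, until Python's recursion limit)."""
    higher_order_reason = f"reason, at level {depth}, for accepting {conclusion!r}"
    return justify_regress(higher_order_reason, depth + 1)

# Regress-stopping picture: the recognition of a reason is ultimately delivered
# by an intuitive step that does not itself ask for a further reason.
intuitive_reasons = {"it has been raining": "the pavement is wet"}  # toy 'module'

def justify_grounded(conclusion):
    return intuitive_reasons.get(conclusion)  # delivered intuitively, no regress

print(justify_grounded("it has been raining"))  # -> the pavement is wet
```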
Inferences, we have argued, are made possible by the existence of regularities in the world (general regularities like the laws of physics or local regularities like the bell–food association in the lab where Pavlov kept his dogs). A regularity that makes an inference possible need not be represented as a premise in the inferential process; it can instead be incorporated in a dedicated procedure. Intuitive inferences are produced by autonomous modules using such procedures. From this perspective, the fact that the recognition of reasons is grounded in intuitive inference suggests that there must be some regularity that a module can exploit in order to recognize reasons. Is there really such a module? If so, what is the regularity involved? Or is there a better alternative account of how reasons are identified? In the psychology of reasoning, these issues are not even discussed.
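As an illustration of the contrast just drawn, consider the following sketch (the names are hypothetical and the example deliberately toy-like). In the first version, the regularity linking wet pavement to rain is represented as an explicit premise handed to a general inference step; in the second, the same regularity is simply built into the procedure of a dedicated module, which exploits it without ever representing it.

```python
# Toy contrast: a regularity represented as a premise vs. incorporated in a procedure.

# (a) The regularity appears explicitly among the premises.
premises = {
    "observation": "the pavement is wet",
    "regularity": "wet pavement is usually the result of rain",
}

def general_inference(premises):
    # A general-purpose step that must be given the regularity as an input.
    if premises["observation"] == "the pavement is wet" and "rain" in premises["regularity"]:
        return "it has been raining"

# (b) The regularity is built into the module's procedure: it is exploited,
# not represented as a premise the module reasons from.
def rain_module(observation):
    return "it has been raining" if observation == "the pavement is wet" else None

print(general_inference(premises))           # -> it has been raining
print(rain_module("the pavement is wet"))    # -> it has been raining
```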
Psychologists have studied the format in which people represent premises and the method by which they infer conclusions from these premises (this is what the debate between “mental modelers” and “mental logicians” that we evoked in Chapter 1 was all about). Much less studied is when and how people infer that specific premises provide a reason for a specific conclusion. The fact that reasons must ultimately be grounded in intuitive inference has been either ignored or deemed irrelevant, as if this ultimate intuitive grounding of reasons were too far removed from the actual processes of reasoning to be of consequence to psychology. Actually, just the opposite is the case.
In everyday reasoning, higher-order reasoning about reasons is quite rare. Most of the reasons people use are directly grounded in intuitive inference. It is intuitively obvious, for instance, that the pavement being wet is a reason to infer (with a risk of error) that it has been raining. When the intuitive grounding of reasons is indirect, the chain is short. A fastidious reasoner might add one extra step: intuitively, the most likely explanation of the pavement being wet is that it has been raining; hence the pavement being wet is a reason to infer that it has been raining. Even in more formal reasoning, where people do reason about reasons, the intuitive ground is never very far. What makes reasoning possible, not just in principle but in practice, is the human capacity to intuitively recognize reasons.
The whole dual process approach of Evans, Kahneman, Stanovich, and others that we considered in Chapter 2 has at its core the assumption that intuitive inference and reasoning are achieved through two quite distinct types of mechanisms. We disagree. One of the main claims of this book is that reasoning is not an alternative to intuitive inference; reasoning is a use of intuitive inferences about reasons.
What makes humans capable of inferring their reasons is, we claim, their capacity for metarepresentational intuitive inference. To articulate this claim, we revisit and expand ideas introduced in Chapters 3 through 5 about three topics essential to understanding the human mind: intuitions, modules, and metarepresentations.
Intuitions about Reasons
Intuitions, we suggested in Chapter 3, are produced neither by a general faculty of intuition nor by a distinct type of inferential process. They are, rather, the outputs of a great variety of inferential modules whose conclusions are to some degree conscious while their operations remain unconscious. Our question now is: Is there a module that draws intuitive inferences about reasons? To answer, we must first sharpen our understanding of intuitions generally.
The inferential mechanisms that produce intuitions, that is, conscious conclusions arrived at through unconscious processes, are quite diverse. There is no intrinsic feature that all intuitive inferences share among themselves but not with other types of inference. By way of illustration, here are two cases of inference that are clearly both intuitive but that otherwise have very little in common.
Drawing on earlier work by Wolfgang Köhler (one of the founding fathers of Gestalt psychology), the neuroscientist Vilayanur Ramachandran and the psychologist Edward Hubbard presented people with the two shapes shown in Figure 14.
Figure 14. Kiki and Bouba.
They told them, “In Martian language, one of these two figures is a bouba and the other is a kiki. Try to guess which is which.” Ninety-five percent of people picked the left figure as kiki and the right as bouba.3 The strong intuition that it should be so is based, it seems, on a synesthetic association between sounds and shapes. Here intuition is close to perception.
The bouba–kiki intuition is quite concrete. Other intuitions are quite abstract. The English philosopher G. E. Moore noted in 1942 that it would be absurd to make a statement of the form "P, but I don't believe that P," such as "It is Monday, but I don't believe that it is Monday." This observation is intuitively obvious, but it is not that easily explained. Contrary to appearances, the proposition isn't self-contradictory: it could very well be true that it is Monday and that I don't believe that it is. The absurdity isn't in what is stated but in its being stated in the first person and in the present tense. That much is clear, but it isn't enough to explain the intuition of absurdity. In fact, while the intuition of "Moore's paradox" is uncontroversial, its explanation remains to this day a topic of controversy among philosophers.4
As the bouba–kiki and the Moore paradox examples illustrate, what renders some conclusions intuitive is neither their content nor the way in which they are produced. It is the kind of confidence we have in these conclusions. Intuitions are distinguished not by their cognitive features, but by their metacognitive features.5
Confidence in our intuitions is unlike confidence in our perception or in our memory. Perception is experienced as a direct registration of how things are. Correctness of perception is generally taken for granted. Similarly, the way we use memory at every moment of our life is experienced as the direct recall of information that had been mentally stored. When perception and memory work fluently and unhampered, we are wholly unaware of the inferential work they involve.
We experience intuitions, on the other hand, as something our mind comes up with rather than as information that we just pick up from the environment or from memory. Our confidence in our intuitions is confidence in our mind's ability to go beyond the information given, in other words, to draw inferences. It is not just that our intuitions feel right; we feel that we are right in coming up with them. The conclusions of intuitive inferences are experienced as personal thoughts. When we think of objective reasons to justify our intuitions, we readily assume that we must have had these objective reasons in mind to produce these intuitions. This sense of cognitive competence needn't be conceptually articulated. It may be just a metacognitive feeling that is activated only to the degree to which we pay attention to our intuition. Still, we claim, it is this distinctive kind of self-confidence that characterizes intuitions.
We have rejected the old idea that intuitions are the outputs of a distinct faculty of intuition and the currently fashionable idea that they are the outputs of system 1 mechanisms of inference. This raises a puzzle. If, in and of themselves, intuitions are neither similar to one another nor distinct from other types of inference, why should we group them together and distinguish them from other mental states at all? Why do we have this special form of self-confidence that makes us set apart as intuitions some inferences that do not otherwise have that much in common?
Why indeed do we have intuitions at all? Here, as an aside, is a speculative answer. To distinguish a thought of your own as an intuition is to take a stance of personal authority on the content of that thought. This stance, we suggest, is less relevant to your own individual thinking than it is to the way in which you might communicate that thought to others. An intuition is a thought that, you feel, you may assert on your own authority, without an argument or an appeal to the authority of a third party. To make an assertion (or propose a course of action) on the basis of your intuition is a social move that puts others in the situation of having either to accept it or to express distrust not just in what you are saying but in your authority for saying it. By expressing an intuition as such, you are raising the stakes: you stand to gain in authority if it is accepted, and to lose if it is not. Even if your assertion is rejected, however, putting it forward as an intuition of yours may help you withstand the authority or arguments of others. Intuition may license stubbornness, which sometimes is a sensible social strategy.6
Metacognition can take not only the simpler form of evaluative feelings but also the more elaborate form of metarepresentational judgments.7 Say you are asked how long it will take you to finish writing the paper you have been working on. You reply, "A couple of days," but your feeling of self-confidence isn't very strong. Thinking about it, you are more confident that finishing the paper will take you at least a couple of days. What is involved now is more elaborate than a mere metacognitive feeling. You are metarepresenting two representations (I will have finished writing the paper in a couple of days, and … in at least a couple of days) and comparing your relative confidence in them. This time, metacognition is also metarepresentational. Distinguishing mere metacognitive feelings from more elaborate metacognitive metarepresentations is, we will show, essential to understanding reasons.
Our intuitions result from inferences about an indefinite variety of topics: bouba and kiki, Moore’s paradox, the mood of a friend, or what film we might enjoy. Our metarepresentational abilities make us, moreover, capable of having intuitions about our intuitions.
Metarepresentational intuitions about our first-order intuitions may be focused on various aspects of these intuitions: they may be, for instance, about the reliability of first-order intuitions or about their acceptability to other people with whom we would like to share them. Some of our metarepresentational intuitions are not just about our degree of confidence in our first-order intuitions but—and this is crucial to the present argument—they are about the reasons for these intuitions.
You arrive at the party and are pleased to see that your friend Molly is there too. She seems, however, to be upset. When you have a chance to talk to her, you say, “You seem to be upset tonight.” She replies, “I am not upset. Why do you say that?” Just as you had intuited that she was upset, you now intuit reasons for your initial intuition. Here are what your two intuitions might be:
First-order intuition: Molly is upset.
Metarepresentational intuition about your reasons for your first-order intuition: The fact that Molly isn't smiling and that her voice is strained is what gives me reasons to believe that she is upset.
You want to go to the cinema and hesitate between Superman 8 and Star Wars 12. You intuitively decide to go and see Superman 8. The film turns out to be rather disappointing, and you ask yourself why you had made that choice. An answer comes intuitively:
First-order intuitive decision: To go to see Superman 8.
Metarepresentational intuition about your reasons for your first-order intuitive decision: The fact that you had enjoyed Superman 7 more than Star Wars 11 was your reason for deciding to go to see Superman 8 rather than Star Wars 12.
We typically care about our reasons when our intuitions are challenged by other people or by further experience.
We have intuitions not only about our reasons for our own intuitions but also about other people’s reasons for their intuitions. You have spent a few hours inside, in a windowless conference room; you are now walking out of the building with your colleague Lin. The sky is blue, as it was when you walked in, and yet Lin says, “It has been raining.” What reasons does he have to say that? Yes, it is a bit cooler than you might have expected, but is this enough to justify Lin’s intuition? You look around and see a few puddles. Their presence, you intuit, provides a reason for Lin’s assertion:
Lin’s intuition: It has been raining.
Metarepresentational intuition about Lin’s reasons for his intuition: The fact that there are puddles is Lin’s reason to assume that it has been raining.
If, moreover, you think that the reason you attribute to Lin is a good reason, you don’t have to accept his assertion just on his authority; you now share a reason to accept it.
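The three examples (Molly, the Superman choice, Lin) share a common structure that a small, hypothetical data-structure sketch may help bring out; the class and field names below are illustrative, not terminology used in the text. A metarepresentational intuition about reasons embeds a first-order intuition, whoever its owner is, together with the fact intuited to be a reason for it.

```python
# Hypothetical sketch of the structure shared by the three examples above.
from dataclasses import dataclass

@dataclass
class Intuition:
    owner: str      # whose intuition it is: "me", "Lin", ...
    content: str    # the first-order conclusion

@dataclass
class ReasonIntuition:
    """A metarepresentational intuition: it represents a first-order intuition
    together with the fact taken to be a reason for it."""
    about: Intuition
    reason: str

lin = Intuition(owner="Lin", content="it has been raining")
meta = ReasonIntuition(about=lin, reason="there are puddles on the ground")
print(f"The fact that {meta.reason} is {meta.about.owner}'s reason "
      f"to think that {meta.about.content}.")
# -> The fact that there are puddles on the ground is Lin's reason to think that it has been raining.
```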
The Reason Module
What is this mechanism by means of which we intuitively infer reasons? What is the empirical regularity that makes reasons identifiable as such in the first place? While the attribution of reasons is hardly discussed, the attribution of beliefs and desires has been a central topic of debate in philosophy and psychology. As we saw, a common view—about which we expressed reservations—is that the regularity that helps us attribute beliefs and intentions to people is the presumption that they are rational beings. Couldn't such a presumption of rationality help us identify people's reasons? Should one attribute to others reasons that make their beliefs and intentions rational?
And what about our own reasons? How do we come to know them—or think we know them? Are the identification of other people’s reasons and that of our own reasons based on one and the same mechanism, or on two distinct mechanisms operating in quite different ways?
As we saw in Chapter 7, there are good grounds to reject the idea that there is a power or a faculty of introspection that allows us to directly read our own mind. Moreover, it is not that we had reasons in mind when reaching our intuitive conclusions, reasons that we might then introspect if there were such a thing as introspection. Rather, we typically construct our reasons after having reached the conclusions they support. In order to attribute reasons to ourselves, then, we have to infer them, just as we have to infer the reasons we attribute to others. Of course, we typically have much richer evidence about ourselves. We have some degree of direct access to our sensations and feelings. We can talk to ourselves in inner speech. Notwithstanding these differences in the evidence available, the way we draw inferences about our own reasons should be, in essential respects, similar to the way we draw inferences about the reasons of others.