The Enigma of Reason: A New Theory of Human Understanding


by Hugo Mercier and Dan Sperber


  Whether or not it would be better to be guided by reasons, the fact is that in order to believe or decide something, we do not need to pay any attention to reasons. Purely intuitive inference, which generates so many of our beliefs and decisions, operates in a way that is opaque to us. You look at your friend Molly and somehow intuit that she is upset. What are your reasons for this intuition? Or you check what films are playing tonight at the Odeon: Star Wars 12 and Superman 8. You decide to go and see Superman 8. What are your reasons for this choice? If asked, sure, you would produce reasons, but the fact is that at the moment of intuiting that Molly was upset or of choosing Superman 8, you were not consciously entertaining, let alone pondering, reasons. The opinion and the choice came to you intuitively.

  Still, one might object, reasons may well have guided us unconsciously. Moreover, we are generally able to introspect and to become conscious of our unconscious reasons. But is this really what is happening? When we explain ourselves, do we really bring to consciousness reasons that have guided us unconsciously? Let us first look at some challenging evidence and, in the next section, propose an even more radical challenge.

  The commonsense confidence in one’s ability to know one’s mind was, of course, undermined by the work of Sigmund Freud, with its focus on what he called “the Unconscious.” The existence of unconscious mental processes had been recognized long ago, by Ptolemy or Ibn al-Haytham, for example, but until Freud, these processes were seen as relatively peripheral. Mental life was regarded, for the most part, as typically conscious, or at least open to introspection. However, Freud made a compelling case that we are quite commonly mistaken about our real motivations. A century later, from the perspective of cognitive psychology, the once radically challenging idea of the “Unconscious” seems outdated. Not some, but all mental processes, affective and cognitive, are now seen as largely or even wholly unconscious. The problem has become, if anything, to understand why and how there is something like consciousness at all.4 Freud’s challenge to the idea that we know our reasons has, if anything, been expanded.

  In a very influential 1977 article, two American psychologists, Richard Nisbett and Timothy Wilson, reviewed a rich range of evidence showing that we have little or no introspective access to our own mental processes and that our verbal reports of these processes are often confabulations.5 Actually, they argued, the way we explain our own behavior isn’t that different from the way we would explain that of others. To explain the behavior of others, we take into account what we know of them and of the situation, and we look for plausible causes (influenced by the type of causal accounts that are accepted in our culture). To know our own mind and to explain our own behavior, we do the same (drawing on richer but not radically different evidence). In his book The Opacity of Mind,6 philosopher Peter Carruthers shows how much recent research has confirmed and enriched Nisbett and Wilson’s approach, which he himself extends both empirically and philosophically.

  Our inferences about others are often quite insightful; our inferences about ourselves needn’t be worse. We may often succeed in identifying bits of information that did play a role in our beliefs and decisions. Where we are systematically mistaken is in assuming that we have direct introspective knowledge of our mental states and of the processes through which they are produced.

  How much does the existence of pervasive unconscious processes to which we have no introspective access challenge our commonsense view of ourselves? The long-established fact that the operations of perception, memory, or motor control are inaccessible to consciousness isn’t really the problem. Much more unsettling is the discovery that even in the case of seemingly conscious choices, our true motives may be unconscious and not even open to introspection; the reasons we give in good faith may, in many cases, be little more than rationalizations after the fact.

  We have already encountered (in Chapter 2) a clear example of such rationalization from the psychology of reasoning. In the Wason four-card selection task, participants, before they even start reasoning, make an intuitive selection of cards. Their selection is typically correct in some versions of the task and incorrect in others, even though the problem is logically the same in all versions. Asked to explain their selection, participants have no problem providing reasons. When their choice happens to be correct, the reasons they come up with are sound. When their choice happens to be mistaken, the reasons they come up with are spurious. In both cases—sound and spurious reasons—these are demonstrably rationalizations after the fact. In neither case are participants aware of the factors that, experimental evidence shows, actually drove their initial selection (and which are the same factors whether their answer is correct or mistaken). Still, such experimental findings, however robust, smack of the laboratory.

  Fortunately, not all experimental research is disconnected from real-life concerns. The brutal murder of Kitty Genovese in New York on March 13, 1964, during which dozens of neighbors heard at least some of her cries for help but did not intervene, prompted social psychologists John Darley and Bibb Latané to study the conditions under which people are likely or unlikely to help.7 They discovered that when there are more people in a position to be helpful, the probability that any of them will help may actually decrease. The presence of bystanders makes people less likely to respond to a person’s distress (a phenomenon Darley and Latané dubbed “the bystander effect”), but it is a causal factor of which people are typically unaware.

  In one study (by Latané and Judith Rodin),8 people were told that they would participate in a market study on games. Participants were individually welcomed at the door of the lab by a friendly assistant who took them to a room connected to her office, gave them a questionnaire to fill out, and went back to her office, where she could be heard shuffling paper, opening drawers, and so on. A while later, the participant heard her climb on a chair and then heard a loud crash and a scream, “Oh, my God, my foot … I … I … can’t move it. Oh … my ankle! … I … can’t get this … thing … off me.”

  In one condition, the participant was alone in the room when all this happened. In another condition, there was a man in the room who acted as if he were a participant too (but who was, in fact, a confederate of the experimenter). This man hardly reacted to the crash and the scream. He just shrugged and went on filling out the questionnaire. When real participants were on their own in the room, 70 percent of them intervened to help. When they were together with this apparently callous participant, only 7 percent of them did.

  Immediately after all this happened, participants were interviewed about their reactions. Most of those who had taken steps to help said something like: “I wasn’t quite sure what had happened; I thought I should at least find out.” Most of those who didn’t intervene reported having thought that whatever had happened was not too serious and that, moreover, other people working in nearby offices would help if needed. They didn’t feel particularly guilty or ill at ease. Had it been a real emergency, of course, they would have helped (or so they claimed).

  When asked whether the presence of another person in the room had had an effect on their decision not to help, most were adamant that it had had no influence at all. Well, we know that they had been ten times more likely to intervene when they were alone in the room than when they were not. In other words, the presence of that other person had a massive influence on their decision. Various factors can help explain this “bystander effect”: when there are other people in a position to help, one’s responsibility is diluted; the fact that other people are not reacting creates a risk of appearing silly if one does; and so on. What is relevant here is the forceful demonstration of how badly mistaken one can be about what moves one to act or not to act (expect more striking examples in Chapter 14).

  What is happening? Do we form beliefs and make decisions for psychological reasons that are often unconscious, that we are not able to introspect, and that we reconstruct with a serious risk of mistake? Or is what generally happens even more at odds with the commonsense view of the role of reasons in our mental life?

  Modules Don’t Have Reasons

  The evidence we have considered so far suggests that humans have limited knowledge of the reasons that guide them and are often mistaken about these reasons. We want to present an even more radical challenge to the commonsense picture. It is not that we commonly misidentify our true reasons. It is, rather, that we are mistaken in assuming that all our inferences are guided by reasons in the first place. Reasons, we want to argue, play a central role in the after-the-fact explanation and justification of our intuitions, not in the process of intuitive inference itself.

  Of course, few philosophers or psychologists would deny the obvious fact that we often form beliefs and make decisions without being conscious of reasons for doing so. Still, they would argue that whether or not we are conscious of our reasons, we are guided by reasons all the same. It is just that these reasons are “implicit.” This is what happens in intuitive inference. But what are implicit reasons? How can they play their alleged guiding role without being consciously represented?

  The word “implicit” is borrowed from the study of linguistic communication, where it has a relatively clear sense. On the other hand, when psychologists or philosophers talk of implicit reasons, they might mean either that these reasons are represented unconsciously or that they aren’t represented at all (while somehow still being relevant). Often, the ambiguity is left unresolved, and talk of “implicit reasons” is little more than a way to endorse the commonsense view that people’s thought and action must in some way be based on reasons, without committing to any positive view of the psychological reality and actual role of such reasons.9

  We believe that the explicit-implicit distinction is a clear and useful one only in the study of verbal communication and that the only clearly useful sense in which one may talk of “implicit reasons” is when reasons are implicitly communicated. When, for example, Ji-Eun answers Rob’s offer of a milkshake by saying, “Thank you, but, you know, most of us Koreans are lactose intolerant,” the reason she gives for her refusal is an implicit reason, in the linguistic sense of “implicit”: it is not explicit—she doesn’t say, for instance, that she herself is lactose intolerant—but it can and should be inferred from her utterance.

  Still, one might maintain that psychological reasons can be conscious or unconscious, that at least some unconscious reasons can be made conscious, and that it makes sense to call unconscious reasons that can be made conscious “implicit reasons.” But are there really implicit reasons in this sense? We doubt it.

  Unconscious and intuitive inferences are carried out, we argued, by specialized modules. Modules take as input representations of particular facts and use specialized procedures to draw conclusions from them. When a module functions well and produces sound inferences, the facts represented in the input to the module do indeed objectively support the conclusion the module produces. This, however, is quite far from the claim that the representations of particular facts are unconscious reasons that guide the work of the module.

  Here is why not. A fact—any fact—is an objective reason not for one conclusion but for an unbounded variety of conclusions. The fact, for instance, that today is Friday is an objective reason to conclude not only that tomorrow will be a Saturday but also that the day after tomorrow will be a Sunday; that tonight begins the Jewish Shabbat; that, in many offices, employees are dressed more casually than on other working days of the week; and so on endlessly.

  The same fact, moreover, may be a strong objective reason for one conclusion and a weak one for another. For instance, the fact that the plums are ripe is a strong reason to conclude that if they are not picked they will soon fall and a weaker reason to conclude that they are about to be picked. The same fact may even be an objective reason for two incompatible conclusions. For instance, the fact that it has been snowing may be a reason to stay at home and also a reason to go skiing.

  It follows from all this that the mental representation of a mere fact is not by itself a psychological reason. The representation of a fact is a psychological reason only if this fact is represented as supporting some specific conclusion. I may know that you know that it has been snowing and not know whether this fact is for you a reason to stay home, a reason to go skiing, a reason for something else, or just a mere fact and not a reason for any particular conclusion at all. We cannot attribute reasons to others without knowing what their reasons are reasons for. Well, you might think, so what? Surely, if the representation of a fact (real or presumed) is used as a premise to derive a conclusion, then it is a psychological reason for this very conclusion. This, however, is mistaken. A belief used as a premise to derive a conclusion is not necessarily a psychological reason for this conclusion.

  A long time ago, Ibn al-Haytham argued that the mind performs unconscious inferences by going through the steps of a syllogism. If this were truly the case, then the representation of a particular fact would serve as the minor premise of such a syllogism, the regularity that justifies inferring the conclusion from this particular fact would serve as the major premise, and these two premises taken together could be seen as a psychological reason for the conclusion of the syllogism (each premise being a partial reason in the context of the other premise). This logicist understanding of all inferences, conscious or unconscious, is still commonly accepted, and this may be why it seems self-evident that whatever information is used as an input (or a “premise”) to an inference has to be a reason for the conclusion of this inference. The view, however, that unconscious inferences are produced by going through the steps of a syllogism, or more generally through the steps of a logical derivation, has been completely undermined by modern research in comparative psychology and in the psychology of perception (as we argued in Chapter 5).
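  To make the contrast with the modular picture concrete, here is a minimal sketch of what the syllogistic picture claims; it is purely illustrative (the toy representation scheme, names, and example are ours, not Ibn al-Haytham’s or anyone else’s actual model): the regularity itself is represented, as a major premise, alongside the particular fact, and the two premises jointly serve as the reason for the conclusion.

```python
# Illustrative only: on the syllogistic picture, the regularity (major premise)
# is explicitly represented alongside the particular fact (minor premise),
# and the conclusion is derived from the pair acting jointly as a reason.

major_premise = ("Friday", "is followed by", "Saturday")  # the represented regularity
minor_premise = "today is Friday"                         # the represented particular fact

def derive(major, minor):
    """Draw the conclusion only if the particular fact matches the
    antecedent of the regularity; both premises are consulted."""
    antecedent, _, consequent = major
    if antecedent.lower() in minor.lower():
        return f"tomorrow is {consequent}"
    return None

print(derive(major_premise, minor_premise))  # tomorrow is Saturday
```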

  Unconscious inferences, we argued, are produced by modules; these modules exploit the regular relationship that exists between the particular facts they use as input and the conclusion they produce as output, but they don’t represent this relationship either as the major premise of a syllogism or in any other way. Representations of particular facts, we pointed out, are not by themselves psychological reasons for any particular conclusion. Modules, in any case, don’t need reasons to guide them. They can use representations of facts as input without having to represent, either as a reason or in any other way, the relationship between these facts and the conclusions they derive from them. Modules don’t need motivation or guidance to churn out their output.

  Consider, as a first illustration, the case of a rudimentary inference that, even though it takes place in the nervous system, doesn’t, properly speaking, take place in the mind. Perspiration is a well-developed mechanism in humans (and in horses).10 When body temperature rises too much, the hypothalamus, a brain structure, triggers the production of sweat that cools the body. This is obviously something that happens in us and to us rather than something that we intend. Still, the perspiration mechanism performs a rudimentary practical inference: it takes as input information about body temperature provided by various neural detectors and, when appropriate, it yields as output instructions to the sweat glands. In doing so, it exploits the general fact that, above some temperature threshold, sweating is appropriate, but it does not represent this general fact. Information about current body temperature, which is represented in the module, isn’t by itself a reason for anything in particular. The module clearly does its job without being guided by any psychological reason, and it doesn’t need any such reason to perform its job.
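  To see how little is needed, here is a minimal sketch of a “module” in this sense; it is purely illustrative (the threshold constant and the function name are ours, not physiological values): the regularity “above some temperature, sweating is appropriate” is built into the procedure itself rather than represented as a premise the mechanism consults.

```python
# Illustrative sketch only: a threshold-triggered "module".
# SET_POINT_C is a made-up constant, not an actual physiological value.
SET_POINT_C = 37.0  # the regularity is baked into the procedure as a constant

def perspiration_module(body_temperature_c: float) -> bool:
    """Map a represented fact (current body temperature) to an output
    (trigger sweating or not). The link between temperature and sweating
    is nowhere represented as a premise or reason; it simply is how the
    procedure works."""
    return body_temperature_c > SET_POINT_C

print(perspiration_module(38.2))  # True: instruct the sweat glands
print(perspiration_module(36.5))  # False: do nothing
```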

  As a second example, take desert ants. Once they have found some piece of food, they speed back to their nest in an almost straight line (as we saw in Chapter 3). Ants know where to go thanks to what Wehner calls a “navigational toolkit,” a complex cognitive module with specialized submodules. The procedures used by the submodules (counting steps, taking into account angular changes of trajectory, and so on) each evolved to take advantage of a reliable regularity without, however, representing this regularity. On each foray the ants make outside the nest, relevant information inferred by these submodules is automatically integrated and contributes to determining the ants’ return path. In explaining this process, there is no ground to assume that ants are guided by explicit reasons or that the modules involved are guided by implicit reasons.
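  For readers who want the computation spelled out, here is a minimal sketch of path integration (“dead reckoning”), the kind of bookkeeping such a navigational toolkit is thought to perform; it is purely illustrative (the fixed step length and the function names are our simplifications, not Wehner’s model). Each step’s displacement is accumulated, and the home vector is simply the accumulated displacement reversed; the regularities that make this work are exploited by the procedure, not represented by it.

```python
import math

def integrate_path(headings, step_length=1.0):
    """Accumulate the displacement of each step; headings are in radians."""
    x = y = 0.0
    for heading in headings:
        x += step_length * math.cos(heading)
        y += step_length * math.sin(heading)
    return x, y

def home_vector(headings, step_length=1.0):
    """Distance and direction pointing straight back to the nest."""
    x, y = integrate_path(headings, step_length)
    return math.hypot(x, y), math.atan2(-y, -x)

# A meandering outward foray, then the near-straight return it licenses:
foray = [0.0, 0.3, 0.3, 1.2, 1.2, -0.5]
distance, direction = home_vector(foray)
print(f"return: {distance:.2f} step lengths, heading {math.degrees(direction):.0f} degrees")
```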

  Why do we bother, you may ask, to make the obvious point that human perspiration and desert ant orientation are not guided by reasons, either explicit or implicit? Because considerations that are relevant in these two cases extend quite naturally to less straightforward cases and, to begin with, to the case of inference in perception.

  Remember the Adelson checkerboard illusion we talked about in Chapter 1 (see Figure 4)? Participants are asked which of two squares in the picture of a checkerboard is darker when, actually, both are exactly the same shade of gray. They see—here is the illusion—one of the two squares as much lighter than the other. They do so because they infer the relative darkness of a surface not just from the light it reflects to their eyes but also from the light they assume it receives. One of the two squares is depicted as being in the shadow of a cylinder and therefore as receiving less light than the other. Participants automatically compensate for this difference in light received, and see this square as lighter than it is.

  In this textbook example of the role of inference in perception, no one would argue that we have conscious reasons to see one square as darker than the other. The module involved computes the relative darkness of each square as the ratio of the light the square reflects to the light it receives because it evolved to do so; in nonillusory cases, there are objective reasons why it should do so; these reasons, however, are not represented in the mechanism. The module is not guided by unconscious psychological reasons.
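  Spelled out as a toy computation (the numbers are ours, purely illustrative, and not Adelson’s), the point is simply that the same reflected luminance yields different estimated lightness once it is divided by the illumination the square is assumed to receive:

```python
def perceived_lightness(reflected_luminance, assumed_illumination):
    """Estimate surface lightness as the ratio of light reflected
    to light assumed to be received (illustrative units)."""
    return reflected_luminance / assumed_illumination

same_luminance = 120.0  # the two squares are identical in the image
in_full_light = perceived_lightness(same_luminance, assumed_illumination=200.0)
in_shadow = perceived_lightness(same_luminance, assumed_illumination=100.0)
print(in_full_light, in_shadow)  # 0.6 vs 1.2: the shadowed square looks lighter
```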

 
