
The Enigma of Reason: A New Theory of Human Understanding

by Hugo Mercier and Dan Sperber


  Major premise:

  If Mary has an essay to write, then she will study late in the library.

  Minor premise:

  She has an essay to write.

  Participants had no difficulty deducing:

  Conclusion: Mary will study late in the library.

  So far, so good. To another group of people, however, Byrne presented the same problem, but this time with an additional major premise:

  First major premise:

  If Mary has an essay to write, then she will study late in the library.

  Second major premise:

  If the library stays open, then Mary will study late in the library.

  Minor premise:

  She has an essay to write.

  From a strictly logical point of view, the second major premise is of no relevance whatsoever. So, if people were logical, they should draw the same valid modus ponens conclusion as before. Actually, only 38 percent of them did.
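
  The point can even be checked mechanically. Here is a minimal sketch in the Lean proof language (our illustration, not part of Byrne's study; the theorem name is ours): the derivation of the conclusion never touches the second major premise, which is the precise sense in which that premise is logically irrelevant.

  -- P: Mary has an essay to write; Q: she will study late in the
  -- library; R: the library stays open. The underscore marks _h2 as
  -- deliberately unused: the proof needs only the first major premise
  -- and the minor premise.
  theorem byrne_problem {P Q R : Prop}
      (h1 : P → Q)    -- first major premise
      (_h2 : R → Q)   -- second major premise, never used
      (h3 : P)        -- minor premise
      : Q :=
    h1 h3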

  What Byrne was trying to prove was not that humans are irrational—mental modelers don’t believe that—but that mental logicians have the wrong theory of human rationality. If, as mental logicians claim, people had a mental modus ponens rule of inference, then that inference should be automatic, whatever the context. Participants are instructed to take the premises as true, so, given the premises “If Mary has an essay to write, then she will study late in the library” and “Mary has an essay to write,” they should without hesitation conclude that she will study late in the library. What about the possibility that the library might be closed? Well, what about it? After all, for all you know, Mary might have a pass to work in the library even when it is closed. A logician would tell you, just don’t go there. This is irrelevant to this logic task, just as the possibility that a bubble might burst would be irrelevant to the arithmetic task of adding three bubbles to two bubbles.

  Did mental logicians recognize, in the light of Byrne’s findings, that their approach was erroneous? Well, no; they didn’t have to. What they did instead was propose alternative explanations.20 People might, for instance, consolidate the two major premises presented by Byrne into a single one: “If Mary has an essay to write and if the library stays open, then Mary will study late in the library.” This, after all, is a realistic way of understanding the situation. If this is how people interpret the major premises, then the minor premise, “She has an essay to write,” is not sufficient to trigger a valid modus ponens inference, and Byrne’s findings, however intrinsically interesting, are no evidence against mental logic.

  Is There a Defendant at This Trial?

  The prosecution of reason might enjoy watching mental logicians and mental modelers, all expert witnesses for the defense, fight among themselves, but surely, at this point, the jury might grow impatient. Isn’t there something amiss, not with the reasoning of people who participate in these experiments, but rather with the demands of psychologists?

  Experimentalists expect participants to accept the premises as true whether those premises are plausible or not, to report only what necessarily follows from the premises, and to completely ignore what is merely likely to follow from them—to ignore the real world, that is. When people fail to identify the logical implications of the premises, many psychologists see this as proof that their reasoning abilities are wanting. There is an alternative explanation, namely, that the artificial instructions given to people are hard or even, in many cases, impossible to follow.

  It is not that people are bad at making logical deductions; it is that they are bad at separating these deductions from probabilistic inferences that are suggested by the very same premises. Is this, however, evidence of people’s irrationality? Couldn’t it be seen rather as evidence that psychologists are making irrational demands?

  A comparison with the psychology of vision will help. Look at Figure 4, a famous visual illusion devised by Edward Adelson. Which of the two squares, A or B, is of a lighter shade of gray? Surely, B is lighter than A—this couldn’t be an illusion! But an illusion it is. However surprising, A and B are of exactly the same shade.

  Figure 4. Adelson’s checkerboard illusion.

  In broad outline, what happens is not mysterious. Your perception of the degree to which a surface is light or dark tracks not the amount of light that is reflected to your eyes by that surface but the proportion of the light falling on that surface that is reflected by it. The higher this “reflectance” (as this proportion is called), the lighter the surface; the lower this reflectance, the darker the surface:
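
  reflectance = (light reflected by the surface) / (light falling on the surface)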

  The same gray surface may receive and therefore reflect more or less light to your eyes, but if the reflectance remains the same, you will perceive the same shade of gray. Your eyes, however, get information on just one of the two quantities—the light reflected to your eyes. How, then, can your brain track reflectance, that is, the proportion between the two quantities, only one of which you can sense, and estimate the lightness or darkness of the surface? To do so, it has to use contextual information and background knowledge and infer the other relevant quantity, that is, the amount of light that falls on the surface.
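
  Some illustrative numbers (ours, chosen for round figures) show how two different surfaces can send the very same signal. Suppose square A reflects 40 percent of the light falling on it and stands in full illumination of 100 units: it sends 0.40 × 100 = 40 units to your eyes. Suppose square B reflects 70 percent but, in the shadow, receives only about 57 units: it sends 0.70 × 57 ≈ 40 units, the same amount. From the reflected light alone, the two squares are indistinguishable; only by inferring how much light falls on each can the brain assign them different shades.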

  When you look at Figure 4, what you see is a picture of a checkerboard, part of which is in the shadow of a cylinder.

  Moreover, you expect checkerboards to have alternating light and dark squares. You have therefore several sound reasons to judge that square B—one of the light squares in the shade—is lighter than square A—one of the dark squares receiving direct lighting. Or rather you would have good reasons if you were really looking at a checkerboard partly in the shadow of a cylinder and not at a mere picture. The illusion comes from your inability to treat this picture just as a two-dimensional pattern of various gray surfaces and to ignore the three-dimensional scene that is being depicted.

  Painters and graphic designers may learn to overcome this natural tendency to integrate all potentially relevant information. The rest of us are prey to the illusion. On discovering this illusion, should we be taken aback and feel that our visual perception is not as good as we had thought it to be, that it is betraying us? Quite the opposite! The ability to take into account not just the stimulation of our retina but what we intuitively grasp of the physics of light and of the structure of objects allows us to recognize and understand what we perceive. Even when we look at a picture rather than at the real thing, we are generally interested in the properties of what is being represented rather than in the physical properties of the representation itself. While the picture of square A on the paper or on the screen is of the same shade of gray as that of square B, square A would be considerably darker than square B on the checkerboard that this picture represents. The visual illusion is evidence of the fact that our perception is well adapted to the task of making sense of the three-dimensional environment in which we live and also, given our familiarity with images, to the task of interpreting two-dimensional pictures of three-dimensional scenes.

  Now back to Mary, who might study late in the library. In general, we interpret statements on the assumption that they are intended to be relevant.21 So when given the second major premise, “If the library stays open, then Mary will study late in the library,” people sensibly assume that they are intended to take this premise as relevant. For it to be relevant, it must be the case that the library might close and that this would thwart Mary’s intention to study late in the library. So, yes, participants have been instructed to accept as absolutely true that “if Mary has an essay to write, then she will study late in the library,” and they seem not to. However, being unable to follow such instructions is not at all the same thing as being unable to reason well. Treating information that has been intentionally given to you as relevant isn’t irrational—quite the contrary.

  It takes patience and training for a painter to see a color on the canvas as it is rather than as it will be perceived by others in the context of the whole picture. Similarly, it takes patience and training for a student of logic to consider only the logical terms in a premise and to ignore contextual information and background knowledge that might at first blush be relevant. What painters do see and we don’t is useful to them as painters. The inferences that logicians draw are useful to them as logicians. Are the visual skills of painters and the inferential skills of logicians of much use in ordinary life? Should those of us who do not aspire to become painters or logicians feel we are missing something important for not sharing their cognitive skills? Actually, no.

  The exact manner in which people in Ruth Byrne’s experiment are being reasonable is a matter for further research, but that they are being reasonable is reasonably obvious. That people fail to solve rudimentary logical problems does not show that they are unable to reason well when doing so is relevant to solving real-life problems. The relationship between logic on the one hand and reasoning on the other is far from being simple and straightforward.

  At this point, the judge, the jury, and our readers may have become weary of the defense’s and the prosecution’s grandstanding. The trial conceit is ours, of course, but the controversy (of which we have given only a few snapshots) is a very real one, and it has been going on for a long time. While arguments on both sides have become ever sharper, the issue itself has become hazier and hazier. What is the debate really about? What is this capacity to reason that is both claimed to make humans superior to other animals and of such inferior quality? Do the experiments of Kahneman and Tversky on the one hand and those of “mental logicians” and “mental modelers” on the other hand address the same issue? For that matter, is the reasoning they talk about the same thing as the reason hailed by Descartes and despised by Luther? Is there, to use the conceit one last time, a defendant in this trial? And if there is, is it reason itself or some dummy mistaken for the real thing? Is reason really a thing?

  2

  Psychologists’ Travails

  The idea that reason is what distinguishes humans from other animals is generally traced back to the ancient Greek philosopher Aristotle.1 Aristotle has been by far the most influential thinker in the history of Western thought, where for a long time he was simply called “the Philosopher,” as if he were the only one worthy of the name. Among many other achievements, he is credited with having founded the science of logic. In so doing, he provided reason with the right partner for a would-be everlasting union—or so it seemed. Few unions indeed have lasted as long as that between logic and reason, but lately (meaning in the past hundred years or so), the marriage has been tottering.

  Reason and Logic? It’s Complicated

  Until the end of the nineteenth century, it went almost without saying that logic and the study of reasoning, while not exactly the same thing, were two aspects of a single enterprise. Logic, it was thought, describes good or correct reasoning. Not all reasoning is good—as we saw, far from it—but all reasoning, so the story goes, ought to be, and aims to be, logical. Bad reasoning is reasoning that tries to be logical but fails (or else it is sophistry merely pretending to be logical). Hence logic defines what reasoning is, just as a grammar defines a language, even if we often express ourselves ungrammatically.

  Typical textbook examples of reasoning begin with a bit of simple logic and often end there—without a word on what goes on in the mind of the reasoner. They generally involve a pair of premises and a conclusion. For instance (to use one of Aristotle’s best-known examples of so-called categorical syllogism):

  Premises: 1. All humans are mortal.

  2. All Greeks are humans.

  Conclusion: All Greeks are mortal.

  From the propositions that all humans are mortal and that all Greeks are humans, it logically follows that all Greeks are mortal. Similarly, from the propositions that Jack lent his umbrella either to Jill or to Susan and that he did not lend it to Jill, it logically follows that he lent it to Susan (this being an example of a “disjunctive syllogism”). One of the achievements of Aristotelian logic was to take such clear cases of valid deductions and to schematize them.

  Forget about humans, mortals, and Greeks. Take any three categories whatsoever, and call them A, B, and C. Then you can generalize and say that all syllogisms that have the form of the following schema are valid:

  Premises: 1. All As are Bs.

  2. All Cs are As.

  Conclusion: All Cs are Bs.

  Forget about umbrellas, Jack, Jill, and Susan. Take any two propositions whatsoever, and call them P and Q. Then you can generalize and again say that all syllogisms that have the form of the following schema (corresponding to the “or” rule we talked about in Chapter 1) are valid:

  Premises: 1. P or Q

  2. not P

  Conclusion: Q

  What is the point of identifying such schemas? It is to go from the intuition that some particular deductions happen to be valid—deductions, for instance, about the Greeks being mortal or about Jack having lent his umbrella to Susan—to a formal account of what makes valid not just these particular deductions but all deductions of the same form. By replacing concrete contents (that are taken to be irrelevant to deduction) with arbitrary symbols such as capital letters—a device invented by Aristotle—you end up with a “logical form” that contains just terms such as “all,” “or,” or “not” that are relevant to premise-conclusion relationships. Deduction schemas display logical forms that stand in such relationships.
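
  Such schemas can even be stated and machine-checked. Here is a minimal sketch in the Lean proof language (our illustration; the theorem names are ours), covering both schemas above for arbitrary contents:

  -- Aristotle’s schema: for any categories A, B, C (modeled as
  -- predicates over some domain α), if all As are Bs and all Cs
  -- are As, then all Cs are Bs.
  theorem all_syllogism {α : Type} (A B C : α → Prop)
      (h1 : ∀ x, A x → B x)    -- premise 1: All As are Bs.
      (h2 : ∀ x, C x → A x)    -- premise 2: All Cs are As.
      : ∀ x, C x → B x :=      -- conclusion: All Cs are Bs.
    fun x hc => h1 x (h2 x hc)

  -- The “or” schema (disjunctive syllogism): for any propositions
  -- P and Q, from “P or Q” and “not P,” the conclusion Q follows.
  theorem or_syllogism {P Q : Prop} (h1 : P ∨ Q) (h2 : ¬P) : Q :=
    h1.elim (fun hp => absurd hp h2) (fun hq => hq)

  Notice that the proofs never mention Greeks, umbrellas, or any other concrete content; validity turns on logical form alone.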

  For more than two thousand years, scholars felt no need to go beyond Aristotelian logic. The author of the Critique of Pure Reason, Immanuel Kant, could, at the end of the eighteenth century, maintain that since Aristotle, logic “has been unable to take a single step forward, and therefore seems to all appearance to be finished and complete.”2 He couldn’t have been more mistaken.

  In the past two hundred years, logic has developed well beyond and away from its Aristotelian origins both in scope and in sophistication. It has diversified into many different subfields and approaches, and even into a plurality of logics. Modern deductive logic provides a formal account of a much greater variety of valid deductions than did classical logic. It does so not by means of a catalogue of deduction schemas but by deriving such schemas from first principles with elaborate methods. Many of the deductions studied in modern logic, however, even relatively simple ones, are no more part of ordinary people’s repertoire than are advanced theorems in mathematics. True, there are some research programs in logic that aim at being relevant to psychology, but modern logic as a whole does not.

  The experimental study of reasoning started in the twentieth century.3 By then, many logicians saw logic as a purely formal system closely related to mathematics. Gottlob Frege, the German founder of modern logic, had denounced the very idea that logic is about human reasoning as a fallacy, the fallacy of “psychologism”: logic is no more about human reasoning than arithmetic is about people’s understanding and use of quantities. This is now the dominant view.

  And yet, while most logicians were turning their backs on psychology, most psychologists of reasoning were still looking to logic in order to define their domain, divide it into subdomains, and decide what constitutes good and bad reasoning. Until recently, it rarely crossed their minds that this could amount to a fallacy of “logicism” in psychology symmetrical to the fallacy of psychologism in logic.4

  True, thinking of reasoning as a “logical” process can seem quite natural. When people reason, some thoughts occur first in their mind, and have to occur first for other thoughts to occur afterward. It may be tempting to equate this temporal and causal sequence of thoughts with a logical sequence of propositions in a deduction. The very words “consequence” and “follows” used in logic evoke a time sequence. But no, these words do not, in logic, refer to temporal relationships. The order of propositions in a logical sequence is no more a genuine temporal order than is the order of the positive integers, 1, 2, 3, …, in arithmetic. Psychological processes have duration and involve effort. Logical sequences have no duration and involve no effort.

  In logic, the word “argument” describes a timeless and abstract sequence of propositions from premises to conclusion. In ordinary usage, on the other hand, an argument is the production, in one’s mind or in conversation, of one or several reasons one after the other in order to justify some conclusion. What can we do here to avoid confusion? Since the psychology of reasoning has focused on classical deductive arguments, also known as “syllogisms,” this is the term we will use in our critical discussion. We will always use “argument,” on the other hand, in the ordinary, nontechnical sense.

  Couldn’t the series of reasons given to convince an audience match logical sequences from premises to conclusion? Well, this is not what usually happens. Often, when you argue, you start by stating the conclusion you want your audience to accept—think of a lawyer pleading her client’s innocence, or think of political discussions—and then you give reasons that support this conclusion. It is commonly assumed, all the same, that most, if not all, ordinary reasoning arguments must, to be arguments at all, correspond to syllogisms; if the correspondence is not manifest, then it must be implicit; some premises must have been left out for the sake of brevity. Most ordinary arguments are, according to this view, “enthymemes,” that is, truncated syllogisms. This, we will argue, is just old dogma, so much taken for granted that little or no effort is made to justify it empirically.

  Logic and the psychology of reasoning, which had been so close to one another, have moved in different directions. They still seem to have many concepts in common, but what they actually share are labels, words that have taken on different meanings in each discipline, creating much confusion.5 “Argument” is not the only word used to describe both an abstract logical thing and a concrete psychological phenomenon. Many other words, such as “inference,” “premise,” “conclusion,” “valid,” or “sound,” have been borrowed from one domain to the other and are used in both cases with little attention to the fact that they are used differently. Even the word “reasoning” has been used by logicians to talk about syllogisms, logical derivations, and proofs, and the word “logical” is commonly used as a psychological term (as in “Be logical!”). We will try to avoid the fallacies that may result from such equivocations.

 
