The Enigma of Reason: A New Theory of Human Understanding


by Hugo Mercier and Dan Sperber


  5

  Cognitive Opportunism

  A large army moving as a unit ignores, when it can, irregularities of the terrain, or else it treats them as obstacles to be overcome. Autonomous guerrilla groups, on the other hand, approach such local features as opportunities and try, when possible, to use them to their advantage. Steering a motorboat involves making minor adjustments to take into account the effect of winds on the boat’s course. Sailing, on the other hand, involves treating winds and wind changes as opportunities to be exploited. The general contrast is clear: similar goals may be achieved sometimes by planning a course of action and using enough power to be able to stick to it, and sometimes by exploiting opportunities along the way and moving forward in a more frugal manner.

  The classical view of inference assumes a powerful logic engine that, whatever the peculiarities of the task at hand, steers the mind on a straight and principled path. The view we favor is that inference, and cognition more generally, are achieved by a coalition of relatively autonomous modules that have evolved in the species and that develop in individuals so as to solve problems and exploit opportunities as they appear. Like guerrilla warfare or sailing, cognition is opportunistic.

  Without Darwin’s idea of evolution by natural selection—the paradigm of an opportunistic process—would the idea that mental processes are opportunistic ever have emerged? The fact is that it emerged only when Darwin’s ideas started influencing psychology. Still, already well before Darwin, the discovery of unconscious inference presented a challenge to the classical view of the mind as unitary and principled. The first to properly understand and address the challenge was the Arab scientist Ibn Al-Haytham (also known as Alhacen), born in Basra in 965 CE, who took up the study of unconscious inference in visual perception where Ptolemy had left it eight centuries before and who developed it much further.1

  Ibn Al-Haytham’s Conjecture

  How does unconscious inference proceed? Ibn Al-Haytham wondered. Does it use the same method as conscious inference? At first sight, there is little in common between the immediate and automatic inferences involved in, say, perception and the deliberate and often painstakingly slow inferences of conscious reasoning. Ibn Al-Haytham realized that there is, as we have argued in Chapter 4, a continuum of cases between conscious and unconscious inference. He conjectured that in spite of their apparent differences, conscious and unconscious inference make use of the same tools. What tools? Aristotelian syllogisms, he thought. In his day, there were no real alternatives.

  Today, there are several quite different accounts of how inference may proceed. There are many different systems of logic. In psychology, there are several “mental logic” accounts, and there is the theory developed by Johnson-Laird and Byrne that all genuine inference is achieved by constructing and manipulating mental models. Probabilistic models of inference—in particular, those based on the ideas of the eighteenth-century English cleric and scholar Thomas Bayes—have recently inspired much novel research.2 It could be that several of these approaches each provide a good account of some specific type of inference while none of them offer an adequate account of inference in general. Most proponents of these approaches, however, tend to agree with Ibn Al-Haytham that there must exist one general method that guides inference in all its forms. They disagree with him and among themselves as to what this true method might be.

  Assuming that all inferences use the same general method, whichever it might be, raises, Ibn Al-Haytham realized, a deep puzzle. How can it be that one and the same method is sometimes deployed in a slow and effortful manner, and sometimes without any conscious expenditure of time or effort? Why not use the fast mode all of the time? His answer was that all inferences must initially be performed through conscious and effortful reasoning. Some of these inferences, having been done again and again, cease to present any difficulty; they can be performed so fast that one isn’t even aware of them. So, he argued, degrees of consciousness do not correspond to different types of inference but only to different levels of difficulty, with the most routine inferences being the easiest and least conscious. From sophisticated reasoning on philosophical issues (a rare occurrence) down to automatic inference in perceiving relative size (which occurs all the time), all inference proceeds, Ibn Al-Haytham maintained, in one and the same way.

  Arguing that, initially, all inferences are conscious and that some of them become unconscious by the force of habit is quite ingenious, but is it true? Most probably not, since it entails blatantly wrong predictions. If fast and unconscious inferences were so because they have become wholly routinized, one should, for instance, expect infants to draw inferences in a slow and conscious manner. They should reach the automaticity of routine only at a later age and through extended practice. Developmental psychologists have shown, however, that infants automatically perform a variety of ordinary inferences years before they start engaging in deliberate, conscious reasoning, contrary to what Ibn Al-Haytham’s explanation would lead us to predict.

  Here is one example among many. Psychologists Amy Needham and Renée Baillargeon showed 4.5-month-old infants either a possible or an impossible event (see Figure 10).3 In the “possible event” condition, infants saw a hand put a box on a platform. In the “impossible event” condition, they saw the hand release the box beyond the platform in midair. In both cases, the box stayed where the hand had released it. Infants looked longer at the impossible event of the box staying in midair without falling. This difference in looking time provides good evidence that the infants expected the box to fall, just as adults would have.

  Let’s assume with Ibn Al-Haytham that all inferences are made by following a logical schema. One should, then, conclude that infants had expected the unsupported box to fall because they had performed something like the following conditional syllogism:

  Premises: 1. If an object is unsupported, it will fall.

  2. The object is unsupported.

  Conclusion: The object will fall.
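  Rendered as a mechanical rule, this schema is trivial to state. Here is a minimal sketch of ours, for illustration only (the function and its arguments are invented; whether infants compute anything like this is precisely what is at issue below):

```python
# Modus ponens as a mechanical rule (our illustrative sketch; the
# passage questions whether infants actually reason this way).
def modus_ponens(conditional: tuple, fact: str):
    """From 'if antecedent then consequent' and 'antecedent',
    derive 'consequent'; otherwise derive nothing."""
    antecedent, consequent = conditional
    return consequent if fact == antecedent else None

conclusion = modus_ponens(
    conditional=("the object is unsupported", "the object will fall"),
    fact="the object is unsupported",
)
print(conclusion)  # "the object will fall"
```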

  Figure 10. 4.5-month-old infants shown the physically impossible event are surprised.

  One should expect, moreover, infants to make this inference in a slow, effortful, and conscious manner (until, with age and experience, it happens so fast as to be unconscious). This is not, however, what psychologists observe.

  The evidence shows that experience does matter, but not in the way Ibn Al-Haytham might have predicted. At 4.5 months of age, infants don’t pay attention to the amount of support the box gets. Even if only 15 percent of its bottom surface is supported by the platform, they expect it to remain stable. By 6.5 months of age, they have learned better and expect the box to fall when it is not sufficiently supported.4 There is no evidence or argument, however, that this progression from the age of 4.5 to that of 6.5 months is achieved through slow, conscious, effortful reasoning becoming progressively routinized. What is much more plausible is that infants, using procedures that are adjusted to the task, automatically and unconsciously extract statistical regularities in a way that ends up enriching the procedures themselves.

  With all the extraordinary work done in the past fifty years on infant cognition, it is no longer controversial that babies are able to take account of basic properties of physical objects in their inferences and to do so with increasing competence. What is dubious is the idea that in expecting an unsupported object to fall, infants actually make use of a general conditional premise about the fall of objects. Do infants really have such general knowledge? Do they, between 4.5 and 6.5 months of age, correct this knowledge by representing the amount of support an object needs not to fall? For Ibn Al-Haytham and many modern authors, the answer would have had to be yes: an inference must be based on a logical schema and on mentally represented premises. No logic, no inference.

  If Ibn Al-Haytham had been right that, without logic, there can be no inference, shouldn’t this claim be true not just of human but also of animal inference? The philosopher Jerry Fodor has argued it is: “Darwinian selection guarantees that organisms either know the elements of logic or become posthumous.”5

  Well, there is another way.

  Representations and Procedures

  All inference, whether made by ants, humans, or robots, involves representations and procedures. This distinction has played an important role in the development of artificial intelligence (under labels such as “data” versus “procedures” or “declarative” versus “procedural”).6 It is also highly relevant to our understanding of the evolution of the modular mind.

  A word, first, about “representation,” a notion that causes a lot of confusion. It is quite common to understand the notion of a representation on the model of an image or of a verbal statement. Pictures and utterances are familiar objects in our environment, which we produce and use to communicate with one another. We also use them as cognitive tools. We use written numerals to calculate; maps to plan a trip; shopping lists as external memory props; and so on.

  Unlike pictures and spoken or written utterances, however, most of the representations we use are located not in our environment but in our brains; we use them not to communicate with others but to process information on our own. All the same, it is tempting to assume that mental representations are somehow structured like pictures or like utterances. Don’t we, after all, have mental images? Don’t we silently talk to ourselves in our mind? Couldn’t all of our thinking be done with a mixture of images and inner speech? Such considerations, however, fall quite short of demonstrating that all or even most of our mental representations must be structured like public representations or, for that matter, must be structured at all.

  So, you might ask, what else could representations be?

  Representations, as we will use the term,7 are material things, such as activation of groups of neurons in a brain, magnetic patterns in an electronic storage medium, or ink patterns on a piece of paper. They can be inside an organism or in its environment. What makes such a material thing a representation is not its location, its shape, or its structure; it is its function. A representation has the function of providing an organism (or, more generally, any information-processing device) with information about some state of affairs. The information provided may be about actual or about desirable states of affairs, that is, about facts or about goals.

  As a very simple example, consider motion detectors used in alarm systems. Better-quality motion detectors simultaneously use two types of sensors, such as (1) a microwave sensor that emits microwaves and detects, in the reflected waves, changes typically caused by moving bodies, and (2) an infrared sensor that detects the radiation emitted by a warm body. The joint use of two types of sensors lowers the risk of false alarms. When activated, each of the sensors emits an electric signal that has the function of informing the next device in the system that a sensor-activating event has happened. This next device is the same for both sensors and is known as an “AND-gate” (Figure 11). Its function is inferential: when it is informed by two electric inputs that both the first and the second sensor are being activated, it triggers an acoustic signal. This signal informs human agents that the probability of an intrusion has reached a threshold such that action is called for.

  Although they are neither picture-like nor statement-like and although they need no internal structure to perform their function, the electric signals of the two sensors that serve as inputs to the AND-gate and the acoustic signal that is the output of the AND-gate have the function of providing an electronic device or a human agent with information about a specific type of occurrence, and can therefore be described as “representations” in the sense in which we use the term.

  Figure 11. The AND-gate used in a dual-technology motion detector.
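  To make the gate’s inferential function concrete, here is a minimal sketch of ours (not from the book; the sensor readings are simulated as booleans, and all names are invented for the example):

```python
# A minimal sketch (our illustration) of the dual-technology motion
# detector described above: each boolean stands for the electric signal
# a sensor emits; in the book's terms, each is a "representation" whose
# function is to inform the next device that an activating event occurred.

def and_gate(microwave_signal: bool, infrared_signal: bool) -> bool:
    """Inferential step: signal an intrusion only when BOTH sensors
    report activation, lowering the risk of false alarms."""
    return microwave_signal and infrared_signal

readings = [
    (False, False),  # nothing detected
    (True,  False),  # e.g., a curtain moving in a draft (no warm body)
    (False, True),   # e.g., a radiator warming up (no motion echo)
    (True,  True),   # a moving warm body: probable intrusion
]

for microwave, infrared in readings:
    if and_gate(microwave, infrared):
        print("Acoustic alarm: probable intrusion, action called for")
```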

  Of course, the whole process could be described in physical terms, without talk of information, function, or representation. Still, to understand why people build, sell, and buy motion detectors, going beyond a purely physical account and describing what the device does in terms of information and function is perspicuous. Here, we exploit the case of motion detectors to introduce in the simplest possible way the notion of representation, which we need to tell our story. Our use of “representation” is quite pragmatic:8 we know of no sensible way to talk about inference and reasoning without using some such notion (whether one uses the term or not).

  Inferential procedures apply to representations. They take representations as input and may erase or modify them, or they may produce new ones (as does the AND-gate of a motion detector when it gets the proper pair of inputs). Just as representations are defined by their function, so are inferential procedures. What makes a procedure inferential is that it has the function of making more information available for processing, or of making information that is already available more reliable. An inferential procedure may, for instance, erase a representation when new evidence implies it was a misrepresentation; it may modify a representation in order to correct or update it; it may produce new representations that follow from other representations already available; it may increase or decrease the cognitive system’s reliance on a representation. A successful inferential procedure results in richer or more reliable relevant information.
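  As a schematic illustration of these operations, consider the following sketch (ours, not the authors’; the Representation class and the evidence format are invented for the example):

```python
# A schematic sketch (ours, not the book's) of what an inferential
# procedure can do to representations: erase one shown to be a
# misrepresentation and produce a new one in its place.
from dataclasses import dataclass

@dataclass
class Representation:
    content: str     # the state of affairs the representation is about
    reliance: float  # how strongly the system currently relies on it

def revise(store: list, contradicted: str, supported: str) -> list:
    """Erase representations that new evidence contradicts, and
    produce a new representation supported by that evidence."""
    store = [r for r in store if r.content != contradicted]
    store.append(Representation(content=supported, reliance=0.8))
    return store

store = [Representation("the box is supported", reliance=0.9)]
store = revise(store, contradicted="the box is supported",
               supported="the box is unsupported")
print(store)  # the old representation erased, a new one produced
```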

  Cognitive procedures are implemented in mental modules (in the way programs can be implemented in computers or apps in smartphones). Very simple modules, like reflexes, may implement a single procedure, whereas more complex modules may implement and combine several of them (and modules with submodules may articulate many procedures).

  A mental module implements and uses one or several procedures (just as an electronic device implements and uses programs or a smartphone implements and uses apps). A module, through its connections with other modules, feeds its procedures with the kind of input they are equipped to process. In order to process their input, procedures may need to have access to some special data. The procedures used by a reading module, for instance, need information about the shape of letters. Such data are made available to the procedures by the module, which may store them in a proprietary database or request them from other modules. Modules make their output available to other modules to which they are connected.9

  In the brain of the desert ant, for example, the odometer and the compass feed their output to an integrative module that computes and updates a representation of the direction and the distance at which the ant’s nest is located, a representation that in turn is used by a motor control module to direct the ant’s movements on its way back.
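  The integrative step lends itself to a toy sketch (ours; real ant navigation is of course far more intricate). Assuming the odometer reports step lengths and the compass reports headings, the home vector is just a running sum:

```python
# A toy sketch (ours) of path integration as described above: an
# "odometer" supplies leg lengths, a "compass" supplies headings, and
# an integrative module keeps a running vector back to the nest.
import math

x, y = 0.0, 0.0  # running position relative to the nest

# Each (distance, heading) pair stands for one leg of the outbound trip.
outbound_legs = [(3.0, 0.0), (4.0, math.pi / 2)]  # 3 m east, then 4 m north

for distance, heading in outbound_legs:
    x += distance * math.cos(heading)
    y += distance * math.sin(heading)

# The representation handed to the motor-control module: how far the
# nest is and in which direction to head back.
distance_home = math.hypot(x, y)     # 5.0 m
direction_home = math.atan2(-y, -x)  # heading pointing back to the nest
print(distance_home, math.degrees(direction_home))
```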

  Several modules may process the same inputs but submit them to different procedures. A main benefit of having a modular system with many modules working mostly in parallel is to simultaneously achieve a plurality of outcomes. This, after all, is the kind of inferential ability an animal would need to monitor its complex environment and to detect in time different threats and opportunities.

  In the history of philosophy and psychology, the focus has been on conscious reasoning and the explicit procedures that it uses sequentially, in the slow and concentrated manner of a scholar—picture a gentleman of leisure with scholarly interests, living a well-ordered life, having entrusted the chores and vicissitudes of daily life to servants and womenfolk. When, starting with Ibn Al-Haytham, scholars paid attention to the mechanisms of unconscious inference as it occurs, for instance, in visual perception, they generally assumed that the procedures involved were identical or quite similar to those involved in their own conscious reasoning and that they operated on statements or statement-like representations. This, however, is neither a necessary truth nor an empirically well-supported hypothesis. What had been a daring conjecture in the work of Ibn Al-Haytham has become an old dogma.

  Beyond the Dogma

  For a long time, the dogma that all inference, conscious or unconscious, uses the same Aristotelian logical procedures and applies them to statement-like representations profited from a lack of alternatives. How else could inference proceed? Until recently, this would have been a mere rhetorical question, but not anymore. The dogma has been undermined both by formal and by empirical research.

  On the formal side, the progressive emergence of the theory of probabilities since the seventeenth century as well as the growth and diversification of modern logic since the nineteenth century have rendered the Aristotelian model of inference obsolete. The effect of these formal developments, however, has hardly been to question the idea that inference must be based on the same small repertoire of general procedures across domains; it has been rather to open a debate on what these general procedures might be.

  On the empirical side, work on cognition, its evolution, its diversity across species, its development in children, and its implementation in the brain, as well as advances in artificial intelligence and mathematical modeling of cognitive and brain processes, has demonstrated that inference can proceed in many different ways. A great variety of procedures may be involved, many of them specialized in extracting information from one specific empirical domain or in performing just one specific type of inferential task. Some of these procedures have little in common over and above their being inferential. Whatever their differences, they are all procedures that find in the information already available a basis to revise or expand it.

  There may well exist important commonalities across some procedures. Transitive inference (of the type “A is bigger than B; B is bigger than C; therefore A is bigger than C”) is, for instance, relevant in a variety of domains, from the physical to the social. It is quite plausible also that a great many inferential procedures are in the same business of updating probabilities of future events—making and revising probabilistic predictions, if you prefer—while doing so each in a way fine-tuned to the regularities of its specific domain.10
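  A worked micro-example of such probabilistic updating (our invented numbers, not the authors’): suppose a module predicts rain and then receives evidence that the sky has darkened.

```python
# A minimal worked sketch (our invented numbers) of revising a
# probabilistic prediction in the Bayesian style mentioned above.
prior = 0.2             # P(rain tomorrow) before the new evidence
p_clouds_if_rain = 0.9  # P(dark clouds | rain)
p_clouds_if_dry = 0.3   # P(dark clouds | no rain)

# Bayes' rule: P(rain | clouds) = P(clouds | rain) * P(rain) / P(clouds)
p_clouds = p_clouds_if_rain * prior + p_clouds_if_dry * (1 - prior)
posterior = p_clouds_if_rain * prior / p_clouds
print(round(posterior, 3))  # 0.429: the prediction is revised upward
```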

 
