
What Intelligence Tests Miss

by Keith E. Stanovich


  Organisms have evolved to increase the reproductive fitness of genes, not to increase the rationality of humans. Increases in fitness do not always entail increases in rationality. Take, for example, the domain of beliefs. Beliefs need not always track the world with maximum accuracy in order for fitness to increase (see the epigraph from Nassim Nicholas Taleb that introduces this chapter). Thus, evolution does not guarantee perfect epistemic rationality. For example, evolution might fail to select out epistemic mechanisms of high accuracy when they are costly in terms of organismic resources (for example, in terms of memory, energy, or attention). An additional reason that belief-forming mechanisms might not be maximally truth preserving is that “a very cautious, risk-aversive inferential strategy—one that leaps to the conclusion that danger is present on very slight evidence—will typically lead to false beliefs more often, and true ones less often, than a less hair-trigger one that waits for more evidence before rendering a judgment. Nonetheless, the unreliable, error-prone, risk-aversive strategy may well be favored by natural selection. For natural selection does not care about truth; it cares only about reproductive success” (Stich, 1990, p. 62).

  It is likewise in the domain of goals and desires. Evolution did not act to maximize the happiness of human beings. As recent research on affective forecasting has made clear, people are remarkably bad at making choices that will make them happy.5 This should be no surprise. The reason we have pleasure circuits in our brains is to encourage us to do things (survive and reproduce, help kin) that propagate our genes. The pleasure centers were not designed to maximize the amount of time we are happy.

  The instrumental rationality of humans is not guaranteed by evolution for two further reasons. First, many genetic goals that have been lodged in our brains no longer serve our ends because the environment has changed. For example, thousands of years ago, humans needed as much fat as they could get in order to survive. More fat meant longer survival, and because few humans survived beyond their reproductive years, longevity translated directly into more opportunities for gene replication. In short, our mechanisms for storing and utilizing energy evolved in times when fat preservation was efficacious. These mechanisms no longer serve the goals of people in our modern technological society where there is a McDonald’s on practically every corner—the goals underlying these mechanisms have become detached from their evolutionary context. Second, the cultural evolution of rational standards is apt to occur at a pace markedly faster than that of human evolution—thus providing ample opportunity for mechanisms of utility maximization to dissociate from local genetic fitness maximization.6 Our evolutionary history does not guarantee that all of our brain defaults are rational.

  As I discussed in Chapter 3, research on multiple-process theories of mind increasingly suggests that some processes in our brains are at war with other processes. Parts of our minds are more oriented toward instrumental rationality—toward fulfilling our goals as people. In contrast, some brain processes are more directly oriented (in a short-leashed manner) to fulfilling ancient genetic goals that might not be current personal goals (many Type 1 processes, for instance). Some of the tendencies of the cognitive miser are evolutionary defaults. They were “good enough” in their day (our environment of evolutionary adaptation of thousands of years ago), but might not be serving us well now when our environments have radically changed.

  Why Dysrationalia Is Widespread

  In short, our brains are naturally lazy. Thus, in ordinary situations—when not specifically cued to avoid minimal information processing (as we are when taking tests, for example)—all people are subject to the irrationalities entailed when one is a cognitive miser. However, there is variation in the use of many of the information processing strategies of the cognitive miser. This means that there will be variation among people in their degree of rationality, as there is for almost any other cognitive/behavioral characteristic. Furthermore, we will see that this variation displays only a weak correlation with intelligence.

  Earlier in this chapter, I said that the human brain is characterized by two broad traits that make it less than rational—one a processing problem and one a content problem. The processing problem is that we are cognitive misers. The content problem comes about because we need to acquire some very specific knowledge structures in order to think and act rationally. When knowledge structures that are needed to sustain rational behavior are not present, I will term this a mindware problem, again following Perkins’s use of this term to refer to the rules, knowledge, procedures, and strategies that a person can retrieve from memory in order to aid decision making and problem solving. In Chapters 10 and 11, I will discuss mindware problems that cause much human irrationality.

  Rational standards for assessing human behavior are social and cultural products that are preserved and stored independently of the genes. The development of probability theory, concepts of empiricism, logic, and scientific thinking throughout the centuries has provided humans with conceptual tools to aid in the formation and revision of beliefs and in reasoning about action. They represent the cultural achievements that foster greater human rationality when they are installed as mindware. As societies evolve, they produce more of the cultural tools of rationality and these tools become more widespread in the population. A college sophomore with introductory statistics under his or her belt, if time-transported to the Europe of a few centuries ago, could become rich “beyond the dreams of avarice” by frequenting the gaming tables (or by becoming involved in insurance or lotteries).

  The tools of rationality—probabilistic thinking, logic, scientific reasoning—represent mindware that is often incompletely learned or not acquired at all. This incomplete learning represents a class of causes of irrationality that I label a “mindware gap.” A different type of mindware problem arises because not all mindware is helpful—either to attaining our goals (instrumental rationality) or to having accurate beliefs (epistemic rationality). In fact, some acquired mindware can be the direct cause of irrational actions that thwart our goals. This type of problem I term “contaminated mindware.”

  Being a cognitive miser is a universal human psychological characteristic—it is typical of everyone’s thinking.7 Likewise, mindware problems of some degree are characteristic of most individuals. In short, all people are cognitive misers and all experience mindware problems. Thus, irrational behavior and thinking will be characteristic of all humans to some extent. Nevertheless, there exists variability in the extent to which people process information as cognitive misers, the extent to which people have mindware gaps, and the extent to which they have been infected by contaminated mindware. None of this variation is explicitly assessed on intelligence tests. Those with higher IQs are only slightly less likely to be cognitive misers or to have mindware problems.8 Statistically, this fact guarantees that dysrationalia will be a widespread phenomenon. To put it another way, if irrationality is common and only mildly correlated with intelligence, then irrational behavior among those of high intelligence should not be rare.

  Thinking Errors and Rational Thought

  Even though this is a book about rationality—the psychology of optimal thinking—several of the following chapters will be focused on the causes of thinking errors. The reason is that rationality is a multifarious concept. It requires the presence of many different types of mindware. It requires the acquisition of various dispositions of the reflective mind, all of which help in avoiding the shortcuts of the autonomous mind when they are nonoptimal. It is hard to measure the optimal functioning of all these components—that is, to specify whether “perfect” rationality has been attained. Researchers have found it much easier to measure whether a particular rational stricture is being violated—that is, whether a person is committing a thinking error—rather than whether his or her thinking is as good as it can be. This is much like our judgments at a sporting event where, for example, it might be difficult to discern whether a quarterback has put the ball perfectly on the money, but it is not difficult at all to detect a bad throw.

  In fact, in many domains of life this is often the case as well. It is often difficult to specify what the best type of performance might be, but performance errors are much easier to spot. Essayist Neil Postman has argued, for instance, that educators and other advocates of good thinking might adopt a stance more similar to that of physicians or attorneys.9 He points out that doctors would find it hard to define “perfect health” but, despite this, they are quite good at spotting disease. Likewise, lawyers are much better at spotting injustice and lack of citizenship than defining “perfect justice” or ideal citizenship. Postman argues that, like physicians and attorneys, educators might best focus on instances of poor thinking, which are much easier to identify, as opposed to trying to define ideal thinking. The literature on the psychology of rationality has followed this logic in that the empirical literature has focused on identifying thinking errors, just as physicians focus on disease.

  The next several chapters take up in turn the multifarious requirements of rationality. To jointly achieve epistemic and instrumental rationality, a person must display judicious decision making, adequate behavioral regulation, wise goal prioritization, sufficient thoughtfulness, and proper evidence calibration. For example, epistemic rationality—beliefs that are properly matched to the world—requires probabilistic reasoning and the ability to calibrate theories to evidence. Instrumental rationality—maximizing goal fulfillment—requires adherence to all of the axioms of rational choice. People fail to fulfill the many different strictures of rational thought because they are cognitive misers, because they lack critical mindware, and because they have acquired contaminated mindware. These errors can be prevented by acquiring the mindware of rational thought and the thinking dispositions that prevent the overuse of the strategies of the cognitive miser.

  SIX

  The Cognitive Miser: Ways to Avoid Thinking

  The rule that human beings seem to follow is to engage the brain only when all else fails—and usually not even then.

  —David Hull, Science and Selection: Essays on Biological Evolution and the Philosophy of Science, 2001

  Consider the following problem, taken from the work of Hector Levesque and studied by my research group. Try to answer before reading on: Jack is looking at Anne but Anne is looking at George. Jack is married but George is not. Is a married person looking at an unmarried person?

  A) Yes B) No C) Cannot be determined

  Answer A, B, or C before you look ahead.

  Over 80 percent of the people who respond to this problem answer incorrectly. The vast majority of people answer C (cannot be determined) when in fact the correct answer is A (yes). The answer is easily revealed once we engage in what in the psychological literature is called fully disjunctive reasoning.1 Fully disjunctive reasoning involves considering all possible states of the world when deciding among options or when choosing a problem solution in a reasoning task. Disjunctive reasoning is slow and systematic and represents the Type 2 processing I have discussed previously.

  To solve the problem, it is necessary to consider both possibilities for Anne’s marital status (married and unmarried) to determine whether a conclusion can be drawn. If Anne is married, then the answer is “Yes” because she would be looking at George, who is unmarried. If Anne is not married, then the answer is still “Yes” because Jack, who is married, would be looking at Anne. Considering all the possibilities (the fully disjunctive reasoning strategy) reveals that a married person is looking at an unmarried person whether Anne is married or not. The fact that the problem does not reveal whether Anne is married or not suggests to people that nothing can be determined. That is the easiest conclusion to draw. Unfortunately, it happens to be an incorrect one. The shallow, Type 1 processing that is characteristic of the cognitive miser—namely, the tendency not to look for information that can be inferred but is not explicitly stated—results in the preponderance of “cannot be determined” responses to this problem. People make the easiest (incorrect) inference from the information given and do not proceed with the more difficult (but correct) inference that follows from fully disjunctive reasoning.
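
  For readers who like to see the logic spelled out mechanically, here is a minimal sketch in Python (purely illustrative; the problem itself was administered as plain text, not as code). It enumerates both possibilities for Anne and checks whether, in every possible world, some married person is looking at some unmarried person:

    # Who is looking at whom: Jack -> Anne, Anne -> George.
    LOOKS_AT = [("Jack", "Anne"), ("Anne", "George")]

    def married_looks_at_unmarried(married):
        """True if some married person is looking at some unmarried person."""
        return any(married[a] and not married[b] for a, b in LOOKS_AT)

    # Fully disjunctive reasoning: enumerate every possibility for the
    # unknown fact (Anne's marital status). Jack is married; George is not.
    outcomes = {
        married_looks_at_unmarried({"Jack": True, "George": False, "Anne": anne})
        for anne in (True, False)
    }

    # The answer is determined only if it is the same in every possible world.
    print("Yes" if outcomes == {True}
          else "No" if outcomes == {False}
          else "Cannot be determined")   # prints: Yes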

  Fully disjunctive reasoning requires subjects to override their tendencies to be cognitive misers; that is, to avoid giving the response that is suggested to them on the basis of the most shallow type of information processing. The truth is that most people can carry out fully disjunctive reasoning when they are explicitly told that it is necessary. But it is also true that most do not automatically do so. We might expect that high-IQ individuals would excel at disjunctive reasoning when they know it is required for successful task performance. But high-IQ people are only slightly more likely to spontaneously adopt this type of processing in situations that do not explicitly require it. Note that the instructions in Levesque’s Anne problem do not cue the subject to engage in fully disjunctive reasoning. My research group found that people of high intelligence were no more likely to solve the Anne problem and similar problems than were people of lower intelligence. If told to reason through all of the alternatives, the subjects of higher intelligence would have done so more efficiently. However, without that instruction, they defaulted to computationally simple cognition when solving problems—they were cognitive misers like everyone else. Intelligence and the tendency toward spontaneous disjunctive reasoning can be quite unrelated.

  We often do not realize that we are failing to think fully disjunctively (failing to think through all the possibilities) because the Type 1 processing takes place so rapidly. Daniel Kahneman and colleague Shane Frederick described a simple experiment in which people were asked to consider the following puzzle:2

  A bat and a ball cost $1.10 in total. The bat costs $1 more than the ball. How much does the ball cost?

  Many people emit the response that first comes to mind—10¢—without thinking further and realizing that this cannot be right. The bat would then have to cost $1.10 and the total cost would then be $1.20 rather than the required $1.10. People often do not think deeply enough to make this simple correction, though, and many students at very selective universities will answer incorrectly and move on to the next problem before realizing that their shallow processing has led them to make an error. They will not realize that they have failed to trump Type 1 thinking with Type 2 thinking. Frederick found that large numbers of brilliant students at MIT, Princeton, and Harvard were cognitive misers like the rest of us when given this and other similar problems.
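
  The Type 2 correction is one line of algebra: if the ball costs x, the bat costs x + $1.00, so x + (x + 1.00) = 1.10, which gives x = 5¢. A few illustrative lines of code perform the same check that the intuitive answer fails:

    def prices_work(ball):
        """Check both constraints: bat costs $1 more, total is $1.10."""
        bat = ball + 1.00
        return abs((bat + ball) - 1.10) < 1e-9

    print(prices_work(0.10))   # False -- the miser's answer totals $1.20
    print(prices_work(0.05))   # True  -- 5 cents satisfies both constraints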

  Attribute Substitution: The Generic Trick of the Cognitive Miser

  Kahneman and Frederick describe a trick that we cognitive misers use all the time in order to lighten our cognitive load. The trick is called attribute substitution, and it occurs when a person needs to assess attribute A but finds that assessing attribute B (which is correlated with A) is easier cognitively and so uses B instead. In simpler terms, attribute substitution amounts to substituting an easier question for a harder one.

  Many times there is no problem with attribute substitution as a cognitive strategy. If two different strategies will get you into the same ballpark, why not use the simpler one and avoid having to think so hard? Even if the substituted attribute is not quite as good a cue, it might get you so close to the right answer that it is not worth switching to the computationally more expensive attribute A. However, in certain situations in real life, overgeneralizing the attribute-substitution strategy can lead us seriously astray.

  One rather drastic mistake that people can make is to violate a dominance relationship. The latter is a technical term in decision theory, but what it is and why it is bad are easy to understand. Suppose you turn down my offer to give you $100 for successfully picking a spade or a heart out of a deck of cards on the first try and instead accept someone else’s offer to give you $100 if you draw a heart. By spurning my offer and accepting the other, you have—beyond dispute—made a very, very bad decision. You have made a bad decision because you have violated a dominance relationship. My offer dominates the other offer because any draw that wins the other offer wins mine too, but there are additional ways to win mine.
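
  The arithmetic behind the dominance is straightforward. Assuming a single draw from a standard 52-card deck (an illustrative sketch, not part of the original example):

    from fractions import Fraction

    DECK, HEARTS, SPADES = 52, 13, 13

    p_accepted = Fraction(HEARTS, DECK)           # heart only: 1/4
    p_spurned = Fraction(HEARTS + SPADES, DECK)   # heart or spade: 1/2

    # Every draw that wins the narrow offer also wins the broad one, so
    # the broad offer's winning set contains the narrow offer's: dominance.
    assert p_spurned > p_accepted
    print(p_accepted, p_spurned)   # prints: 1/4 1/2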

  Dominance relationships occur when one set of outcomes contains the other. Violations of the dominance principle occur when people judge the probability or value of the smaller set of outcomes to be higher than the larger set. Kahneman and Frederick provide a number of examples of how attribute substitution can lead people to violate dominance relationships. Here is one of the simplest examples. One group of subjects was asked to estimate the number of murders that occurred in Michigan during a particular year. This is a tough task, and people cannot retrieve this information from memory. However, to complete the task, they must retrieve relevant facts (the population of the state, what they have heard about the crime there, and other cues) that they can then put together to come up with an estimate. That people were not working too hard in coming up with information with which to derive an estimate (that they were cognitive misers) is suggested by the fact that another group of subjects who were asked to estimate the number of murders in Detroit in a year came up with an estimate that was twice as large as the Michigan group’s!

  This is a dominance violation, of course (all Detroit murders are also in Michigan), and the reason for it is clear. People are not working very hard to retrieve relevant information at all—they are using crude affect-laden images of the localities in question to generate a high or low number. Because the image of Detroit is associated with more affect-laden murder imagery than is the image of Michigan, the former as a stimulus generates a higher murder number even though on a logical or empirical basis this could not be the case. For similar reasons, forecasters assigned a higher probability to “an earthquake in California causing a flood in which more than 1,000 people will drown” than to “a flood somewhere in the United States in which more than 1,000 people will drown.” Of course, an image of a California earthquake is very accessible, and its ease of accessibility affects the probability judgment.3
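
  In formal terms, the forecasters violated the conjunction rule of probability. Because every flood caused by a California earthquake is also a flood somewhere in the United States, the first event is contained in the second, and for any two events A and B,

    P(A and B) ≤ P(B).

  No coherent assignment of probabilities can reverse this inequality, however vivid the image of A may be.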

 
