The model in Figure 12.1 defines a third critical function for the algorithmic mind in addition to Type 1 processing override and enabling simulation via decoupling. The third is a function that in the figure is termed serial associative cognition (arrow labeled E). This function is there to remind us that not all Type 2 processing involves strongly decoupled cognitive simulation. There are types of slow, serial cognition that do not involve simulating alternative worlds and exploring them exhaustively.
Recall that the category of Type 1 processes is composed of affective responses, previously learned responses that have been practiced to automaticity, conditioned responses, and adaptive modules that have been shaped by our evolutionary history. These cover many situations indeed, but modern life still creates many problems for which none of these mechanisms is suited. Consider Peter Wason’s four-card selection task discussed previously:
Each of the cards has a letter on one side and a number on the other side. Here is a rule: If a card has a vowel on its letter side, then it has an even number on its number side. Two of the cards are letter-side up, and two of the cards are number-side up. Your task is to decide which card or cards must be turned over in order to find out whether the rule is true or false. Indicate which cards must be turned over. The four cards confronting the subject have the stimuli K, A, 8, and 5 showing.
The correct answer is A and 5 (the only two cards that could show the rule to be false), but the majority of subjects answer, incorrectly, A and 8. However, studies have had subjects think aloud while solving the problem. When these think-aloud protocols are analyzed, it appears that most subjects engage in some slow, serial processing, but of a type that is simply incomplete. A typical protocol might go something like this: “Well, let’s see, I’d turn the A to see if there is an even number on the back. Then I’d turn the 8 to make sure a vowel is on the back.” Then the subject stops.
Several things are apparent here. First, it makes sense that subjects are engaging in some kind of Type 2 processing. Most Type 1 processes would be of no help on this problem. Affective processing is not engaged, so processes of emotional regulation are no help. Unless the subject is a logic major, there are no highly practiced procedures that have become automatized that would be of any help. Finally, the problem is evolutionarily unprecedented, so there will be no Darwinian modules that would be helpful.
The subject is left to rely on Type 2 processing, but I would argue that this processing is seriously incomplete in the example I have given. The subject has relied on serial associative cognition rather than exhaustive simulation of an alternative world—a world that includes situations in which the rule is false. The subject has not constructed the false case—a vowel with an odd number on the back. Nor has the subject gone systematically through the cards asking whether each one could be a vowel/odd combination. Answer: K (no), A (yes), 8 (no), 5 (yes). Such a procedure yields the correct choice of A and 5. Instead, the subject with this protocol started from the model given—the rule as true—and simply worked through the implications of what would be expected if the rule were true. A fully simulated world with all the possibilities—including the possibility of a false rule—was never constructed. The subject starts with the focal rule as given and then just generates associates that follow from it. Hence my term for this type of processing: serial associative cognition.
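To make the contrast concrete, here is a minimal sketch of the fully disjunctive procedure just described (my illustration, not from the original text): for each card, ask whether its hidden side could yield the falsifying vowel/odd combination.

```python
# Illustrative sketch (not from the book): an exhaustive check of which
# cards could falsify "if vowel on one side, then even number on the other".
VOWELS = set("AEIOU")

def could_falsify(card: str) -> bool:
    """A card must be turned over iff it could hide a vowel/odd pair."""
    if card.isalpha():
        # Letter showing: only a vowel could have an odd number hidden.
        return card in VOWELS
    # Number showing: only an odd number could have a vowel hidden.
    return int(card) % 2 == 1

cards = ["K", "A", "8", "5"]
print([c for c in cards if could_falsify(c)])  # prints ['A', '5']
```

Running this check over all four cards, rather than chaining associations from the rule-as-true, is exactly what the incomplete protocol above fails to do.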
Thus, it is correct to argue that Type 2 processing is occurring in this task, but it is not full-blown cognitive simulation of alternative world models. It is thinking of a shallower type—cognition that is inflexibly locked into an associative mode that takes as its starting point a model of the world that is given to the subject. In the selection task, subjects accept the rule as given, assume it is true, and simply describe how they would go about verifying it. They then reason from this single focal model—systematically generating associations from this focal model but never constructing another model of the situation. This is what I would term serial associative cognition with a focal bias.
One way in which to characterize serial associative cognition with a focal bias is as a second-stage strategy of the cognitive miser. Traditional dual-process theory has heretofore highlighted only Rule 1 of the Cognitive Miser: default to Type 1 processing whenever possible. But defaulting to Type 1 processing is not always possible—particularly in novel situations where there are no stimuli available to domain-specific evolutionary modules, nor perhaps any information with which to run overlearned and well-compiled procedures that have been acquired through practice. Type 2 processing will be necessary, but a cognitive miser default is operating even there. Rule 2 of the Cognitive Miser is: when Type 2 processing is necessary, default to serial associative cognition with a focal bias (not fully decoupled cognitive simulation).
My notion of focal bias conjoins several current ideas in cognitive science under the overarching theme they all have in common—that humans will find any way they can to ease the cognitive load and process less information.3 Focal bias is the basic idea that the information processor is strongly disposed to deal only with the most easily constructed cognitive model. The most easily constructed model tends to represent only one state of affairs; it accepts what is directly presented and models what is presented as true; it ignores moderating factors—probably because taking account of those factors would necessitate modeling several alternative worlds, which is just what focal processing allows us to avoid; and finally, given the voluminous literature in cognitive science on belief bias and the informal reasoning literature on myside bias, the easiest models to represent are those closest to what a person already believes and has modeled previously.
With this discussion of serial associative cognition, we can now return to Figure 12.1 and identify a third function of the reflective mind—initiating an interrupt of serial associative cognition (arrow F). This interrupt signal alters the next step in a serial associative sequence that would otherwise direct thought. This interrupt signal might have a variety of outcomes. It might stop serial associative cognition altogether in order to initiate a comprehensive simulation (arrow C). Alternatively, it might start a new serial associative chain (arrow E) from a different starting point by altering the temporary focal model that is the source of a new associative chain. Finally, the algorithmic mind often receives inputs from the computations of the autonomous mind via so-called preattentive processes (arrow G).4
A Preliminary Taxonomy of Rational Thinking Problems
With a more complete generic model of the mind in place, in Figure 12.2 I present an initial attempt at a taxonomy of rational thinking problems. At the top of the figure are three characteristics of the cognitive miser listed in order of relative cognitive engagement. The characteristic presented first is defaulting to the response options primed by the autonomous mind. It represents the shallowest kind of processing because no Type 2 processing is done at all. The second type of processing tendency of the cognitive miser is to engage in serial associative cognition with a focal bias. This characteristic represents a tendency to over-economize during Type 2 processing—specifically, to fail to engage in the full-blown simulation of alternative worlds or to engage in fully disjunctive reasoning (see Chapter 6).
Figure 12.2. A Basic Taxonomy of Thinking Errors
The third category is that of override failure, which represents the least miserly tendency because, here, Type 2 cognitive decoupling is engaged. Inhibitory Type 2 processes try to take the Type 1 processing of the autonomous mind offline in these cases, but they fail. So in override failure, cognitive decoupling does take place, but it fails to suppress the Type 1 processing of the autonomous mind.
In Figure 12.2 mindware problems are divided into mindware gaps and contaminated mindware. In the category of mindware gaps, the curved rectangles in the figure are meant to represent missing knowledge bases. I have not represented an exhaustive set of knowledge partitionings—to the contrary, the figure shows only a minimal sampling of a potentially large set of coherent knowledge bases in the domains of probabilistic reasoning, causal reasoning, logic, and scientific thinking, the absence of which could result in irrational thought or behavior. The two I have represented are mindware categories that have been implicated in research in the heuristics and biases tradition: missing knowledge about probability and probabilistic reasoning strategies, and ignoring alternative hypotheses when evaluating hypotheses. These are just two of the many mindware gaps that have been suggested in the literature on behavioral decision making; the box labeled “Many Domain-Specific Knowledge Structures” indicates the others.
Finally, at the bottom of the figure is the category of contaminated mindware. Again, the curved rectangles represent problematic knowledge and strategies. They do not represent an exhaustive partitioning (the mindware-related categories are too diverse for that), but instead indicate some of the mechanisms that have received some discussion in the literature. First is a subcategory of contaminated mindware that is much discussed—mindware that contains evaluation-disabling properties. Some of the evaluation-disabling properties that help keep some mindware lodged in a host are: the promise of punishment if the mindware is questioned; the promise of rewards for unquestioning faith in the mindware; or the thwarting of evaluation attempts by rendering the mindware unfalsifiable.
The second subcategory of contaminated mindware that has been discussed by several theorists is a concept of “self” that serves to encourage egocentric thinking.5 The self, according to these theorists, is a mechanism that fosters one characteristic of focal bias: the tendency to build models of the world from a single myside perspective. The egocentrism of the self was of course evolutionarily adaptive. Nonetheless, it is sometimes nonoptimal in an environment different from the environment of evolutionary adaptation, because myside processing makes it difficult to meet such modern demands as unbiasedness, sanctions against nepotism, and the discouragement of familial, racial, and religious discrimination. Finally, the last subcategory of contaminated mindware pictured in the figure is meant to represent what is actually a whole set of categories: mindware representing specific categories of information or maladaptive memeplexes. As with the mindware gap category, there may be a large number of instances of misinformation-filled mindware that would support irrational thought and behavior.6
Lay psychological theory is represented as both contaminated mindware and a mindware gap in Figure 12.2. Lay psychological theories are the theories that people have about their own minds. Mindware gaps are the many things about our own minds that we do not know; for example, how quickly we will adapt to both fortunate and unfortunate events. Other things we think we know about our own minds are wrong. These misconceptions represent contaminated mindware. An example would be the folk belief that we accurately know our own minds. This contaminated mindware accounts for the incorrect belief that we always know the causes of our own actions, and for our tendency to think that although others display myside and other thinking biases, we ourselves have special immunity from the very same biases.7
Finally, note the curved, double-headed arrow in this figure indicating an important relationship between the override failure category and the mindware gap category. In a case of override failure, an attempt must be made to trump a response primed by the autonomous mind with alternative conflicting information or a learned rule. For an error to be classified as an override failure, one must have previously learned the alternative information or an alternative rule different from the Type 1 response. If, in fact, the relevant mindware is not available because it has not been learned (or at least not learned to the requisite level to sustain override) then we have a case of a mindware gap rather than override failure.
Note one interesting implication of the relation between override failure and mindware gaps—the fewer gaps one has, the more likely it is that an error is attributable to override failure. Errors made by someone with considerable mindware installed are more likely to be due to override failure than to mindware gaps. Of course, the two categories trade off in a continuous manner with a fuzzy boundary between them. A well-learned rule not appropriately applied is a case of override failure. As the rule is less and less well instantiated, at some point it is so poorly compiled that it is not a candidate to override the Type 1 response, and the processing error then becomes a mindware gap. Consider the example of the John F. Kennedy Jr. aircraft crash presented at the opening of Chapter 9. Presumably, Kennedy knew the rules of night flying but failed to use them to override natural physiological and motor responses in an emergency. We thus classify his actions as an override failure. Had Kennedy not known the night flying rules at all, his error would be classified not as an override failure but as a mindware gap.
In Table 12.1 I have classified many of the processing styles and thinking errors discussed in the book so far in terms of the taxonomy in Figure 12.2.8 For example, the three Xs in the first column signify defaults to the autonomous mind: vividness effects, affect substitution, and impulsively associative thinking. Recall that defaulting to the most vivid stimulus is a common way that the cognitive miser avoids Type 2 processing. Likewise, defaulting to affective valence is common in situations with emotional salience. And affect substitution is a specific form of a more generic trick of the cognitive miser, attribute substitution—substituting an easier question for a harder one.9 Recall from Chapter 6 the bat and ball problem (A bat and a ball cost $1.10 in total. The bat costs $1 more than the ball. How much does the ball cost?) and the Levesque problem (“Jack is looking at Anne but Anne is looking at George”). Failure on problems of this type is an example of the miserly tendency termed impulsively associative thinking. Here, subjects look for any simple association that will prevent them from having to engage in Type 2 thought (in this case, associating Anne’s unknown status with the response “cannot be determined”).
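Both problems yield to a small amount of explicit checking. The sketch below is my own illustration, not from the book: it verifies the bat-and-ball arithmetic and, for the Levesque problem, enumerates Anne's two possible marital statuses under the standard formulation (Jack is married, George is not, and the question is whether a married person is looking at an unmarried person).

```python
# Illustrative sketch (not from the book): explicit Type 2 checking
# defeats the impulsive associative answers to both problems.

# Bat and ball: bat + ball = 1.10 and bat = ball + 1.00,
# so 2 * ball + 1.00 = 1.10 and ball = 0.05 (not the intuitive 0.10).
ball = (1.10 - 1.00) / 2
bat = ball + 1.00
print(f"ball = ${ball:.2f}, bat = ${bat:.2f}")  # ball = $0.05, bat = $1.05

# Levesque problem (standard formulation assumed): Jack (married) looks
# at Anne (status unknown); Anne looks at George (unmarried).
for anne_married in (True, False):
    married = {"Jack": True, "Anne": anne_married, "George": False}
    looks_at = [("Jack", "Anne"), ("Anne", "George")]
    found = any(married[a] and not married[b] for a, b in looks_at)
    print(f"Anne married = {anne_married}: {found}")  # True in both cases
```

In both branches of the disjunction a married person is looking at an unmarried person, so the correct answer is "yes" rather than "cannot be determined"; working through both cases is precisely the fully disjunctive reasoning that impulsively associative thinking short-circuits.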
The second category of thinking error presented in Table 12.1 is over-reliance on serial associative cognition with a focal bias (a bias toward the most easily constructed model). This error often occurs in novel situations where some Type 2 processing will be necessary. Framing effects are the example here (“the basic principle of framing is the passive acceptance of the formulation given”: Kahneman, 2003a, p. 703). The frame presented to the subject is taken as focal, and all subsequent thought derives from it rather than from alternative framings because the latter would require more thought.
Table 12.1. A Basic Taxonomy of Thinking Errors
Pure override failure—the third category of thinking errors presented in Table 12.1—is illustrated by the three effects that were discussed in Chapter 9: belief bias effects (“roses are living things”), denominator neglect (the Epstein jelly bean task), and self-control problems such as the inability to delay gratification. It is also involved in the failure of moral judgment override such as that displayed in the trolley problem.
Table 12.1 also portrays two examples of mindware gaps that are due to missing probability knowledge: conjunction errors and noncausal base-rate usage. Listed next is the bias blind spot—the fact that people view other people as more biased than themselves. The bias blind spot is thought to arise because people have incorrect lay psychological theories. They think, incorrectly, that biased thinking on their part would be detectable by conscious introspection. In fact, most social and cognitive biases operate unconsciously.
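One of the mindware gaps just mentioned, noncausal base-rate usage, can be made concrete with a worked Bayes calculation. The numbers below are hypothetical and my own, not from the book: a test with an 80 percent hit rate and a 10 percent false-alarm rate applied to a condition with a 2 percent base rate.

```python
# Hypothetical numbers (mine, not from the book): why ignoring the
# base rate leads to a large overestimate of the posterior probability.
base_rate = 0.02       # P(condition)
hit_rate = 0.80        # P(positive | condition)
false_alarm = 0.10     # P(positive | no condition)

# Bayes' theorem: P(condition | positive)
p_positive = hit_rate * base_rate + false_alarm * (1 - base_rate)
posterior = hit_rate * base_rate / p_positive
print(f"P(condition | positive) = {posterior:.2f}")  # about 0.14, not 0.80
```

A subject who neglects the 2 percent base rate gravitates toward the 80 percent figure, while the Bayesian answer is only about 14 percent.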
Multiply Determined Problems of Rational Thought
Several of the remaining tasks illustrated in Table 12.1 represent irrational thought problems that are hybrids. That is, they are co-determined by several different cognitive difficulties. For example, I speculate that problems with the Wason four-card selection task are multiply determined. It is possible that people have trouble with that task because they have not well instantiated the mindware of alternative thinking—the learned rule that thinking of the false situation, or thinking about a hypothesis other than the one you have, might be useful. Alternatively, people might have trouble with the task because of a focal bias: they focus on the single model given in the rule (vowel must have even) and do all of their reasoning from only this assumption without fleshing out other possibilities. Table 12.1 represents both of these possibilities.
Another thinking error with multiple determinants is myside processing, which is no doubt fostered by contaminated mindware (our notion of “self” that makes us egocentrically think that the world revolves around ourselves). But a form of focal bias may be contributing to that error as well—the bias to base processing on the mental model that is the easiest to construct. What easier model is there to construct than a model based on our own previous beliefs and experiences? Such a focal bias is different from the egocentric mindware of the self. The focal bias is not egocentric in the motivational sense that we want to build our self-esteem or sense of self-worth. The focal bias is simply concerned with conserving computational capacity, and it does so in most cases by encouraging reliance on a model from a myside perspective. Both motivationally driven “self” mindware and computationally driven focal biases may be contributing to myside processing, making it another multiply determined bias.
Errors in affective forecasting are likewise multiply determined. Affective forecasting refers to our ability to predict what will make us happy in the future. Research in the last decade has indicated that people are surprisingly poor at affective forecasting.10 We often make choices that reduce our happiness because we find it hard to predict what will make us happy. People underestimate how quickly they will adapt to both fortunate and unfortunate events. One reason that people overestimate how unhappy they will be after a negative event is that they have something missing from their lay psychological theories (the personal theories they use to explain their own behavior). They fail to take into account the rationalization and emotion-dampening protective thought they will engage in after the negative event (“I really didn’t want the job anyway,” “colleagues told me he was biased against older employees”). People’s lay theories of their own psychology do not give enough weight to these factors and thus they fail to predict how much their own psychological mechanisms will damp down any unhappiness about the negative event.