Another, and even more important, source of affective forecasting errors is focal bias. Researchers in the affective forecasting literature have theorized specifically about focalism interfering with hedonic predictions. For example, a sports fan overestimates how happy the victory of the home team will make him two days after the event. When making the prediction, he fixates on the salient focal event—winning the game—simulates the emotion he will feel in response to that event, and projects the same emotion two days into the future. What does not enter into his model—because such models are not easy to construct in imagination, and hence too effortful for the cognitive miser—are the myriad other events that will be happening two days after the game and that will then impinge on his happiness in various ways (most of them far less happiness-inducing than winning the game). In a much-cited study, David Schkade and Daniel Kahneman found that subjects from Michigan and California were about equal in life satisfaction. However, when predicting the satisfaction of residents of the other state, both Michigan and California subjects thought that Californians would be more satisfied with life. The comparative judgment made focal an aspect of life, the weather, that in fact was not one of the most important dimensions of life satisfaction (job prospects, financial considerations, social life, and five other factors ranked higher). As Schkade and Kahneman put it, ‘Nothing that you focus on will make as much difference as you think’ (1998, p. 345). Thus, as Table 12.1 indicates, errors in affective forecasting are a complex mix of focal bias and gaps in lay psychological theories.
In the remainder of this chapter, I will discuss the linkage between each of the major categories of rational thinking error in Figure 12.2 and intelligence. However, before I do, I need to define a sixth category of irrational thinking, one whose characteristics I did not discuss in earlier chapters because it is not a fully cognitive category. I include this category here because it completes the taxonomy of the sources of irrational thought and action.
The Mr. Spock Problem: Missing Input from the Autonomous Mind
In his book Descartes’ Error, neurologist Antonio Damasio describes one of his most famous patients, Elliot. Elliot had had a successful job in a business firm, and served as a role model for younger colleagues. He had a good marriage and was a good father. Elliot’s life was a total success story until one day, Damasio tells us, it began to unravel. Elliot began to have headaches and he lost his focus at work. It was discovered that the headaches had been caused by a brain tumor, which was then surgically removed. Subsequent to the surgery, it was determined that Elliot had sustained substantial damage to the ventromedial area of the prefrontal cortex.
That was the bad news. The good news was that on an intelligence test given subsequent to the surgery, Elliot scored in the superior range. Further good news came from many other neuropsychological tests on which Elliot scored at least in the normal range. In short, there were numerous indications that Elliot’s algorithmic mind was functioning fine. There was just one little problem here—one little remaining piece of bad news: Elliot’s life was a mess.
At work subsequent to the surgery Elliot was unable to allocate his time efficiently. He could not prioritize his tasks and received numerous admonitions from supervisors. When he failed to change his work behavior in the face of this feedback, he was fired. Elliot then charged into a variety of business ventures, all of which failed. One of these ventures ended in bankruptcy because Elliot had invested all of his savings in it. His wife divorced him. After this, he had a brief relationship with an inappropriate woman, married her quickly, and then, just as quickly, divorced her. Elliot had just been denied social security disability benefits when he landed in Dr. Damasio’s office.
Damasio described why it took so long and so much testing to reveal the nature of Elliot’s problem: “I realized I had been overly concerned with the state of Elliot’s intelligence” (p. 44). It was in the realm of emotion rather than intelligence that Elliot was lacking: “He had the requisite knowledge, attention, and memory; his language was flawless; he could perform calculations; he could tackle the logic of an abstract problem. There was only one significant accompaniment of his decision-making failure: a marked alteration of the ability to experience feelings” (p. xii). Elliot was a relatively pure case of what we will call here the Mr. Spock problem, naming it after the Star Trek character depicted as having attenuated emotions. Elliot had a problem in decision making because of a lack of regulatory signals from emotion modules in the autonomous mind. Because Elliot was an individual of high intelligence, his lack of rationality represents a type of dysrationalia, but different from any of the categories we have considered before.
Antoine Bechara, Damasio, and colleagues developed a laboratory marker for the type of problem that Damasio had observed in Elliot—the Iowa Gambling Task.11 The task mirrors real-life situations where ventromedial prefrontal damage patients like Elliot have difficulty because it requires real-time decision making, involves rewards and punishments, is full of uncertainty, and requires estimates of probabilities in a situation where precise calculation is not possible.
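For readers who want to see the logic of the task laid bare, its structure can be sketched in a few lines of code. The payoff values below are illustrative approximations of the published schedule, not the exact figures: the "bad" decks pay large immediate rewards but carry penalties that make them net losers, while the "good" decks pay modestly but come out ahead in the long run.

```python
import random

# Toy sketch of the Iowa Gambling Task. Payoff values are illustrative:
# decks A and B tempt with $100 per card but are net-losing;
# decks C and D pay only $50 per card but are net-winning.
DECKS = {
    "A": {"win": 100, "loss": 250,  "p_loss": 0.5},  # bad: frequent penalties
    "B": {"win": 100, "loss": 1250, "p_loss": 0.1},  # bad: rare huge penalty
    "C": {"win": 50,  "loss": 50,   "p_loss": 0.5},  # good: frequent small penalties
    "D": {"win": 50,  "loss": 250,  "p_loss": 0.1},  # good: rare moderate penalty
}

def draw(deck_name, rng=random):
    """Net payoff of a single card drawn from the named deck."""
    d = DECKS[deck_name]
    loss = d["loss"] if rng.random() < d["p_loss"] else 0
    return d["win"] - loss

def expected_value(deck_name):
    """Long-run value per card: reward minus expected penalty."""
    d = DECKS[deck_name]
    return d["win"] - d["p_loss"] * d["loss"]

# The flashy decks A and B lose about $25 per card in the long run;
# the modest decks C and D gain about $25 per card.
for name in "ABCD":
    print(name, expected_value(name))
```

The design choice mirrors what makes the task diagnostic: no single draw reveals which decks are good, so the probabilities must be estimated from experience rather than calculated, and the immediate rewards pull against the long-run expected values.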
Damasio argued that individuals with ventromedial prefrontal damage seem to lack emotional systems that mark positive and negative outcomes with evaluative valence and that regenerate these valences the next time a similar situation arises. The key insight here is that there are two ways in which the rational regulation involving the autonomous mind can go wrong. The override failures discussed previously are one way. In these situations, the signals shaping behavior from the autonomous mind are too pervasive and are not trumped by Type 2 processing. The second way that behavioral regulation involving the autonomous mind can go awry has the opposite properties. In this case, the automatic and rapid regulation of goals is absent and Type 2 processing is faced with a combinatorial explosion of possibilities because the constraining function of autonomous modules such as emotions is missing. Behavioral regulation is not aided by crude but effective autonomous signals that help to prioritize goals for subsequent action. A module failure of this type represents a case where there is not too much regulation from the autonomous mind but instead too little.12
The problem manifest in the case of Elliot, the Mr. Spock problem, represents a relatively pure case of dysrationalia. Does the Mr. Spock problem occur in individuals who have no overt and identified brain damage caused by tumor or sudden insult? There is increasing evidence that the Mr. Spock form of dysrationalia may extend beyond extreme clinical cases such as that of Elliot (with measurable ventromedial prefrontal damage). Several groups of people with problems of behavioral regulation perform poorly on the Iowa Gambling Task despite having near-normal intelligence. For example, it has been found that heroin addicts make more disadvantageous choices in the Iowa Gambling Task than controls of equal intelligence. My own research group examined the performance of a nonclinical sample of adolescents who were experiencing problems of behavioral adjustment (multiple school suspensions) on the Iowa Gambling Task. Like Damasio’s patients, our participants with suspensions did not differ from their controls in general intelligence. Yet the students with multiple suspensions in our study made significantly poorer choices. Other studies of subjects without overt brain damage—pathological gamblers, for example—have also shown subpar performance on the Iowa Gambling Task. Likewise, neuropsychological research has demonstrated a variety of mental disabilities—for example, alexithymia (difficulty in identifying feelings) and schizophrenia—that implicate defects in various types of autonomous monitoring activities that are independent of intelligence.13
The Taxonomy in Terms of Intelligence/Rationality Correlations
With the introduction of the Mr. Spock problem, we can now present a fuller taxonomy of the categories of rational thinking error, and it is illustrated in Figure 12.3. Each of the six categories represents a separate explanation of why human thought and action are sometimes irrational. Each category dissociates from intelligence to some extent and thus is a source of dysrationalia. In this section, I will discuss the empirical evidence and theoretical arguments regarding the extent to which the thinking error represented by each category is dissociated from intelligence.
The Mr. Spock problem represents the most clear-cut category because it is likely to be as prevalent in high-IQ individuals as in low-IQ individuals. The reason is that these problems result from inadequate (or incorrect) input from the autonomous mind (for example, from modules of emotional regulation). Variation in the subprocesses of the autonomous mind is largely independent of intelligence.
The next category (defaulting to the autonomous mind and not engaging at all in Type 2 processing) is the most shallow processing tendency of the cognitive miser. The ability to sustain Type 2 processing is of course related to intelligence. But the tendency to engage in such processing or to default to autonomous processes is a property of the reflective mind that is not assessed on IQ tests. Consider the Levesque problem (“Jack is looking at Anne but Anne is looking at George”) as an example of avoiding Type 2 processing. The subjects who answer this problem correctly are no higher in intelligence than those who do not, at least in a sample of university students studied by Maggie Toplak in my own laboratory.
Disjunctive reasoning problems such as Levesque’s Anne problem require the decoupling of cognitive representations and the computation of possible worlds with the decoupled representations—one of the central operations of the algorithmic mind (and one of the processes at the heart of measured intelligence). But clearly one has to discern the necessity of disjunctive reasoning in this situation in order to answer correctly. One has to avoid the heuristic reaction: “Oh, since we don’t know whether Anne is married or not we cannot determine anything.” And with respect at least to these particular problems, individuals of high intelligence are no more likely to do so. Goal directions to engage in decoupling operations are not sent from higher-level systems of strategic control in the reflective mind to the algorithmic mind. No doubt, were they sent, the decoupled operations would be more reliably sustained by people of higher intelligence. But intelligence is of no use in this task unless the instruction is sent to engage in the modeling of possible worlds.
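The force of the disjunctive argument can be made explicit by enumerating the two possible worlds. (In the standard version of the puzzle, Jack is stipulated to be married and George unmarried; Anne's status is the unknown, and the question is whether a married person is looking at an unmarried person.) A short sketch of that enumeration:

```python
# Levesque's Anne problem: Jack (married) looks at Anne (status unknown),
# who looks at George (unmarried). Is a married person looking at an
# unmarried person? Check both possible worlds for Anne.
looking_at = [("Jack", "Anne"), ("Anne", "George")]

def married_looks_at_unmarried(anne_married):
    """True if, in this possible world, some married person
    is looking at some unmarried person."""
    married = {"Jack": True, "George": False, "Anne": anne_married}
    return any(married[a] and not married[b] for a, b in looking_at)

# If Anne is married, she is looking at unmarried George;
# if Anne is unmarried, married Jack is looking at her.
# Either way the answer is yes.
print(all(married_looks_at_unmarried(m) for m in (True, False)))  # True
```

The heuristic answer ("cannot be determined") corresponds to refusing to run this enumeration at all; once both decoupled worlds are actually modeled, the answer falls out immediately.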
Figure 12.3. A Basic Taxonomy of Thinking Errors
Theoretically, one might expect a positive correlation between intelligence and the tendency of the reflective mind to initiate Type 2 processing, because it might be assumed that those of high intelligence would be more optimistic about the potential efficacy of Type 2 processing and thus be more likely to engage in it. Indeed, some insight tasks do show a positive correlation with intelligence, one in particular being the task studied by Shane Frederick and mentioned in Chapter 6: A bat and a ball cost $1.10 in total. The bat costs $1 more than the ball. How much does the ball cost? Nevertheless, the correlation between intelligence and a set of similar items is quite modest, .43–.46, leaving plenty of room for performance dissociations of the type that define dysrationalia.14 Frederick has found that large numbers of high-achieving students at MIT, Princeton, and Harvard, when given this and other similar problems, rely on this most primitive of cognitive miser strategies.
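The arithmetic shows why the intuitive answer of 10 cents fails: if the ball cost $0.10, the bat would cost $1.10 and the total $1.20. A two-line check confirms the correct answer:

```python
# Bat-and-ball: let the ball cost x dollars. Then the bat costs x + 1.00,
# and x + (x + 1.00) = 1.10, so 2x = 0.10 and x = 0.05.
ball = 0.05
bat = ball + 1.00

assert abs((ball + bat) - 1.10) < 1e-9  # total is $1.10
assert abs((bat - ball) - 1.00) < 1e-9  # bat costs exactly $1.00 more
print(f"ball = ${ball:.2f}, bat = ${bat:.2f}")  # not the intuitive $0.10
```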
A somewhat more demanding strategy of the cognitive miser is to rely on serial associative processing with a focal bias. It is a more demanding strategy in that it does involve Type 2 processing. It is still a strategy of the miser, though, in that it does not involve fully fleshed-out mental simulation. Framing effects provide examples of the focal bias in the processing of the cognitive miser. When between-subjects framing effects are examined, the tendency to display this type of bias is virtually independent of intelligence. When examined within subjects, the tendency to avoid framing does show a very small correlation with intelligence.15 Individuals of high intelligence are almost as likely to display irrational framing effects as those of lower intelligence. Thus, dysrationalia due to framing will be common.
In the next category of thinking error, override failure, inhibitory Type 2 processes try to take the Type 1 processing of the autonomous mind offline in order to substitute an alternative response, but the decoupling operations fail to suppress the Type 1 response. We would expect that this category of cognitive failure would have the highest (negative) correlation with intelligence. This is because intelligence indexes the computational power of the algorithmic mind that can be used for the decoupling operation. Theoretically, though, we should still expect the correlation to be somewhat less than perfect. The reflective mind must first trigger override operations before any individual differences in decoupling could become apparent. The tendency to trigger override could be less than perfectly correlated with the capacity to sustain override.
That is the theory. What does the evidence say? We might begin by distinguishing so-called hot override from so-called cold override. The former refers to the override of emotions, visceral drives, or short-term temptations (by analogy to what has been called “hot” cognition in the literature). The latter refers to the override of overpracticed rules, Darwinian modules, or Type 1 tendencies which are not necessarily linked to visceral systems (by analogy to what has been called “cold” cognition in the literature).
In the domain of hot override, we know most about delay of gratification situations. Psychologist Walter Mischel pioneered the study of the delay of gratification paradigm with children. The paradigm has many variants, but the essence of the procedure is as follows. Age appropriate rewards (toys, desirable snacks) are established, and the child is told that he or she will receive a small reward (one marshmallow) or a larger reward (two marshmallows). The child will get the larger reward if, after the experimenter leaves the room, the child waits until the experimenter returns and does not recall the experimenter by ringing a bell. If the bell is rung before the experimenter returns, the child will get only the smaller reward. The dependent variable is the amount of time the child waits before ringing the bell.16
Rodriguez, Mischel, and colleagues observed a correlation of just .39 between measured intelligence and delay in this paradigm. Likewise, in a similar study of young children, David Funder and Jack Block observed a correlation of .34 between intelligence and delay (consistent with the idea that this paradigm involves the reflective mind as well as the algorithmic mind, personality measures predicted delay after the variance due to intelligence had been partialled out). Data from adults converge with these findings.
Real-life override failures correlate with intelligence too, but the correlations are modest. For example, the control of addictive behaviors such as smoking, gambling, and drug use is often analyzed in terms of override failure. Thus, it is interesting that Elizabeth Austin and Ian Deary report analyses of the longitudinal Edinburgh Artery Study looking at whether intelligence might be a long-term protective factor against both smoking and drinking (presumably through the greater ability to sustain inhibition of the autonomous mind). In this study, they found no evidence at all that, longitudinally, intelligence served as a protective factor against problem drinking. There was a very small but significant longitudinal link between intelligence and smoking.17
The correlations in the studies I have been discussing were statistically significant, but they are, by all estimates, moderate in absolute magnitude. They leave plenty of room for dissociations between intelligence and successful override of autonomous systems. A very similar story plays out when we look at the relationship between intelligence and “cold” override failure. Two cold override tasks discussed in Chapter 9—belief bias tasks (“roses are living things”) and the Epstein jelly bean task (pick from a bowl with 1 of 10 red versus one with 8 of 100 red)—provide examples. Successful override correlates with intelligence in the range of .35–.45 for belief bias tasks and in the range of .25–.30 for the Epstein task.18 Again, these are significant but modest associations—ones that leave plenty of room for the dissociation that defines dysrationalia.
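The pull of the jelly bean task is easy to make concrete: the large bowl contains more red beans in absolute terms (8 versus 1), which tempts many subjects even though its proportion of red beans is lower. The probabilities themselves settle the matter:

```python
# Epstein's jelly bean task: a 10-bean bowl with 1 red bean versus a
# 100-bean bowl with 8 red beans. The big bowl has more red beans
# in absolute terms, but the small bowl offers the better odds.
p_small = 1 / 10    # probability of red from the small bowl: 0.10
p_large = 8 / 100   # probability of red from the large bowl: 0.08

print(p_small > p_large)  # True: the small bowl is the rational choice
```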
Continuing down the taxonomy in Figure 12.3, we see that irrational behavior can occur for a fifth reason: the right mindware (cognitive rules, strategies, knowledge, and belief systems) is not available to use in decision making. We would expect to see a correlation with intelligence here because mindware gaps most often arise from lack of education or experience. Nevertheless, while it is true that more intelligent individuals learn more things than less intelligent individuals, much of the knowledge (and many of the thinking dispositions) relevant to rationality is picked up rather late in life. Explicit teaching of this mindware is not uniform in the school curriculum at any level. Because such principles are taught so inconsistently, some intelligent people may fail to learn these important aspects of critical thinking. Correlations with cognitive ability have been found to be roughly (in absolute magnitude) in the range of .25–.35 for various probabilistic reasoning tasks, in the range of .20–.25 for various covariation detection and hypothesis testing tasks, and in the range of .05–.20 for various indices of Bayesian reasoning—again, relationships allowing for substantial discrepancies between intelligence and the presence of the mindware necessary for rational thought.19
What Intelligence Tests Miss Page 23