One evening, in July 1999, John F. Kennedy Jr. set off for Martha’s Vineyard in a small aircraft with his wife and sister-in-law and, just miles from his destination, piloted the aircraft into the ocean after he became disoriented in the dark and haze. Journalist Malcolm Gladwell describes Kennedy’s errors as an instance of override failure.1 Kennedy could not trump Type 1 tendencies with Type 2 thinking. He could not trump normal thinking defaults with the rules he had learned about instrument flying. Specifically, he could not keep his wings level when he could not find lights marking the horizon, did not realize that the plane was in a bank, and ended up with his plane in a spiral dive.
Without a visible horizon, a bank is undetectable through felt gravitational forces and the pilot feels level, although in fact he is not. This is just the situation that creates the necessity for an override by our conscious minds—brain subsystems are signaling a response that is nonoptimal, and they must be overridden by acquired knowledge. In this case, the nonoptimal response was to navigate the plane up and down in an effort to escape the clouds and haze and thus reveal the horizon. The acquired knowledge and correct response was to use the instruments to keep the plane level, but this is what Kennedy could not bring himself to do consistently. According to Gladwell, “Kennedy needed to think, to concentrate on his instruments, to break away from the instinctive flying that served him when he had a visible horizon” (p. 90). Instead, the learned tendencies of instrument flying lost a war with basic perceptual instincts not applicable to the situation. By the end, “he had fallen back on his instincts—on the way the plane felt—and in the dark, of course, instinct can tell you nothing” (p. 90). The National Transportation Safety Board report on the crash, which details the plane’s movements in the last few minutes, reveals a desperate attempt to find the horizon visually—the natural tendency built into us. But night flying requires that this tendency be overridden and other learned behaviors be executed instead.
In previous chapters, I have discussed many situations where the cognitive miser fails to consciously process information and unthinkingly uses default processing modes that, in some situations, lead to irrational responses. The Kennedy case seems not quite like this. It is not quite like passively accepting a frame that is provided (Chapter 7) or thoughtlessly responding to a novel problem like the Levesque problem of Chapter 6 (“Jack is looking at Anne but Anne is looking at George. Jack is married but George is not”). Kennedy was not a cognitive miser in the sense that he failed to think at all. Plus—he knew the right things to do. Kennedy had been taught what to do in this situation and, given that his life and the lives of others were at stake, he clearly was thinking a lot. What happened was that the right behavioral patterns lost out to the wrong ones. The right response was probably in Kennedy’s mind at some point (unlike the case of the Levesque problem), but it lost out to the wrong response. Kennedy was thinking, but the right thinking lost out—which of course raises the question: lost out to whom? Given that all thinking is going on in the same brain, this suggests the possibility that there are different minds in the same brain—precisely what the tripartite model discussed in Chapter 3 suggested. There are many different nonconscious subsystems in the brain that often defeat the reflective, conscious parts of our brains.2 In Kennedy’s case, he lost out to ancient evolutionarily adapted modules for balance, perception, and orientation. This is a common occurrence, but an even more common one is for rational responses to lose out to a suite of evolutionarily adapted modules related to emotional regulation.
The Trolley Problem: Overriding the Emotions
To get ourselves warmed up for thinking about the emotions, let’s contemplate killing someone. Do not get too upset, though—it will be for a good cause. I would like to discuss a hypothetical situation that is well traveled in moral philosophy—the trolley problem. There are many variants in the literature,3 but basically it goes like this. Imagine you are watching a runaway trolley that has lost its brakes and is rolling down a hill toward five people standing on the tracks below, who will certainly be killed by it. The only way to avoid this tragedy is for you to hit a nearby switch. This switching device will send the trolley down an alternative track on which there is only one person standing who will be killed by the trolley. Is it correct for you to hit the switch?
Most people say that it is—that it is better to sacrifice one person in order to save five.
Consider now an alternative version of this hypothetical studied by Harvard psychologist Joshua Greene, who has done work on the cognitive neuroscience of moral judgment. This alternative version is called the footbridge problem. As before, a runaway trolley that has lost its brakes is rolling down a hill and is certain to kill five people on the tracks below. This time you are standing on a footbridge spanning the tracks in between the trolley and the five people. A large stranger is leaning over the footbridge and if you push him over the railing he will land on the tracks, thus stopping the trolley and saving the five people (and no one will see the push). Should you push him over? Most people say no.
We all certainly can understand why there is a tendency to say no in the second case. The second case is just . . . yucky . . . in a way the first case is not. So, the fact that we all have these intuitions is understandable. The problem comes about when some people want to justify these intuitions, that is, to say that both intuitions are right—that it is right to sacrifice one to save five in the first case and not right to sacrifice one to save five in the second. As Greene notes, “while many attempts to provide a consistent, principled justification for these two intuitions have been made, the justifications offered are not at all obvious and are generally problematic. . . . These intuitions are not easily justified. . . . If these conclusions aren’t reached on the basis of some readily accessible moral principle, they must be made on the basis of some kind of intuition. But where do these intuitions come from?” (2005, p. 345).
To address this question, Greene and colleagues ran studies in which subjects responded to a variety of dilemmas like the trolley problem (termed less personal dilemmas) and a variety of dilemmas like the footbridge dilemmas (termed more personal dilemmas) while having their brains scanned. The brain scanning results confirmed that the more personal dilemmas were more emotionally salient and activated to a greater extent brain areas associated with emotion and social cognition: the posterior cingulate cortex, amygdala, medial prefrontal cortex, and superior temporal sulcus. The less personal dilemmas, in contrast, “produced relatively greater neural activity in two classically ‘cognitive’ brain areas associated with working memory function in the inferior parietal lobe and the dorsolateral prefrontal cortex” (Greene, 2005, p. 346). These are brain areas associated with overriding the decisions of the unconscious mind.
One interesting finding concerned the subjects who defied the usual pattern and answered yes to the footbridge-type problems—who sacrificed one to save five even in the highly personal dilemmas. They took an inordinately long time to make their responses. Greene and colleagues looked deeper into this finding and compared the brain scans on slow trials on which subjects said yes to footbridge-like problems (save the five) to the brain scans on fast trials on which subjects gave the majority response on such personal problems (the no response—don’t save the five). The brain looked different on yes trials. The areas of the brain associated with overriding the emotional brain—the dorsolateral prefrontal cortex and parietal lobes—displayed more activity on those trials. What was happening for these individuals was that they were using Type 2 processing to override Type 1 processing coming from brain centers that regulate emotion. These subjects were realizing that if it was correct to divert the train toward one person to save five, it was also the right thing to do to push the large man over the footbridge in order to save five.
Most subjects are not like these subjects, however—they do not override the emotions in the footbridge dilemma. They engage in a cognitive struggle but their “higher” mind loses out to the emotions. It is thus not surprising that at a later time these subjects can find no principled reason for not sacrificing for greater gain in the footbridge case—because no principle was involved. The part of their minds that deals with principles lost out to the emotional mind. These people were left in a desperate attempt to make their two totally contradictory responses cohere into some type of framework—a framework that their conscious minds had not actually used during their responses to the problems.
It is a general fact of cognition that subjects are often unaware that their responses have been determined by their unconscious minds, and in fact they often vociferously defend the proposition that their decision was a conscious, principled choice. We tend to try to build a coherent narrative for our behavior despite the fact that we are actually unaware of the brain processes that produce most of it. The result is that we tend to confabulate explanations involving conscious choice for behaviors that were largely responses triggered unconsciously, a phenomenon on which there is a large literature.4 The tendency to give confabulated explanations of behavior may impede cognitive reform that can proceed only if we are aware of the autonomous nature of certain of our brain subsystems.
Fighting “Cold” Heuristic Tendencies and Still Losing
Psychologists differentiate thought that is affect-laden from thought that is relatively free of affect. Trying to think in ways that override the contaminating effects of the emotions is an example of what psychologists term hot cognition. But our conscious thinking can lose out to unconscious thinking even when the emotions are not involved—that is, when we are engaged in purely cold cognition.5 We can, in fact, let nonconscious processing determine our behavior even when, consciously, we know better. For example, would you rather have a 10 percent chance of winning a dollar or an 8 percent chance of winning a dollar? A no-brainer? But if you are like many of the people in experiments by psychologist Seymour Epstein and colleagues, you might actually have chosen the latter.6
Yes—Epstein found that it is actually possible to get subjects to prefer an 8 percent chance of winning a dollar over a 10 percent chance of winning a dollar. Here’s how. Subjects in several of his experiments were presented with two bowls of jelly beans. In the first were nine white jelly beans and one red jelly bean. In the second were 92 white jelly beans and 8 red. A random draw was to be made from one of the two bowls and if a red jelly bean was picked, the subject would receive a dollar. The subject could choose which bowl to draw from. Although the two bowls clearly represent a 10 percent and an 8 percent chance of winning a dollar, a number of subjects chose the 100-bean bowl, thus reducing their chance of winning. The majority did pick the 10 percent bowl, but a healthy minority (from 30 to 40 percent of the subjects) picked the 8 percent bowl. Although most of these subjects were aware that the large bowl was statistically a worse bet, that bowl also contained more enticing winning beans—the 8 red ones. Many could not resist trying the bowl with more winners despite some knowledge of its poorer probability. That many subjects were aware of the poorer probability but failed to resist picking the large bowl is indicated by comments from some of them such as the following: “I picked the one with more red jelly beans because it looked like there were more ways to get a winner, even though I knew there were also more whites, and that the percents were against me” (Denes-Raj and Epstein, 1994, p. 823). In short, the tendency to respond to the absolute number of winners, for these subjects, trumped the formal rule (pick the one with the best percentage of reds) that they knew was the better choice.
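If the arithmetic seems abstract, a short simulation makes it concrete. The sketch below (my own illustration in Python, not anything from Epstein’s experiments) draws repeatedly from each bowl and tallies how often a red bean comes up:

import random

def win_rate(reds, total, trials=100_000):
    # Fraction of random draws that pick a red (winning) bean.
    bowl = ["red"] * reds + ["white"] * (total - reds)
    return sum(random.choice(bowl) == "red" for _ in range(trials)) / trials

print(win_rate(1, 10))   # small bowl: about 0.10
print(win_rate(8, 100))  # large bowl: about 0.08

Run it and the small bowl wins roughly 10 draws in every 100 while the large bowl wins roughly 8; the extra red beans never compensate for the extra white beans surrounding them.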
Perhaps you think that you would have picked the small bowl, the correct one (you are probably right—the majority do in fact pick that bowl). Perhaps you do not feel that this was a problem of cold cognition that involved much of a struggle for you. Then maybe you will experience more of a struggle—of a cognitive battle that “you” may well lose—in the next example.
Consider the following syllogism. Ask yourself if it is valid—whether the conclusion follows logically from the two premises:
Premise 1: All living things need water
Premise 2: Roses need water
Therefore, Roses are living things
What do you think? Judge the conclusion either logically valid or invalid before reading on.
If you are like about 70 percent of the university students who have been given this problem, you will think that the conclusion is valid. And if you did think that it was valid, you would be wrong.7 Premise 1 says that all living things need water, not that all things that need water are living things. So, just because roses need water, it does not follow from Premise 1 that they are living things. If that is still not clear, it will probably become clear after you consider the following syllogism with exactly the same structure:
Premise 1: All insects need oxygen
Premise 2: Mice need oxygen
Therefore, Mice are insects
Now it seems pretty clear that the conclusion does not follow from the premises.
If the logically equivalent “mice” syllogism is solved so easily, why is the “rose” problem so hard? Well, for one thing, the conclusion (roses are living things) seems so reasonable and you know it to be true in the real world. And that is the rub. Logical validity is not about the believability of the conclusion—it is about whether the conclusion necessarily follows from the premises. The same thing that made the rose problem so hard made the mice problem easy. The fact that “mice are insects” is false in the world we live in made it easier to see that the conclusion did not follow logically from the two premises.
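The point can even be checked mechanically. The sketch below (my own illustration in Python, not drawn from any study cited here) encodes the shared form of the two syllogisms, “All A are B; all C are B; therefore all C are A,” and exhibits a single countermodel, a possible world in which both premises hold and the conclusion fails:

# Two individuals are enough to break the argument form.
domain = ["mouse", "ant"]
is_insect = {"mouse": False, "ant": True}      # A: "is an insect"
needs_oxygen = {"mouse": True, "ant": True}    # B: "needs oxygen"
is_mouse = {"mouse": True, "ant": False}       # C: "is a mouse"

premise1 = all(needs_oxygen[x] for x in domain if is_insect[x])  # All A are B: True
premise2 = all(needs_oxygen[x] for x in domain if is_mouse[x])   # All C are B: True
conclusion = all(is_insect[x] for x in domain if is_mouse[x])    # All C are A: False

print(premise1, premise2, conclusion)  # True True False

Because such a world exists, the conclusion does not follow from the premises, no matter what we happen to know about roses, mice, or oxygen.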
In both of these problems, prior knowledge about the nature of the world (that roses are living things and that mice are not insects) is becoming implicated in a type of judgment that is supposed to be independent of content: judgments of logical validity. In the rose problem, prior knowledge was interfering, and in the mice problem prior knowledge was facilitative. The rose syllogism is an example of cold cognition involving a conflict between a natural response and a more considered rule-based response. Even if you answered it correctly, you no doubt felt the conflict. If you did not answer it correctly, then you have just experienced a situation in which you thought a lot but lost out to a more natural processing tendency to respond to believability rather than validity.
Syllogisms where validity and prior knowledge are in conflict assess an important thinking skill—the ability to maintain focus on reasoning through a problem without being distracted by our natural tendency to use the easiest cue to process (our natural tendency to be cognitive misers). These problems probe our tendencies to rely on attribute substitution when the instructions tell us to avoid it. In these problems, the easiest cue to use is simply to evaluate whether the conclusion is true in the world. Validity is the harder thing to process, but it must be focused on while the easier cue of conclusion believability is ignored and/or suppressed.
It is important to realize that the rose-type syllogism is not the type of syllogism that would appear on an intelligence test. It is the type of item more likely to appear on a critical thinking test, where the focus is on assessing thinking tendencies and cognitive styles. The openness of the item in terms of where to focus (on the truth of the conclusion or the validity of the argument) would be welcome in a critical thinking test, where the relative reliance on reasoning versus context may well be the purpose of the assessment. This openness would be unwanted on an intelligence test, where the focus is on (ostensibly) the raw power to reason when there is no ambiguity about what constitutes optimal performance. On an intelligence test (or any aptitude measure or cognitive capacity measure) the syllogism would be stripped of content into “all As are Bs” form. Alternatively, unfamiliar content would be used, such as this example with the same form as the “rose” syllogism:
Premise 1: All animals of the hudon class are ferocious
Premise 2: Wampets are ferocious
Therefore, Wampets are animals of the hudon class
Items like this strip away the “multiple minds in conflict” aspect of the problem that was the distinguishing feature of the rose syllogism. Problems that do not involve such conflict tap only the power of the algorithmic mind and fail to tap important aspects of the reflective mind. For example, research has shown that performance on rose-type syllogisms is correlated somewhat with intelligence. However, thinking dispositions that are part of the reflective mind—dispositions such as cognitive flexibility, open-mindedness, context independence, and need for cognition—can predict variance in conflict syllogisms that intelligence cannot.8
Finally, although the rose syllogism may seem like a toy problem, it is indexing a cognitive skill of increasing importance in modern society—the ability to reason from the information given and at least temporarily to put aside what we thought before we received new information. For example, many aspects of the contemporary legal system put a premium on detaching prior belief and world knowledge from the process of evidence evaluation. There has been understandable vexation at the rendering of odd jury verdicts that had nothing to do with the evidence but instead were based on background knowledge and personal experience. Two classic cases from the 1990s provide examples. If the polls are to be believed, a large proportion of Americans were incensed at the jury’s acquittal of O. J. Simpson. Similar numbers were appalled at the jury verdict in the first trial of the officers involved in the Rodney King beating. What both juries failed to do was to detach the evidence in their respective cases from their prior beliefs.