
What Intelligence Tests Miss


by Keith E. Stanovich


  You can see if you were subject to the phenomenon of overconfidence by looking at the answers to the questions that were presented at the beginning of this section:6

  1. 39 years;

  2. 39 books;

  3. the year 1756;

  4. 645 days;

  5. 36,198 feet.

  Recall that you were forewarned about the phenomenon of overconfidence by the title of this section. Because you were forming 90 percent confidence intervals, 90 percent of the time your confidence interval should contain the true value. Only one time in 10 should your interval fail to contain the actual answer. So because you answered only five such questions, your intervals should have contained the correct answer each time—or, at the very most, you should have answered incorrectly just once. Chances are, based on past research with these items, that your confidence intervals missed the answer more than once, indicating that your probability judgments were characterized by overconfidence (despite the warning in the title) like those of most people.
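  To see how unlikely multiple misses are for a genuinely well-calibrated judge, here is a minimal sketch in Python (the helper name is mine) that treats the five questions as independent trials with a 10 percent miss rate and computes the binomial tail:

```python
from math import comb

def prob_at_least(k_min: int, n: int, p: float) -> float:
    """Binomial tail: probability of at least k_min misses in n
    independent trials, each missing with probability p."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(k_min, n + 1))

n_questions, p_miss = 5, 0.10  # five 90 percent confidence intervals
print(f"P(no misses) = {(1 - p_miss)**n_questions:.3f}")                # ~0.590
print(f"P(2+ misses) = {prob_at_least(2, n_questions, p_miss):.3f}")    # ~0.081
```

  On these assumptions, a properly calibrated respondent misses more than once only about 8 percent of the time, so missing two or more answers is reasonably strong evidence of overconfidence.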

  Overconfidence effects have been found in perceptual and motor domains as well as in knowledge calibration paradigms. They are not just laboratory phenomena, but have been found in a variety of real-life domains such as the prediction of sports outcomes, prediction of one’s own behavior or life outcomes, and economic forecasts. Overconfidence is manifest in the so-called planning fallacy: the ubiquitous human tendency to underestimate the time it will take to complete future projects (for example, to complete an honors thesis, to complete this year’s tax forms, to finish a construction project). Nobel Prize winner Daniel Kahneman tells a humorous story of how intractable the planning fallacy is, even among experts who should know better. Years ago, with such a group of decision experts, Kahneman was working on a committee to develop a curriculum to teach judgment and decision making in high schools. The group was meeting weekly to develop the curriculum and to write a textbook. At one point in the series of meetings, Kahneman asked the group, which included the Dean of Education, to estimate how long they thought it would take them to deliver the curriculum and textbook that they were writing. The range of estimates, including those made by the Dean and Kahneman himself, was between eighteen months and two and a half years. At that point it occurred to Kahneman that, because it was the early 1970s and many curriculum and textbook initiatives had been taking place, he should ask the Dean about the many other curriculum groups that the Dean had chaired. He asked the Dean to think back to previous groups concerned with similar projects. How long did it take them to finish? The Dean pondered a bit, then looked a little embarrassed, and told the group that roughly 40 percent of the groups in the past had never finished! Noting the discomfort in the room, Kahneman asked the Dean how long it had taken the groups that did finish. The Dean, again looking somewhat embarrassed, told the committee that he could not think of any group that had finished in less than seven years!7

  The cognitive bias of overconfidence in knowledge calibration has many real-world consequences. People who think they know more than they really do have less incentive to learn more or to correct errors in their knowledge base. People who think their motor or perceptual skills are excellent are critical of the performance of other people but do not subject their own behavior to criticism. For example, surveys consistently show that most people think that their driving skill is above average. Consider a survey by the Canada Safety Council in which 75 percent of drivers admitted to either talking on the phone, eating, shaving, or applying makeup while driving. Oddly, 75 percent of the same people said they were frustrated and appalled by other drivers they saw eating or talking on the phone! Similarly, thousands of people overconfidently think that their driving is unimpaired by talking on their cell phones. This failure of epistemic rationality (beliefs tracking reality) is proving increasingly costly as inattention-based accidents increase due to the addition of more technological distractions to the driver’s environment. The failure to achieve good probabilistic calibration represents an epistemic irrationality in humans that appears to be widespread and that may have pervasive consequences. For example, overconfidence among physicians is a pervasive and dangerous problem.8

  The poor calibration of driving abilities relates to a larger area of social psychological research that has focused on biased self-assessments. People systematically distort self-perceptions, often but not always in self-enhancing ways.9 In a self-evaluation exercise conducted with 800,000 students taking the SAT, less than 2 percent rated themselves below average in leadership ability relative to their peers. Over 60 percent rated themselves in the top 10 percent in the ability to get along with others. In a study by Justin Kruger and David Dunning it was found that the bottom 25 percent of scorers on a logic test thought, on average, that they were at the 62nd percentile of those taking the test. In short, even the very lowest scorers among those taking the test thought that they were above average!

  There is a final recursive twist to this myside processing theme. Princeton psychologist Emily Pronin has surveyed research indicating that there is one additional domain in which people show biased self-assessments. That domain is in the assessment of their own biases.10 Pronin summarizes research in which subjects had to rate themselves and others on their susceptibility to a variety of cognitive and social psychology biases that have been identified in the literature, such as halo effects and self-serving attributional biases (taking credit for successes and avoiding responsibility for failures). Pronin and colleagues found that across eight such biases, people uniformly felt that they were less biased than their peers. In short, people acknowledge the truth of psychological findings about biased processing—with the exception that they believe it does not apply to them.

  In explaining why this so-called bias blind spot exists, Pronin speculated that when estimating the extent of bias in others, people relied on lay psychological theory. However, when evaluating their own bias, she posited, they fell back on an aspect of myside processing—monitoring their own conscious introspections. Modern lay psychological theory allows for biased processing, so biased processing is predicted for others. However, most social and cognitive biases that have been uncovered by research operate unconsciously. Thus, when we go on the introspective hunt for the processes operating to bias our own minds we find nothing. We attribute to ourselves via the introspective mechanism much less bias than we do when we extrapolate psychological theory to others.

  Another important aspect of myside processing is our tendency to have misplaced confidence in our ability to control events. Psychologist Ellen Langer has studied what has been termed the illusion of control—that is, the tendency to believe that personal skill can affect outcomes determined by chance. In one study, two employees of two different companies sold lottery tickets to their co-workers. Some people were simply handed a ticket, whereas others were allowed to choose their ticket. Of course, in a random drawing, it makes no difference whether a person chooses a ticket or is assigned one. The next day, the two employees who had sold the tickets approached each individual and attempted to buy the tickets back. The subjects who had chosen their own tickets demanded four times as much money as the subjects who had been handed their tickets! In several other experiments, Langer confirmed the hypothesis that this outcome resulted from people’s mistaken belief that skill can determine the outcome of random events.

  People subject to a strong illusion of control are prone to act on the basis of incorrect causal theories and thus to produce suboptimal outcomes. That acting on illusory feelings of control has practical costs is well illustrated in a study by Mark Fenton-O’Creevy and colleagues. They studied 107 traders in four different investment banks in the City of London. The degree of illusory control that characterized each trader was assessed with an experimental task. The subjects pressed keys that they were told either might or might not affect the movement of an index that changed with time. In reality, the keys did not affect the movement of the index. The degree to which subjects believed that their key presses affected the movement of the index was the measure of the degree to which subjects’ thought processes were characterized by an illusion of control. Fenton-O’Creevy and colleagues found that differences in feelings of illusory control were (negatively) related to several measures of the traders’ performance. Traders who were high in the illusion of control earned less annual remuneration than those who were low in the illusion of control. A one standard deviation increase in illusion of control was associated with a decrease in annual remuneration of £58,000.11
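  The £58,000 figure reads as a standardized slope: predicted pay changes by that amount per one-standard-deviation change in the illusion-of-control score. A minimal sketch of the arithmetic in Python (the baseline salary and function name are hypothetical, invented for illustration; only the slope comes from the study as reported here):

```python
BETA_PER_SD = -58_000    # reported slope: GBP per SD of illusion of control
BASELINE_PAY = 200_000   # hypothetical mean remuneration, NOT from the study

def predicted_pay(illusion_z: float) -> float:
    """Predicted annual remuneration (GBP) for a trader whose
    illusion-of-control score sits illusion_z SDs above the sample mean."""
    return BASELINE_PAY + BETA_PER_SD * illusion_z

for z in (-1.0, 0.0, 1.0):
    print(f"z = {z:+.1f}: about £{predicted_pay(z):,.0f}")
```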

  Myside Processing: Egocentrism in Communication and Knowledge Assumptions

  Myside processing biases can disrupt our communication attempts, especially in certain settings. Kruger and colleagues have studied egocentrism in e-mail communication.12 Of course, any written communication requires some perspective-taking on our part because we know that the normal cues of tone, expression, and emphasis are not present. E-mail may be particularly dangerous in this respect, because its ease, informality, and interactiveness might encourage us to think that it is more like face-to-face communication than it really is. In their first study, Kruger and colleagues had one group of subjects send e-mail messages to another group of subjects who then interpreted the messages. Half of the messages sent were sarcastic (“I really like going on dates because I like being self-conscious”) and half were not. Receivers were asked to judge which were sarcastic and which were not, and senders were asked to estimate whether they thought the receiver would properly classify each particular message. Senders were quite optimistic that the receivers could decode virtually every message—the senders thought that the receivers would achieve 97 percent accuracy in their classification. In fact, the receivers correctly interpreted only 84 percent of the messages. Senders had a difficult time adjusting their myside perspective in order to understand that without expressive cues and intonation, it was hard to see that some of these messages were sarcastic.

  That the difficulty people have in understanding the possibility of miscommunication in e-mail really is due to egocentrism was suggested by another experiment. This experiment was one in which the senders read their e-mail messages aloud. However, the oral recordings were not sent to the receiver. Just as in the previous experiment, the receiver interpreted e-mails alone. The purpose of the oral recording was to induce a less egocentric mindset in one group of senders. One group of senders recorded the messages in a manner consistent with their meaning—the senders read sarcastic messages sarcastically and serious messages seriously. The other group, however, read the messages inconsistently—sarcastic messages were read seriously and serious messages sarcastically. As Kruger and colleagues put it, “Our reasoning was simple. If people are overconfident in their ability to communicate over e-mail partly because of the difficulty of moving beyond their own perspective, then forcing people to adopt a perspective different from their own ought to reduce this overconfidence. As a result, participants who vocalized the messages in a manner inconsistent with the intended meaning should be less overconfident than those who vocalized the messages in a manner consistent with the intended meaning” (p. 930).

  Replicating the earlier studies, the consistent group showed a very large overconfidence effect. Their receivers correctly identified only 62.2 percent of the messages, whereas the senders had thought that 81.9 percent of their messages would be accurately interpreted. In contrast, in the inconsistent group, while the receivers correctly identified 63.3 percent of the messages, the senders had been much less optimistic about how many would be interpreted correctly. These senders had predicted (correctly) that only 62.6 percent of their messages would be accurately interpreted.
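  Expressed as a single number, the overconfidence effect here is simply predicted accuracy minus actual accuracy; a quick calculation with the percentages reported above (the function name is mine) makes the contrast between the two groups plain:

```python
def overconfidence(predicted_pct: float, actual_pct: float) -> float:
    """Overconfidence in percentage points: senders' predicted
    interpretation accuracy minus receivers' actual accuracy."""
    return predicted_pct - actual_pct

# Figures reported for the two groups above.
print(overconfidence(81.9, 62.2))   # consistent group:   +19.7 points
print(overconfidence(62.6, 63.3))   # inconsistent group:  -0.7 points
```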

  What the Kruger findings illustrate is how automatically we egocentrically project what we know into the minds of others. Indeed, their studies were inspired by an even more startling demonstration of this tendency. They describe a doctoral dissertation study by Elizabeth Newton in which subjects were asked to tap out the rhythm of a popular song to a listener. The tapper then estimated how many people would correctly identify the song from the taps if the taps were presented to a large group of listeners. The tappers estimated that roughly 50 percent of the listeners would be able to identify the song they were tapping. In fact, only 3 percent of the listeners were able to identify the song from the tapping. We all know this phenomenon. The song is so clear in our own minds that we cannot believe our cryptic hums or taps do not immediately trigger it in our listener. Even knowing about such myside biases does not inoculate us against the illusion: what is in our heads simply does not loom as large to other people as it does to us.

  Myside thinking of this type is implicated in the phenomena of “feature creep” and “feature fatigue” discussed in the consumer literature on electronic devices.13 As more and more complicated features are added to electronic products, the devices become less useful because consumers cannot afford the time it takes to master them. One study done by the Philips Electronics company found that half of its returned products had nothing wrong with them; the consumers simply could not figure out how to use the devices.

  Many companies are designing products with additional features that actually make the product less useful in the end. Writer James Surowiecki mentions the obvious example of Microsoft Word 2003, which has 31 toolbars and over 1500 commands. Why does this feature creep occur? The problem arises because the designers of the products cannot avoid falling into myside thinking. The myside bias of the designers is well described by cognitive scientist Chip Heath, who notes that he has “a DVD remote control with 52 buttons on it, and every one of them is there because some engineer along the line knew how to use that button and believed I would want to use it, too. People who design products are experts. . . . and they can’t imagine what it’s like to be as ignorant as the rest of us” (Rae-Dupree, 2007, p. 3).14

  Intelligence and Myside Processing

  In this chapter, I have discussed only a small sampling of the many different ways that psychologists have studied myside processing tendencies.15 Myside processing is ubiquitous. Is high intelligence an inoculation against myside processing bias?

  In several studies of myside bias like that displayed in the Ford Explorer problem that opened this chapter, my colleague Richard West and I have found absolutely no correlation between the magnitude of the bias obtained and intelligence. The subjects above the median intelligence in our sample were just as likely to show such biases as the subjects below the median. It is likewise with the argument generation paradigm that I described (“Tuition should be raised to cover the full cost of a university education”). The tendency to generate more myside arguments than otherside arguments was unrelated to intelligence.16 In several studies, Klaczynski and colleagues found that the higher-IQ subjects in experiments were just as likely to evaluate experimental evidence in a biased manner as were the lower-IQ subjects. Overconfidence effects have been modestly associated with intelligence in a few studies. Subjects with higher intelligence have been shown to display slightly lower overconfidence. Again, though, these are statistically significant but modest associations—ones that leave plenty of room for the dissociation that defines dysrationalia in this domain (highly unwarranted overconfidence in an individual of high intelligence).
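  To see why even a modest negative correlation leaves ample room for that dissociation, consider a small simulation (all numbers here are illustrative assumptions, not estimates from the studies cited):

```python
import random

random.seed(1)
r = -0.2      # assumed modest correlation between intelligence and overconfidence
n = 100_000   # simulated individuals

high_iq = 0
high_iq_and_overconfident = 0
for _ in range(n):
    iq_z = random.gauss(0, 1)
    # Overconfidence modeled as a standard normal correlated with IQ at r.
    oc_z = r * iq_z + (1 - r**2) ** 0.5 * random.gauss(0, 1)
    if iq_z > 1.0:                    # top ~16% of intelligence
        high_iq += 1
        if oc_z > 1.0:                # top ~16% of overconfidence
            high_iq_and_overconfident += 1

print(high_iq_and_overconfident / high_iq)  # ~0.10
```

  Under these toy assumptions, roughly one in ten highly intelligent people still falls in the top sixth of overconfidence, which is exactly the kind of dissociation described here.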

  Most of the myside situations for which we have the strongest evidence for the lack of a link to intelligence involve what West and I call natural myside bias paradigms. These are situations where people show a tendency to evaluate propositions from within their own perspective when given no explicit instructions or cues to avoid doing so. It is probably important to note that these studies did not strongly instruct the subject that myside thinking was to be avoided or that multiple perspective-taking was a good thing. It is likely that the higher-intelligence subjects in the experimental sample would have been better able to comply with these instructions.

  The findings surveyed here on myside bias suggest exactly the same ironic conclusion that was mentioned in the previous chapter on framing effects: Intelligent people perform better only when you tell them what to do. If you tell an intelligent person what a rational requirement is—in the case of this chapter, if you tell them to avoid myside bias, or, in the case of the last chapter, to avoid framing effects—and then give them a task that requires following that stricture, individuals with higher intelligence will adhere to the stricture better than individuals of lower intelligence.

  It is important to note that the literature in education that stresses the importance of critical thinking actually tends to focus on avoiding natural myside bias. It is thus not surprising that we observe massive failures of critical thinking among, for example, university students. They have been selected, in many cases, by admissions instruments that are proxies for intelligence, but such instruments contain no measures of critical thinking defined in this way. Note that, in theory, the tests could contain such assessments. I have just discussed a very small and select sample of tasks that are used to assess myside processing. There are many more. They represent ways to examine an important aspect of rational thought that is not tapped by intelligence tests. Such an aspect of thought (myside bias) represents an important part of cognition that intelligence tests miss.

  NINE

  A Different Pitfall of the Cognitive Miser: Thinking a Lot, but Losing

  The prosperity of modern civilization contrasts more and more sharply with people’s choice of seemingly irrational, perverse behaviors, behaviors that make many individuals unhappier than the poorest hunter/gatherer. As our technical skills overcome hunger, cold, disease, and even tedium, the willingness of individuals to defeat their own purposes stands in even sharper contrast. In most cases these behaviors aren’t naive mistakes, but the product of robust motives that persist despite an awareness of the behaviors’ cost.

 
