What Intelligence Tests Miss

by Keith E. Stanovich


  The need to detach knowledge and prior belief from current action characterizes many work settings in contemporary society. Consider the common admonition in the retail service sector that “the customer is always right.” This admonition is often interpreted to cover even instances where customers unleash unwarranted verbal assaults on employees. Knowledge that the customer is wrong in such an instance must be set aside; the employee must carry out the peculiar logic of the retail sector or risk being fired. The service worker is supposed to remain polite and helpful and to recognize that this is the socially constructed domain of the market-based transaction. The worker must realize that he or she is not in an actual social interaction with this person, but in a special, indeed unnatural, realm where different rules apply.

  I am not arguing that it is always better to ignore what you know. Obviously, most of the time we bring to bear all the prior knowledge we can in order to solve a problem. I am simply pointing to the fact that modernity is creating more and more situations in which such unnatural decontextualization is required. The science on which modern technological societies are based often requires “ignoring what we know or believe.” Testing a control group when you fully expect it to underperform the experimental group is a form of ignoring what you believe. The necessity of putting aside prior knowledge is not limited to science and the law. Modernity increasingly requires decontextualizing, in the form of stripping away what we personally “know,” through its emphasis on such characteristics as fairness, rule-following despite context, even-handedness, sanctions against nepotism, unbiasedness, universalism, inclusiveness, contractually mandated equal treatment, and discouragement of familial, racial, and religious discrimination.

  Visceral Urges and Willpower: Thinking an Awful Lot and Still Losing

  The tendencies that we have toward miserly information processing are often not apparent to us. When people are presented with a problem, they are often not even aware that there is an alternative framing. They are not aware that they are failing to think as much as they could. When people are engaged in myside thinking, they often are not aware of alternative ways of processing information. When we use anchoring and adjustment or have our thinking affected by vividness, we are rarely aware of alternative ways of thinking. This makes sense. The purpose of the cognitive shortcuts used by the cognitive miser is to provide answers without taxing awareness. If we were aware of having chosen between alternative strategies, then these would not be cognitive shortcuts! Their purpose is subverted if we are aware of choosing alternative ways of decision making and problem solving.

  The situations discussed in the current chapter are different, however. If you felt that you would have chosen the 8 percent bowl in the Epstein jelly bean task, you were at least aware that the 10 percent bowl was probably a better bet. If you thought that it was wrong to push the man over the footbridge in that version of the trolley problem, you were at least aware of a conflicting argument for doing so—the argument that by failing to push the man over you were sentencing four more people to death. On the rose syllogism, if you defaulted to the easy solution of just saying “valid” when you saw the phrase “roses are living things,” you were probably aware that you had been a bit lazy and had not thought very much about the premises of the problem. People are aware that in these situations there is a conflict between what we might call hard thinking and easy thinking. They have some awareness that the hard thinking is pulling them in one direction and the easy thinking in the other.

  There are still other situations where people have no trouble at all realizing that they are made up of multiple minds. In fact, the struggle between minds is almost the defining feature of these situations. They are situations where we have to resist temptation: where we have to get up and make breakfast despite wanting to sleep; have to resist an extra $3 coffee in the afternoon because we know the budget is tight this month; are on a diet and know that our snack should be carrots and not chips; know the garage needs to be cleaned this Saturday, but the Michigan–Notre Dame game is on; have to study for a midterm but there are two parties this weekend; are at a casino having promised to lose no more than $100 and are $107 down now and we really should stop but . . .

  It is only too apparent to us in these instances that there are parts of our brains at war with each other. Our natural language even has a term to designate the hard thinking that is attempting to overcome the easy thinking in these instances: willpower. Willpower is a folk term, but in the last two decades cognitive researchers have begun to understand it scientifically.9

  The Kennedy airplane crash incident that opened this chapter was an example where basic perceptual and cognitive processes needed to be overridden. However, these are not the situations in which we usually invoke willpower. Instead, our colloquial notion of willpower usually refers to the ability to delay gratification or to override visceral responses prompting us to make a choice that is not in our long-term interests. The inability to properly value immediate and delayed rewards is a source of irrationality that keeps many people from maximizing their goal fulfillment. The logic of many addictions, such as alcoholism, overeating, and credit card shopping, illustrates this point. From a long-term perspective, a person definitely prefers sobriety, dieting, and keeping credit card debt low. However, when immediately confronted with a stimulus that challenges this preference—a drink, a dessert, an item on sale—the long-term preference is trumped by the short-term desire. A whole variety of so-called self-control problems fall into this class: drug abuse, smoking, overeating, overspending, gambling, procrastination.

  Psychologists have studied this issue using delayed-reward paradigms in which people have been shown to display an irrationality called intertemporal preference reversal.10 It is an irrational pattern of preference because it prevents us from getting what we most want over the long term. For example, imagine that you have the choice of receiving $100 immediately or $115 in one week’s time (it is assumed that the money is held in escrow by the federal government, so that the thought experiment eliminates issues concerning the probability of receiving the money). Not all people choose the $115 when given this choice. For whatever reasons, some prefer to receive the $100 immediately. The same subjects who were given the first choice now receive another choice: receive $100 in 52 weeks or $115 in 53 weeks. Almost all subjects—regardless of the choice made earlier—prefer the $115 in this comparison. But for the people who chose the $100 in the earlier example, this is a rank inconsistency. In 52 weeks they will be in exactly the situation of the first example—they could be receiving $100 immediately or waiting one week for $115.

  Why does waiting one week seem substantial at one time (so substantial that it is worth $15 to avoid) and insubstantial at another (making the decision one year in advance)? The answer is that humans display this inconsistency because they have so-called hyperbolic discount curves. These are simply the functions that determine how rapidly we discount a reward that is at a distance. Our functions are hyperbolic for sound evolutionary reasons. The only problem is that however well this type of function might have served genetic fitness, it is not a rational discount function for a human trying to maximize personal utility (an exponential curve is the right function for optimal human choice). Hyperbolic functions cause us to overvalue rewards that are close in time and thus to sometimes neglect longer-term goals. They cause us to reverse our preferences across time. From the standpoint of any planner of projects or actions this is suboptimal. Plans for a project made at an earlier point in time will be abandoned at a later point—and, at a still later point, this abandonment will be regretted!
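
  To make the arithmetic concrete, here is a minimal sketch (mine, not the author’s) comparing the two kinds of discount function on the $100 versus $115 choice. It uses the common one-parameter hyperbola V = A / (1 + kD), where A is the amount and D the delay in weeks; the values k = 0.2 and a 0.9-per-week exponential factor are arbitrary assumptions chosen for illustration.

    # Hyperbolic value function: steep near zero delay, flat far out.
    def hyperbolic(amount, delay_weeks, k=0.2):
        return amount / (1 + k * delay_weeks)

    # Exponential value function: discounts by the same factor every week.
    def exponential(amount, delay_weeks, factor=0.9):
        return amount * factor ** delay_weeks

    for value in (hyperbolic, exponential):
        prefers_100_now = value(100, 0) > value(115, 1)      # choice made today
        prefers_100_later = value(100, 52) > value(115, 53)  # same pair, a year out
        print(value.__name__, prefers_100_now, prefers_100_later)

  Run this and the hyperbolic chooser takes the $100 when it is immediate but the $115 when both rewards are a year away (True, then False): the preference reversal. The exponential chooser picks the $115 both times, because pushing both rewards back equally multiplies both values by the same factor and so never changes their ranking.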

  Our hyperbolic discount functions account for many of the temptations that are dysfunctional when we succumb to them. We set the alarm for 7 A.M. when we go to bed at midnight because we judge that the tasks of the next day are better served by getting up at this time than by arising at 9 A.M. But when the alarm rings at 7 A.M. and we press the snooze button, we have reversed our earlier judgment—and later we will regret the reversal. We stock our fridges with expensive diet foods anticipating a period of weight loss, then find ourselves almost instinctively saying yes to the question “supersize that, sir?” at the fast-food counter. We must override the short-term response in situations like this, and the failure to do so is what folk psychology has labeled “lack of willpower.”

  Consider an example of willpower related in Newsweek magazine. In a profile of the wife of Senator John Edwards, Elizabeth Edwards, the writer relates an anecdote from the 2004 presidential election campaign. Mrs. Edwards was struggling to stay on the South Beach Diet. The writer reports that while she was on a connecting flight, a flight attendant came by with the dessert tray and asked Mrs. Edwards if she would like a brownie. Mrs. Edwards replied: “The answer is yes. But if you go away I’ll be happier in the long run” (Henneberger, 2004, p. 31). Mrs. Edwards exercised so-called willpower here, but she may also have used a cognitive tool that is an example of mindware that supports rational thought. She could have called up a so-called bundling strategy well described by psychologists George Ainslie and Howard Rachlin.11 We want to pursue a long-term goal (weight loss through dieting) but a short-term reward (a brownie) tempts us. We know that we should not be eating a brownie every day. That would surely thwart our preeminent goal—weight loss through dieting. On the other hand, we find, in the heat of the moment, that visceral responses to the short-term goal are dominant. And they even have on their side another argument: Why not have the brownie now and start the diet tomorrow? In short, why not get the benefits of the brownie now and the benefits of the long-term diet as well? Our ability to create alternative worlds—a key feature of the Type 2 processing carried out by the algorithmic and reflective minds—tells us why not: Tomorrow we will be in exactly the same position as we are now, and we will opt for the brownie then as well, and so on, and so on.

  At this point we could do some hard thinking and go one more step. We could create a rule that reclassifies the meaning of having a brownie today: Having a brownie today stands for having a brownie on every day of the future. This rule makes it clear that having the brownie today totally thwarts our preeminent goal of dieting. If I have a brownie—even this one—my whole weight-loss plan is threatened. The total loss is now magnified so that it can at least compete with the over-valued short-term gain from eating the brownie.

  We now have a reframing of the problem that has the motivational force to at least compete with short-term visceral interests (this is of course not to say that the rule will win; only that it has the force now to make this a genuine struggle—an overtime game rather than a blowout, to use a sports analogy). Using our facility for language, we can instantiate rules that have the effect of “bundling” together behavioral actions to be taken in the future so that they can acquire the motivational force to override an action that right now threatens our long-term goals.
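
  Ainslie’s account of why bundling works is quantitative, and a toy sketch can show the flip. Suppose (these numbers are all my own invention, not the book’s) that value falls off as the hyperbola V = A / (1 + kD) with k = 1 per day, that a brownie is worth 1 unit of pleasure eaten immediately, and that each day of sticking to the diet is worth 4 units realized 10 days later.

    # Hyperbolic value of an amount at a given delay (k = 1 per day, assumed).
    def v(amount, delay_days, k=1.0):
        return amount / (1 + k * delay_days)

    # Today's choice viewed in isolation: the brownie wins.
    print(v(1, 0), v(4, 10))                        # 1.0 versus ~0.36

    # Thirty days of choices bundled together and all valued today:
    brownies = sum(v(1, d) for d in range(30))      # ~3.99
    dieting = sum(v(4, d + 10) for d in range(30))  # ~5.40: the diet wins
    print(brownies, dieting)

  Day by day, the steep front end of the hyperbola lets each brownie beat that day’s share of the dieting payoff; summed over the whole bundle, the diet’s larger rewards dominate. That is the motivational force the reclassification rule supplies.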

  The example here points up an important issue that leads us to the topic of the next chapter. Overriding the responses primed by our unconscious minds is a process that utilizes content. Declarative knowledge and strategic rules (linguistically coded strategies) are brought to bear during the override process. This mindware is often a language-based proposition with motivational force that can prime a response system. We might call up information as basic and simple as near-universally understood aphorisms—“a penny saved is a penny earned,” “beauty is only skin deep”—all in an effort to dampen response priming from visceral or emotional modules.

  A problem arises, though, in cases where the relevant mindware has not been learned by the individual—it is not available as an alternative control system that could influence behavior. There are thus situations where the individual might wish to override automatic responses but not have the appropriate mindware installed for the situation. This is a mental problem that causes irrational behavior and that I have called a mindware gap.

  TEN

  Mindware Gaps

  Debates about rationality have focused on purely cognitive strategies, obscuring the possibility that the ultimate standard of rationality might be the decision to make use of superior tools.

  —Richard Larrick, Blackwell Handbook of Judgment and Decision Making, 2004

  We cannot defy the laws of probability, because they capture important truths about the world.

  —Amos Tversky and Daniel Kahneman, Judgment under Uncertainty: Heuristics and Biases, 1982

  In the past several chapters, I have sketched some of the characteristics of the cognitive miser. But being a cognitive miser is not the only cause of poor thinking. People also fail to reach their goals because of mindware problems. Mindware is a generic label for the rules, knowledge, procedures, and strategies that a person can retrieve from memory in order to aid decision making and problem solving. Good thinking may be impossible because people have failed to acquire important mindware—they might lack the rules, strategies, and knowledge that support rational thinking. A second mindware problem arises because some knowledge can actually be the cause of irrational behavior and thought. The first problem, what I call mindware gaps, is the focus of this chapter. The second problem, termed contaminated mindware, is the subject of the next.

  Mindware Problems in the Real World: Two Tragic Examples of the Effects of Mindware Gaps

  Autism is a developmental disability characterized by impairment in reciprocal social interaction, delayed language development, and a restricted repertoire of activities and interests. The noncommunicative nature of many autistic children, who may be normal in physical appearance, makes the disorder a particularly difficult one for parents to accept. It is therefore not hard to imagine the excitement of parents of autistic children when, in the late 1980s and early 1990s, they heard of a technique coming out of Australia that enabled autistic children who had previously been totally nonverbal to communicate.

  This technique for unlocking communicative capacity in nonverbal autistic individuals was called facilitated communication, and it was uncritically trumpeted in such highly visible media outlets as 60 Minutes, Parade magazine, and the Washington Post. The claim was made that autistic individuals and other children with developmental disabilities who had previously been nonverbal had typed highly literate messages on a keyboard when their hands and arms had been supported over the keyboard by a sympathetic “facilitator.” Not surprisingly, these startling verbal performances on the part of autistic children who had previously shown very limited linguistic behavior spawned incredibly high hopes among their parents. It was also claimed that the technique worked for individuals with severe intellectual disability who were nonverbal. The excitement of the parents was easy to understand, and everyone sympathized with their hope.

  Sadly, though, this story has no happy ending. Throughout the early 1990s, behavioral science researchers the world over watched in horrified anticipation, almost as if observing cars crash in slow motion, while a predictable tragedy unfolded before their eyes—predictable because the researchers had much experience with trying to fill (via teaching) the mindware gap that made the tragedy inevitable.

  That mindware gap was a failure to appreciate some of the most critical features of scientific thinking—most notably the testing of alternative explanations by using a control group. The claims for the efficacy of facilitated communication were disseminated to hopeful parents before any controlled studies had been conducted. Controlled studies were imperative in this situation because there were many obvious alternative explanations for the phenomenon being observed. The facilitator, almost always a sympathetic individual who was genuinely concerned that the child succeed, had numerous opportunities to consciously or unconsciously direct the child’s hand to the vicinity of keys on the keyboard. That cuing by the facilitator might have been occurring was also suggested by the observation that the children sometimes typed out complicated messages while not even looking at the keyboard. Additionally, highly literate English prose was produced by children who had not previously been exposed to the alphabet.

  By now, more than a decade’s worth of studies testing the claims of facilitated communication with appropriate experimental controls have been reported.1 Each study has unequivocally demonstrated the same thing: The autistic child’s performance depended on tactile cuing by the facilitator. Many of these studies set up a situation in which the child and the facilitator were each presented with a drawing of an object but neither could see the other’s drawing. When both child and facilitator were looking at the same drawing, the child typed the correct name of the drawing. However, when the child and the facilitator were shown different drawings, the child typed the name of the facilitator’s drawing, not the one at which the child was looking. Thus, the responses were being determined by the facilitator rather than the child. It is no overstatement to say that facilitated communication did indeed result in tragedy. For example, at some centers, during facilitated sessions on the keyboard, clients reported having been sexually abused by a parent in the past. Children were subsequently removed from their parents’ homes, only to be returned later when the charges of abuse proved to be groundless.
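
  The inferential logic of those mismatched-drawing trials is simple enough to render in a few lines of code. Here is a toy sketch, using invented trial data (none of it from any actual study), of how matched and mismatched trials discriminate between child authorship and facilitator cuing:

    # Hypothetical trials: (what the child sees, what the facilitator sees,
    # what gets typed). All data invented for illustration.
    trials = [
        ("dog",  "dog",  "dog"),    # matched trial: uninformative by itself
        ("cup",  "tree", "tree"),   # mismatched trial
        ("ball", "shoe", "shoe"),   # mismatched trial
    ]
    mismatched = [(c, f, t) for c, f, t in trials if c != f]
    if all(t == f for c, f, t in mismatched):
        print("Typed words track the facilitator's drawing:")
        print("consistent with facilitator cuing, not child authorship.")

  Only the mismatched trials carry evidential weight; the matched trials are the ones that, without the control, had looked like genuine communication.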

  The clinicians responsible for the facilitated communication tragedy were not unintelligent people. Nevertheless, their beliefs and behavior were irrational and caused great harm because they had a mindware gap. They lacked critical thinking strategies that would have prevented them from making mistaken causal inferences. They were smart people acting foolishly because of a mindware gap.

  Another mindware gap was exposed, with consequences just as tragic, in an even more recent set of cases. In 2003, Sally Clark, an English attorney, was released from prison when her conviction for killing her two infant sons was overturned. Five months later, pharmacist Trupti Patel of Maidenhead, England, had her conviction for murdering her children overturned as well.2 Mrs. Clark and Mrs. Patel had many things in common. Both had suffered recurring infant deaths in their families. Both had been charged with killing their infants. The evidence against both mothers was quite ambiguous. And—finally—both were convicted because the judge, the jury, and, most notably, an expert witness at their trials suffered from mindware gaps.
