A large body of research in decision science has indicated that one attribute that is regularly substituted for an explicit assessment of decision costs and benefits is an affective valuation of the prospect at hand.4 This is often a very rational attribute to substitute—affect does convey useful signals as to the costs and benefits of outcomes. A problem sometimes arises, however, when affective valuation is not supplemented by any analytic processing and adjustment at all. For example, sole reliance on affective valuation can make people insensitive to probabilities and to quantitative features of the outcome that should affect decisions. One study demonstrated that people’s evaluation of a situation in which they might receive a shock is insensitive to the probability of receiving the shock because their thinking is swamped by affective evaluation of the situation. People were willing to pay almost as much to avoid a 1 percent probability of receiving a shock as they were to pay to avoid a 99 percent probability of receiving a shock. Clearly the affective reaction to the thought of receiving a shock was overwhelming the subjects’ ability to evaluate the associated probabilities.
Likewise, research by resource economists studying the public’s valuation of environmental damage indicates again that affective reaction interferes with people’s processing of numerically important information. It was found that people would pay little more to save 200,000 birds from drowning in oil ponds (mean estimate $88) than they would pay to save 2,000 birds ($80). The authors speculated that the affective reaction to birds drowning in oil was determining the response here—that the actual number of birds involved had become overwhelmed by the affect-laden imagery. Christopher Hsee and colleagues confirmed this interpretation in a study in which they had subjects respond to a hypothetical situation: a group of university researchers had found pandas in a remote Asian region, and the subjects were asked how much they would donate to save four pandas. Another group was asked what they would donate to save one panda. Both groups simply received a paragraph without supplemental visual information. Because the numbers are lower here than in the bird study, they were easier to evaluate and think about and, in these conditions, subjects would donate more to save four pandas (a mean of $22.00) than to save one (a mean of $11.67). In two comparable conditions, groups evaluated their prospective donations to save pandas in the presence of pictures of cute pandas. When the questions were accompanied by affect-laden photos, subjects donated no more to save four pandas than to save one. The quantitative aspect of the situation was lost because it was overwhelmed by a judgment determined solely by affective valuation.
Affect substitution is implicated in the difficulty that people have following the standard advice to buy low and sell high in stock market investing. When the stock market is high, euphoria reigns and a positive affect hangs over stock investing—encouraging nonprofessionals (and many professionals as well!) whose judgment is primed by affective cues to buy. The opposite happens when the market has fallen. People have lost money, and fear of more loss dominates the evaluative atmosphere. Thinking of the stock market triggers negative affective reactions and people do not buy, and often they are prompted to sell. Thus, affect valuation primes people to buy high and sell low—just the opposite of what they should do. And, in this domain, being a cognitive miser can be costly. As illustrated in the example of the mutual funds discussed in the last chapter, people lost billions of dollars in forgone gains during the period 1998–2001 because they bought high and sold low. Affect substitution is one cognitive characteristic (others are loss aversion, overconfidence, and over-explaining chance) that contributes to this costly irrational behavior.
Tools of the Cognitive Miser: Vividness, Salience, and Accessibility
The cognitive miser is very sensitive to vivid presentations of information. The inability to override the influence of vivid but unrepresentative data is a recurring cause of dysrationalic behavior and beliefs in the real world. Here is an example. A friend drives you 20 miles to the airport where you are getting on a plane for a trip of about 750 miles. Your friend is likely to say, “Have a safe trip,” as you part. This parting comment turns out to be sadly ironic, because your friend is three times more likely to die in a car accident on the 20-mile trip back home than you are on your flight of 750 miles. Driving automobiles is an extremely dangerous activity, compared to almost any other activity in our lives, yet the deaths due to automobile crashes are not presented as vividly and saliently as the crash of a large airliner.5 It is the way we are biased toward vivid information that accounts for the apparent irrationality of person A’s wishing person B safety, when it is person A who is in more danger.
Subsequent to the terrorist attacks of September 11, 2001, travel by airline decreased because people were afraid of flying. Of course, people continued to travel. They did not just stay home. They simply took their trips by other means—in most cases by automobile. Since automobile travel is so much more dangerous than flying, it is a statistical certainty that more people died because they switched to driving. In fact, researchers have estimated that over 300 more people died in the final months of 2001 because they traveled by car rather than by plane. One group of researchers was able to come up with a vivid statistic to convey just how dangerous driving is. They calculated that for flying to be as dangerous as driving, an incident on the scale of September 11 would have to occur once a month!
Misleading personal judgments based on the vividness of media-presented images are widespread in other areas as well. For example, risks that we face such as the possibility of developing diabetes cause less worry than risks such as contracting a staph infection in a hospital, even though the former will affect 45 million Americans and the latter only 1,500 in a year. This is despite the fact that, personally, we can do something about the former (by changing our diet and exercising) but not the latter.
The cognitive miser relies on the easily processed cue of salience, but this reliance can lead the miser astray. Certain formats for presenting information appear to be more salient than others. A study by Kimihiko Yamagishi demonstrated a similar phenomenon by showing that people rated a disease that killed 1,286 out of 10,000 people as more dangerous than one that killed 24.14 percent of the population. Again, the vividness of representing 1,286 actual people rather than an abstract percentage is what triggers an affective response that leads to a clearly suboptimal judgment. Pointing to the potentially important practical implications of such a finding, Yamagishi titled his article “When a 12.86% Mortality Is More Dangerous than 24.14%: Implications for Risk Communication.”6
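To make the arithmetic behind Yamagishi’s comparison explicit, here is a minimal sketch (my illustration, not part of the original study) that converts the frequency format into a percentage so the two diseases can be compared on the same scale:

```python
# Convert the frequency framing into a percentage so the two risks
# can be compared directly (figures as reported in the text above).
deaths_per_10000 = 1286
frequency_as_percent = deaths_per_10000 / 10000 * 100   # 12.86%
percentage_framed_risk = 24.14                           # 24.14%

print(f"Frequency-framed disease: {frequency_as_percent:.2f}% mortality")
print(f"Percentage-framed disease: {percentage_framed_risk:.2f}% mortality")
# The frequency-framed disease kills roughly half as many people,
# yet subjects rated it as the more dangerous of the two.
```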
Of course, even more vivid than a frequency statistic is a picture or a photograph—that is, something that turns a statistic into a person. Cognitive scientist Paul Slovic reported a study in which people were asked to donate money to the charity Save the Children. In one condition, termed the Statistical Victims condition, subjects were given statistical information such as the following: “Food shortages in Malawi are affecting more than 3 million children; In Zambia, severe rainfall deficits have resulted in a 42% drop in maize production from 2000; As a result, an estimated 3 million Zambians face hunger; More than 11 million people in Ethiopia need immediate food assistance.” Subjects were asked to donate money to help ease these problems. In another condition, termed the Identifiable Victim condition, subjects were shown a photograph of an individual and told a story about the person containing information like the following: “Rokia, a 7-year-old girl from Mali, Africa, is desperately poor and faces a threat of severe hunger or even starvation. Her life will be changed for the better as a result of your financial gift.” Twice as much money was donated in the Identifiable Victim condition as in the Statistical Victims condition.
One salience-related effect that has been studied by behavioral economists is called the money illusion.7 This illusion occurs when people are overly influenced by nominal monetary value. Simply put, it is when the cognitive miser responds only to the face value of a monetary amount without contextualizing it with factors that affect actual buying power, such as inflation, time, and currency exchange rates. The most stunning example of the money illusion was reported in a study which found that people underspend in a foreign currency when the foreign currency is a multiple of the home currency (for example, 1 US dollar = 4 Malaysian ringgits) and overspend when the foreign currency is a fraction of the home currency (for instance, 1 US dollar = .4 Bahraini dinar). This effect shows the influence of the face value of the currency: items look expensive when they cost a multiple of the home currency (and thus people attenuate their spending), and they look cheap when they cost a fraction of it (and thus people are enticed to spend). People cannot suppress the miserly tendency to respond to the face value of the currency even though they know that the face value prior to conversion to the home currency is irrelevant.
The money illusion has some very real public policy consequences. Throughout 2006 and early 2007 there was consternation (and calls for political action) in the United States as gasoline prices spiked to the unprecedented price of over $3 per gallon. There was just one problem. These prices were not unprecedented. Throughout 2006 and early 2007 the price of gas remained below its inflation-adjusted price in 1981. In fact, in terms of affordability (price adjusted for income) the price of gasoline in 2006 was substantially below what it was in 1978–1981.
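As a rough illustration of the inflation adjustment involved, the sketch below converts a 1981 pump price into 2006 dollars using the ratio of consumer price indexes. The index values and the 1981 price are approximate figures chosen for illustration, not data taken from the text:

```python
# Sketch: adjusting a nominal price for inflation using a CPI ratio.
# The index values and the 1981 pump price are approximate,
# illustrative assumptions, not figures from the chapter.
cpi_1981 = 90.9            # approximate CPI-U annual average, 1981
cpi_2006 = 201.6           # approximate CPI-U annual average, 2006
nominal_price_1981 = 1.38  # approximate dollars per gallon in 1981

real_price_in_2006_dollars = nominal_price_1981 * (cpi_2006 / cpi_1981)
print(f"1981 gas price in 2006 dollars: ${real_price_in_2006_dollars:.2f}")
# Roughly $3.06 per gallon under these assumptions, which is why a
# 2006 price just over $3 was not unprecedented in real terms.
```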
Heuristic Processing: Quantity versus Quality in Decision Making
By giving examples of the thinking shortcuts taken by the cognitive miser and their pitfalls, I do not mean to imply that using such shortcuts is always wrong. To the contrary, there is a rich literature in psychology showing that in many situations such heuristic processing is quite useful.8 Heuristic processing is a term often used for Type 1 processing—processing that is fast, automatic, and computationally inexpensive, and that does not engage in extensive analysis of all the possibilities. Thus, one way to describe cognitive misers is to say that they rely to a large extent on heuristic processing.
So certainly I do not wish to deny the usefulness of heuristic processing. Nevertheless, my emphasis will be the opposite—to highlight the dangers of using these heuristics in too many situations, including those that modern society has deliberately designed in order to trap cognitive misers. When we are over-reliant on heuristic processing we lose personal autonomy. Being a cognitive miser makes us vulnerable to exploitation. We give up our thinking to those who manipulate our environments, and we let our actions be determined by those who can create the stimuli that best trigger our shallow automatic processing tendencies. We make the direction of our lives vulnerable to deflection by others who control our symbolic environment. This is what makes defaulting to such heuristics a two-edged sword. Being a cognitive miser preserves processing capacity for other tasks. At the same time, heuristics can be overgeneralized to situations that require not a quick approximation but, rather, precise calculation.
The number of situations where the use of heuristics will lead us astray may not be large, but such situations may be of unusual importance. The importance of a thinking strategy is not assessed by simply counting the number of instances in which it is engaged. We cannot dismiss conscious analytic thinking by saying that heuristics will get a “close enough” answer 98 percent of the time, because the 2 percent of the instances where heuristics lead us seriously astray may be critical to our lives. This point is captured in an interview in Money Magazine with Ralph Wanger, a leading mutual fund manager. Wanger says, “The point is, 99 percent of what you do in life I classify as laundry. It’s stuff that has to be done, but you don’t do it better than anybody else, and it’s not worth much. Once in a while, though, you do something that changes your life dramatically. You decide to get married, you have a baby—or, if you’re an investor, you buy a stock that goes up twentyfold. So these rare events tend to dominate things” (Zweig, 2007, p. 102).
In short, a small subset of all the decisions we will make in our lives might end up being the dominating factors in determining our life satisfaction. Deciding what occupation to pursue, what specific job to take, whom to marry, how to invest, where to locate, how to house ourselves, and whether to have children may, when we look back on our lives decades later, turn out to have determined everything. In terms of raw numbers, these might represent only 20–30 decisions out of the thousands that we have made over many years. But the thousands are just the “laundry of life,” to use Wanger’s phrase. These 20 or 30 are what count. The “nonlaundry” decisions may also be unique, and this may render heuristics unhelpful for two reasons. First, events that are small in number and nonrecurring give unconscious implicit learning mechanisms no chance to abstract information that could be used heuristically. Second, if they are unique, they are probably unprecedented from an evolutionary point of view, and thus there is no chance that unconscious modules that are evolutionary adaptations could help us. For both of these reasons, it is doubtful that heuristics will be adequate. The “quick and dirty” answers that heuristics are likely to provide in the “nonlaundry” part of life could lead us seriously astray.
Cognitive Shortcuts and Personal Autonomy
Consider how some very useful processing heuristics can be easily turned around to work against us because they are too easy to trigger. Several decades ago Amos Tversky and Daniel Kahneman discovered the so-called anchoring and adjustment heuristic.9 The anchoring and adjustment process comes into play when we have to make a numerical estimation of an unknown quantity. In this strategy, we begin by anchoring on the most easily retrievable relevant number that we know. Then we adjust that anchor up or down based on the implications of specific facts that we may know.
This does not seem to be such a bad procedure. A problem arises, however, when the number most available to anchor on is not relevant to the calculation at hand. In a classic experiment, Tversky and Kahneman demonstrated that the anchoring tendency is much too miserly—that it does not bother to assess for relevance. They had subjects watch a spinning wheel and, when the pointer landed on a number (rigged to be the number 65), asked them whether the percentage of African countries in the United Nations was higher or lower than this percentage. After answering higher or lower to this question, the subjects then had to give their best estimate of the percentage of African countries in the United Nations. For another group of subjects it was arranged that the pointer land on the number 10. They were also asked to make the higher or lower judgment and then to estimate the percentage of African countries in the United Nations. Now it is clear that because a spinning wheel was used, the number involved in the first question is totally irrelevant to the task of answering the second question. Yet the number that came up on the spinning wheel affected the answer to the second question. The mean estimate of the first group (the group where the spinning wheel stopped at 65) turned out to be significantly larger (45) than the mean estimate (25) for the second group.
It is clear what is happening here. Both groups are using the anchoring and adjustment heuristic—the high anchor group adjusting down and the low group adjusting up—but their adjustments are “sticky.” They are not adjusting enough because they have failed to fully take into account that the anchor is determined in a totally random manner. The anchoring and adjustment heuristic is revealing a miserly tendency to rely on an anchor regardless of its relevance.
Even when the anchor is not randomly determined, the cognitive miser tends to rely on it too much because using it is easier than trying to retrieve from memory facts about the situation that are actually relevant. It has been found that even experienced real estate agents are overly affected by the listing price of a home when trying to assess its actual value. Anchoring and adjustment is also a critical feature in sales of new automobiles. The salesperson wants the customer to anchor on the MSRP (the manufacturer’s suggested retail price) and bargain down from there—knowing that the adjustment will be “sticky,” that is, that it will be overly influenced by the MSRP and not move far enough from it. Consumer magazines and websites recommend, in contrast, that the customer obtain the invoice price (what the dealer paid the manufacturer for the car) and bargain up from there. For used cars, a similar thing happens. The salesperson wants to bargain from the advertised price. Consumer publications recommend bargaining from a “bluebook” price. Both the salesperson and the consumer magazines are correct. Both know that where the negotiation starts will have a primary influence on where it ends up. Both know that whoever controls the anchor will largely control the negotiation.
Heuristic reliance on anchors has been shown to affect such important contexts as judicial decisions and awards. In personal injury cases, for example, the amount of compensation requested affects the judgment itself as well as the amount awarded to the plaintiff. It has also been shown that prosecutors’ sentencing requests affect the sentences handed down by judges, as well as bail decisions. Judges appear to be cognitive misers too—they succumb to simple heuristics that promise to lighten the cognitive load.
Anchoring effects are related to the mindless use of reference points. Such mindless processing can result in absurd behavior. For example, it can lead people to prefer getting less to getting more (that is, to prefer $5 to $6). How is this possible? A study by Slovic and colleagues provides an example. They found that people rated a gamble with a 7/36 chance to win $9 and a 29/36 chance to lose 5¢ more favorably than a gamble with a 7/36 chance to win $9 and a 29/36 chance to win nothing. Indeed, they report that the latter gamble was rated even less desirable than a gamble with a 7/36 chance to win $9 and a 29/36 chance to lose 25¢. In the two loss conditions, the 5¢ and 25¢ provide reference points against which the $9 looks very large. The no-loss condition does not provide a readily usable small reference point and hence is not rated as favorably. Note that the subjects in this study violated the dominance stricture discussed above, a very fundamental rule of rational choice.10
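To make the dominance violation concrete, here is a minimal sketch (an illustration, not part of the original study) that computes the expected value of each gamble from the probabilities and payoffs reported above:

```python
# The three gambles from the Slovic study, each listed as
# (probability, payoff) pairs for the win and the alternative outcome.
gambles = {
    "win $9 or lose 5 cents":  [(7/36, 9.00), (29/36, -0.05)],
    "win $9 or win nothing":   [(7/36, 9.00), (29/36,  0.00)],
    "win $9 or lose 25 cents": [(7/36, 9.00), (29/36, -0.25)],
}

for name, outcomes in gambles.items():
    expected_value = sum(p * payoff for p, payoff in outcomes)
    print(f"{name}: expected value = ${expected_value:.2f}")

# The no-loss gamble dominates both loss gambles (same chance at $9,
# strictly better alternative outcome), yet subjects rated it as the
# least attractive of the three.
```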