Decision scientists have studied the so-called equality heuristic in decision making.3 In a typical experiment, the critical comparison is between two different groups of subjects. One group of subjects is asked to allocate the profits in a firm of partners where the partners themselves had generated unequal revenue—some have earned more for the firm than others. The most common allocation strategy among this group of subjects was to allocate each partner an equal share of the profits. A common rationale for this allocation choice was that “they’re all in it together.”
That this rationale was not very thoughtfully derived was indicated by the results from the second group of subjects. This group of subjects was also asked to make a judgment about the allocation in a firm of partners where the partners themselves had generated unequal revenue. However, this time, the subjects were told to allocate the firm’s expenses for the year (rent, secretarial salaries, etc.) rather than the profits. The most common allocation strategy used by this group of subjects was to allocate each partner an equal share of the expenses. Of course, allocating the expenses equally results in unequal profits. Likewise, the subjects in the first group, in opting for equal profits, were implicitly opting for unequal expenses. The two quantities cannot both be equalized. Interestingly, in the second condition, where subjects made profits unequal by equalizing expenses, they tended to give the very same rationale (“they’re all in it together”) as did the subjects in the first condition!
These results suggest that people were not thoughtfully deciding upon equal profit outcomes (in the first condition) or thoughtfully deciding that equality of fixed costs is really fair (in the second condition) but were instead just settling on a cognitively undemanding heuristic of “equal is fair.” The “equalizing” subjects in these experiments had not thought through the problem enough to realize that there is more than one dimension in play and that not all of the dimensions can be equalized at once. Instead, they ended up equalizing the one dimension that was brought to their attention by the way the problem was framed.
There is no doubt that people who use the heuristic of “divide things equally” think they are making a social decision and they think it is a fair one. But the design logic of these experiments reveals that people are not making a social or ethical judgment at all. Think of what the logic of these experiments has done. It has turned people into Marxists (the first condition) or into advocates of the Wall Street Journal editorial page (the second condition) at a whim—by simply rephrasing the question. These experiments reinforce my earlier warning that framing effects are a threat to personal autonomy (as are other cognitive miser tendencies). One of the implications of these experiments and those of McCaffery and Baron is that those who pose the questions—those who frame them—may have more control over your political and economic behavior than you do.
Latent here is the unsettling idea that people’s preferences come from the outside (from whoever has the power to shape the environment and determine how questions are phrased) rather than reflecting internal preferences rooted in their unique psychologies. Since most situations can be framed in more than one way, this means that rather than a person’s having stable preferences that are just elicited in different ways, the elicitation process itself can totally determine what the preference will be!
Professor of medicine Peter Ubel has studied how the overuse of the equality heuristic can lead to irrational framing effects in decisions about the allocation of scarce medical resources. Subjects were asked to allocate 100 usable livers to 200 children awaiting a transplant.4 When there were two groups of children, Group A with 100 children and Group B with 100 children, there was an overwhelming tendency to allocate 50 livers to each group. The equality heuristic seems reasonable here. Even though the nature of the groups was unspecified, it is reasonable to assume that Group A and Group B refer to different geographic areas, different hospitals, different sexes, different races, or some other demographic characteristic. However, in another condition of Ubel’s experiments with colleague George Loewenstein, the equality heuristic seems much more problematic. It was found that some subjects applied it when the groups referred to children having differing prognoses. Group A was a group of 100 children with an 80 percent average chance of surviving if transplanted, and Group B was a group of 100 children with only a 20 percent average chance of surviving if transplanted. More than one quarter of Ubel’s subjects nevertheless allocated the livers equally—50 livers to Group A and 50 to Group B. This decision results in the unnecessary deaths of 30 children (the 80 that would be saved if all 100 were allocated to Group A minus the 50 that will be saved if the equality heuristic is used).
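The arithmetic behind that parenthetical is easy to make explicit. Here is a minimal sketch in Python (the survival probabilities are the ones given in the scenario; the variable names are mine) of the expected number of survivors under the two allocations:

```python
# Expected number of children saved under the two allocations described above.
p_a, p_b = 0.80, 0.20   # average chance of surviving a transplant in Group A and Group B

equal_split = 50 * p_a + 50 * p_b   # 40 + 10 = 50 children expected to survive
all_to_group_a = 100 * p_a          # 80 children expected to survive

print(all_to_group_a - equal_split)  # 30.0 -- the unnecessary deaths under the 50/50 split
```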
Before condemning the equality heuristic, though, perhaps we should ask whether subjects had a rationale for it. Perhaps they viewed other principles as being at work here beyond sheer numbers saved. It turns out that many subjects did indeed have rationales for their 50/50 split. Common justifications for using the equality heuristic were that “even those with little chance deserve hope” and that “needy people deserve transplants, whatever their chance of survival.” We must wonder, though, whether such justifications represent reasoned thought or mere rationalizations for using the first heuristic that came to mind—the equality heuristic. Another condition in Ubel’s experimentation suggests the latter. Ubel notes that when the candidates for transplant were ranked from 1 to 200 in terms of prognoses (that is, listed as individuals rather than broken into groups), “people are relatively comfortable distributing organs to the top 100 patients . . . yet if the top 100 patients are called group 1 and the bottom 100 group 2, few people want to abandon group 2 entirely” (2000, p. 93). This finding makes it seem that the mere word “group” is triggering the equality heuristic in some subjects. The finding also suggests that the rationale “even those with little chance deserve hope” is actually a rationalization, because subjects do not tend to think of this rationale when the patients “with little chance” are not labeled as a “group.” Again, the trouble with heuristics is that they make our behavior, opinions, and attitudes subject to radical change based on how a problem is framed for us by others.
Now You Choose It—Now You Don’t: Research on Framing Effects
In discussing the mechanisms causing framing effects, Daniel Kahneman has stated that “the basic principle of framing is the passive acceptance of the formulation given” (2003a, p. 703). The frame presented to the subject is taken as focal, and all subsequent thought derives from it rather than from alternative framings because the latter would require more thought. Kahneman’s statement reveals framing effects as a consequence of cognitive miser tendencies, but it also suggests how to avoid such effects.
In laboratory experiments conducted on framing effects, when subjects are debriefed and the experiment is explained to them, they are often shown the alternative versions of the task. For example, in the tax example above they would be shown both the “reduction for children” and the “penalty for the childless” versions. It is almost uniformly the case that, after being debriefed, subjects recognize the equivalence of the two versions and also realize that it is a mistake (an incoherence in people’s political attitudes) to respond differently to the two versions simply because they have been framed differently. This finding suggests that what people need to learn to do is to think from more than one perspective—to learn to habitually re-frame things for themselves. The debriefing results show that once they do so, people will detect discrepancies in their responses to a problem posed from different perspectives and will take steps to resolve the discrepancies. People seem to recognize that consistency is an intellectual value. What they do not do, however, is habitually generate the perspective shifts that would highlight the inconsistencies in their thinking. Their failure to do so makes them subject to framing effects—a violation of descriptive invariance signaling a basic irrationality in people’s choice patterns.
In some of the earliest and most influential work on framing effects it was not surprising that subjects would acknowledge that the different versions of the problem were equivalent because the equivalence was very transparent once pointed out. One of the most compelling framing demonstrations is from the early work of Tversky and Kahneman.5 Give your own reaction to Decision 1:
Decision 1. Imagine that the United States is preparing for the outbreak of an unusual disease, which is expected to kill 600 people. Two alternative programs to combat the disease have been proposed. Assume that the exact scientific estimates of the consequences of the programs are as follows: If Program A is adopted, 200 people will be saved. If Program B is adopted, there is a one-third probability that 600 people will be saved and a two-thirds probability that no people will be saved. Which of the two programs would you favor, Program A or Program B?
Most people when given this problem prefer Program A—the one that saves 200 lives for sure. There is nothing wrong with this choice taken alone. It is only in connection with the responses to another problem that things really become strange. The experimental subjects (sometimes the same group, sometimes a different group—the effect obtains either way) are given an additional problem. Again, give your own immediate reaction to Decision 2:
Decision 2. Imagine that the United States is preparing for the outbreak of an unusual disease, which is expected to kill 600 people. Two alternative programs to combat the disease have been proposed. Assume that the exact scientific estimates of the consequences of the programs are as follows: If Program C is adopted, 400 people will die. If Program D is adopted, there is a one-third probability that nobody will die and a two-thirds probability that 600 people will die. Which of the two programs would you favor, Program C or Program D?
Most subjects when presented with Decision 2 prefer Program D. Thus, across the two problems, the most popular choices are Program A and Program D. The only problem here is that Decision 1 and Decision 2 are really the same decision—they are merely redescriptions of the same situation. Programs A and C are the same. That 400 will die in Program C implies that 200 will be saved—precisely the same number saved (200) in Program A. Likewise, the two-thirds chance that 600 will die in Program D is the same two-thirds chance that 600 will die (“no people will be saved”) in Program B. If you preferred Program A in Decision 1, you should have preferred Program C in Decision 2. But many subjects show inconsistent preferences—their choice switches depending on the phrasing of the question.
What this example shows is that subjects were risk averse in the context of gains and risk seeking in the context of losses. They found the sure gain of 200 lives attractive in Decision 1 over a gamble of equal expected value. In contrast, in Decision 2, the sure loss of 400 lives was unattractive compared with the gamble of equal expected value. Of course, the “sure loss” of 400 here that subjects found so unattractive is exactly the same outcome as the “sure gain” of 200 that subjects found so attractive in Decision 1! This is an example of a problem with very transparent equivalence. When presented with both versions of the problem together, most people agree that the problems are identical and that the alternative phrasing should not have made a difference. As I discussed above, such failures of descriptive invariance guarantee that a person cannot be a utility maximizer—that is, cannot be rational in the sense that cognitive scientists define that term.
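For readers who want to verify the equivalence numerically, the following is a small sketch in Python (using only the probabilities and outcomes stated in the two decision problems) that computes the expected number of lives saved under each program:

```python
# Each program is a list of (probability, lives_saved) outcomes out of the 600 at risk.
programs = {
    "A": [(1.0, 200)],                 # 200 saved for sure
    "B": [(1/3, 600), (2/3, 0)],       # one-third chance all saved, two-thirds chance none
    "C": [(1.0, 600 - 400)],           # "400 will die" means 200 saved
    "D": [(1/3, 600), (2/3, 0)],       # "nobody dies" = 600 saved; "600 die" = 0 saved
}

for name, outcomes in programs.items():
    expected_saved = sum(p * saved for p, saved in outcomes)
    print(name, expected_saved)        # every program works out to 200 expected lives saved
```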
A theory of why these framing effects occur was presented in the prospect theory of Kahneman and Tversky—the theory that in part led to the Nobel Prize in Economics for Kahneman in 2002. In the disease problem, subjects coded the outcomes in terms of contrasts from their current position—as gains and losses from a zero point (however that zero point was defined for them). This is one of the key assumptions of prospect theory. Another key assumption is that the utility function is steeper (in the negative direction) for losses than for gains.6 This is why people are often risk averse even for gambles with positive expected values. Would you flip a coin with me—heads you give me $500, tails I give you $525? Most people refuse such favorable bets because the potential loss, although smaller than the potential gain, looms larger psychologically.
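To make the asymmetry concrete, here is a rough sketch of a prospect-theory value function in Python. The curvature and loss-aversion parameters (0.88 and 2.25) are the estimates commonly cited from Tversky and Kahneman's later work, used here only for illustration; they are not figures given in this chapter:

```python
ALPHA = 0.88    # curvature of the value function (diminishing sensitivity)
LAMBDA = 2.25   # loss aversion: losses weigh roughly twice as much as gains

def value(x):
    """Subjective value of a gain or loss x relative to the reference point."""
    return x ** ALPHA if x >= 0 else -LAMBDA * (-x) ** ALPHA

# The coin flip from the text: heads you lose $500, tails you win $525.
prospect = 0.5 * value(525) + 0.5 * value(-500)
print(prospect)  # negative, so the bet is declined despite its positive expected monetary value
```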
Consider a series of studies by Nicholas Epley and colleagues in which subjects were greeted at the laboratory and given a $50 check.7 During the explanation of why they were receiving the check, one group of subjects heard the check described as a “bonus” and another group heard it described as a “tuition rebate.” Epley and colleagues conjectured that the bonus would be mentally coded as a positive change from the status quo, whereas the rebate would be coded as a return to a previous wealth state. They thought that the bonus framing would lead to more immediate spending than the rebate framing, because spending from the status quo is more easily coded as a relative loss. This is exactly what happened. In one experiment, when the subjects were contacted one week later, the bonus group had spent more of the money. In another experiment, subjects were allowed to buy items from the university bookstore (including snack foods) at a good discount. Again, the subjects from the bonus group spent more in the laboratory discount store.
Epley, a professor in the University of Chicago’s Graduate School of Business, demonstrated the relevance of these findings in an op-ed piece in the New York Times of January 31, 2008. Subsequent to the subprime mortgage crisis of 2007–2008, Congress and the president were considering mechanisms to stimulate a faltering economy. Tax rebates were being considered in order to get people spending more (such tax rebates had been used in 2001, also as a stimulative mechanism). Epley pointed out in his op-ed piece that if the goal was to get people to spend their checks, then the money would be best labeled tax bonuses rather than tax rebates. The term rebates implies that money that is yours is being returned—that you are being restored to some status quo. Prospect theory predicts that you will be less likely to spend from the status quo position. However, describing the check as a tax bonus suggests that this money is “extra”—an increase from the status quo. People will be much more likely to spend such a “bonus.” Studies of the 2001 program indicated that only 28 percent of the money was spent, a low rate in part caused by its unfortunate description as a “rebate.”
Epley’s point illustrates that policy analysts need to become more familiar with framing issues. Advertisers, in contrast, are extremely knowledgeable about the importance of framing. You can bet that a product will be advertised as “95% fat free” rather than “contains 5% fat.” The providers of frames well know their value. The issue is whether you, the consumer of frames, will come to understand their importance and thus transform yourself into a more autonomous decision maker.
Economist Richard Thaler has described how years ago the credit card industry lobbied intensely for any differential charges between credit cards and cash to be labeled as a discount for using cash rather than a surcharge for using the credit card.8 They were implicitly aware that any surcharge would be psychologically coded as a loss and weighted highly in negative utility. The discount, in contrast, would be coded as a gain. Because the utility function is shallower for gains than for losses, forgoing the discount would be psychologically easier than accepting the surcharge. Of course, the two represent exactly the same economic consequence. The industry, merely by getting people to accept the higher price as normal, framed the issue so that credit card charges were more acceptable to people.
The fact that human choices are so easily altered by framing has potent social implications as well. James Friedrich and colleagues describe a study of attitudes toward affirmative action in university admissions.9 Two groups of subjects were given statistical information about the effect of eliminating affirmative action and adopting a race-neutral admissions policy at several universities. The statistics were real ones and they were accurate. One group of subjects, the percentage group, received the information that under race-neutral admissions the probability of a black student being admitted would decline from 42 percent to 13 percent and that the probability of a white student being admitted would rise from 25 percent to 27 percent. The other group, the frequency group, received the information that under race-neutral admissions, the number of black students being admitted would drop by 725 and the number of white students being admitted would increase by 725. The statistics given to the two groups were mathematically equivalent—the results of the very same policy simply expressed in different ways (they are different framings). The difference in the pairs of percentages in the percentage condition (a 29 percentage-point decrease for black students versus a 2 percentage-point increase for white students) follows from the fact that many more applicants to the institutions were white.
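To see how the same 725 students can show up as a 29-point swing on one side and a 2-point swing on the other, it helps to work through the arithmetic with hypothetical applicant pools. The pool sizes below are not reported in the passage; they are chosen only so that the numbers come out as described:

```python
shift = 725                 # admitted students affected, per the frequency framing

# Hypothetical applicant pool sizes (assumed for illustration, not taken from the study)
black_applicants = 2_500    # 725 / 2,500  = 29 percentage points: 42% falls to 13%
white_applicants = 36_250   # 725 / 36,250 = 2 percentage points: 25% rises to 27%

print(shift / black_applicants)   # 0.29
print(shift / white_applicants)   # 0.02
```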
Support for affirmative action was much higher in the percentage condition than in the frequency condition. In the percentage condition, the damage to black students from a race-neutral policy (42 percent admitted decreasing to only 13 percent admitted) seems extremely large compared to the benefit that white students would receive (an increase from 25 percent to only 27 percent admitted). The frequency condition, in contrast, highlights the fact that, on a one-to-one basis, each extra black student admitted under affirmative action means that a white student is denied admission. Both conditions simply represent different perspectives on exactly the same set of facts, but which perspective is adopted strongly affects opinions on this policy choice.
Many political disagreements are largely about alternative framings of an issue because all parties often know that whoever is able to frame the issue has virtually won the point without a debate even taking place. What many reformers are trying to do is to illustrate that the conventional wisdom is often just a default framing that everyone has come to accept. Cognitive linguist George Lakoff has done several well-known analyses of the framing inherent in political terminology. He has drawn attention to the disciplined consistency with which, early in his first term, George W. Bush’s White House operatives used the term tax relief. Lakoff pointed out how once this term becomes accepted, the debate about the level of taxation is virtually over. Start first with the term relief. Lakoff notes that “for there to be relief there must be an affliction, an afflicted party, and a reliever who removes the affliction and is therefore a hero. And if people try to stop the hero, those people are villains for trying to prevent relief. When the word tax is added to relief, the result is a metaphor: Taxation is an affliction. And the person who takes it away is a hero, and anyone who tries to stop him is a bad guy” (Lakoff, 2004, pp. 3–4). Of course, a well-known example is the inheritance tax, with Democrats preferring the term estate tax (most people do not view themselves as possessing “estates”) and Republicans preferring the term death tax (which implies—incorrectly—that everyone is taxed at death).