How Change Happens


by Cass R. Sunstein


  But now consider the second component of the problem, in which the same situation is given but is followed by this description of the alternative programs:

  If Program C is adopted, four hundred people will die.

  If Program D is adopted, there is a one-third probability that nobody will die and a two-thirds probability that six hundred people will die.

  Most people choose Program D. But a moment’s reflection should be sufficient to show that Program A and Program C are identical, and so too for Program B and Program D. These are merely different descriptions of the same programs. The purely semantic shift in framing is sufficient to produce different outcomes. Apparently people’s moral judgments about appropriate programs depend on whether the results are described in terms of “lives saved” or “lives lost.” What accounts for the difference? The most sensible answer begins with the fact that human beings are pervasively averse to losses (hence the robust cognitive finding of loss aversion).24 With respect to either self-interested gambles or fundamental moral judgments, loss aversion plays a large role in people’s decisions. But what counts as a gain or a loss depends on the baseline from which measurements are made. Purely semantic reframing can alter the baseline and hence alter moral intuitions (many examples involve fairness).25
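  A quick check of the arithmetic makes the equivalence explicit. The sketch below assumes the standard statement of the problem, in which six hundred people are at risk and Program A saves two hundred of them for certain; it is an illustration, not data from the studies cited.

```python
# Expected deaths under each program, assuming the standard statement
# of the Asian disease problem: 600 people at risk.
AT_RISK = 600

program_a = AT_RISK - 200                   # 200 saved for certain -> 400 die
program_b = (1/3) * 0 + (2/3) * AT_RISK     # 1/3 chance all saved, 2/3 chance none
program_c = 400                             # 400 die for certain
program_d = (1/3) * 0 + (2/3) * AT_RISK     # 1/3 chance nobody dies, 2/3 chance 600 die

print(program_a, program_c)  # 400 400     -> A and C are the same program
print(program_b, program_d)  # 400.0 400.0 -> B and D are the same program
```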

  This finding is usually taken to show a problem for standard accounts of rationality. But it has been argued that subjects are rationally responding to the information provided, or “leaked,” by the speaker’s choice of frame.26 Certainly the speaker’s choice might offer a clue about the desired response; some subjects in the Asian disease problem might be responding to that clue. But even if people are generally taking account of the speaker’s clues,27 that claim is consistent with the proposition that frames matter a great deal to moral intuitions, which is all I am stressing here.

  Moral framing has been demonstrated in the important context of obligations to future generations,28 a much-disputed question of morality, politics, and law.29 To say the least, the appropriate discount rate for those yet to be born is not a question that most people have pondered, and hence their judgments are highly susceptible to different frames. From a series of surveys, Maureen Cropper and her coauthors suggest that people are indifferent between saving one life today and saving forty-five lives in one hundred years.30 They base this suggestion on responses to a question asking people whether they would choose a program that saves one hundred lives now or a program that saves a substantially larger number one hundred years from now. It is possible, however, that people’s responses depend on uncertainty about whether people in the future will otherwise die (perhaps technological improvements will save them?), and other ways of framing the same problem yield radically different results.31 For example, most people consider a single death from pollution next year and a single death from pollution in one hundred years “equally bad.” This finding implies no preference for members of the current generation. The simplest conclusion is that people’s moral judgments about obligations to future generations are very much a product of framing effects.32
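  For a rough sense of what such indifference implies, one can back out the annual discount rate at which one life now is equivalent to forty-five lives in a century. The calculation below is a sketch, not a figure reported in the surveys themselves.

```python
# Implied annual discount rate r such that one life today equals
# 45 lives in 100 years: (1 + r) ** 100 = 45.
r = 45 ** (1 / 100) - 1
print(f"{r:.4f}")  # ~0.0388, i.e., roughly 3.9 percent per year
```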

  The same point holds for the question whether government should consider not only the number of lives but also the number of “life years” saved by regulatory interventions. If the government focuses on life years, a program that saves children will be worth far more attention than a similar program that saves senior citizens. Is this immoral? People’s intuitions depend on how the question is framed.33 People will predictably reject an approach that would count every old person as worth “significantly less” than every young person. But if people are asked whether they would favor a policy that saves 105 old people or 100 young people, many will favor the latter in a way that suggests a willingness to pay considerable attention to the number of life years at stake.
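  A back-of-the-envelope comparison shows how a life-years metric can diverge from a lives metric. The remaining life expectancies below are hypothetical round numbers chosen for illustration, not figures from the studies cited.

```python
# Hypothetical remaining life expectancies (illustrative assumptions only).
old_people, years_left_old = 105, 10
young_people, years_left_young = 100, 60

lives_saved = {"old": old_people, "young": young_people}
life_years_saved = {
    "old": old_people * years_left_old,        # 1,050 life years
    "young": young_people * years_left_young,  # 6,000 life years
}

# A "lives" metric favors saving the 105 old people; a "life years"
# metric favors the 100 young people by a wide margin.
print(lives_saved)
print(life_years_saved)
```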

  At least for unfamiliar questions of morality, politics, and law, people’s intuitions are very much affected by framing. Above all, it is effective to frame certain consequences as “losses” from a status quo; when so framed, moral concern becomes significantly elevated. It is for this reason that political actors often describe one or another proposal as “turning back the clock” on some social advance. The problem is that for many social changes, the framing does not reflect social reality but is simply a verbal manipulation.

  Let’s now turn to examples that are more controversial.

  Moral Heuristics: A Catalogue

  My principal interest here is the relationship between moral heuristics and questions of law and policy. I separate the relevant heuristics into four categories: those that involve morality and risk regulation; those that involve punishment; those that involve “playing God,” particularly in the domains of reproduction and sex; and those that involve the act-omission distinction. The catalogue is meant to be illustrative rather than exhaustive.

  Morality and Risk Regulation

  Cost-Benefit Analysis

  An automobile company is deciding whether to take certain safety precautions for its cars. It conducts a cost-benefit analysis in which it concludes that certain precautions are not justified—because, say, they would cost $100 million and save only four lives, and because the company has a “ceiling” of $10 million per life saved (a ceiling that is, by the way, a bit higher than the amount the United States Environmental Protection Agency uses for a statistical life). How will ordinary people react to this decision? The answer is that they will not react favorably.34 In fact they tend to punish companies that impose mortality risks on people after doing cost-benefit analysis, even if a high valuation is placed on human life.
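  On the figures given, the company’s conclusion follows directly; a minimal sketch of the arithmetic:

```python
# Cost per statistical life saved, using the figures in the example.
cost = 100_000_000        # $100 million for the precautions
lives_saved = 4
ceiling = 10_000_000      # the company's $10 million ceiling per life

cost_per_life = cost / lives_saved  # $25 million per life
print(cost_per_life > ceiling)      # True -> the precautions fail the company's test
```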

  By contrast, they impose less severe punishment on companies that are willing to impose a “risk” on people but that do not produce a formal risk analysis that measures lives lost and dollars and trades one against the other.35 The oddity here is that under tort law, it is unclear that a company should be liable at all if it has acted on the basis of a competent cost-benefit analysis; such an analysis might even insulate a company from a claim of negligence. What underlies people’s moral judgments, which are replicated in actual jury decisions?36

  It is possible that when people disapprove of trading money for lives, they are generalizing from a set of moral principles that are generally sound, and even useful, but that work poorly in some cases. Consider the following moral principle: do not knowingly cause a human death. In ordinary life, you should not engage in conduct with the knowledge that several people will die as a result. If you are playing a sport or working on your yard, you ought not to continue if you believe that your actions will kill others. Invoking that idea, people disapprove of companies that fail to improve safety when they are fully aware that deaths will result. By contrast, people do not disapprove of those who fail to improve safety while believing that there is a risk but appearing not to know, for certain, that deaths will ensue. When people object to risky action taken after cost-benefit analysis, it seems to be partly because that very analysis puts the number of expected deaths squarely “on screen.”37

  Companies that fail to do such analysis but that are aware that a risk exists do not make clear, to themselves or to anyone else, that they caused deaths with full knowledge that this was what they were going to do. People disapprove, above all, of companies that cause death knowingly. There may be a kind of “cold-heart heuristic” here: those who know that they will cause a death, and do so anyway, are regarded as cold-hearted monsters.38 In this view, critics of cost-benefit analysis should be seen as appealing to System 1 and as speaking directly to the homunculus: “Is a corporation or public agency that endangers us to be pardoned for its sins once it has spent $6.1 million per statistical life on risk reduction?”39

  Note that it is easy to reframe a probability as a certainty and vice versa; if I am correct, the reframing is likely to have large effects. Consider two cases:

  1. Company A knows that its product will kill ten people. It markets the product to its ten million customers with that knowledge. The cost of eliminating the risk would have been $100 million.

  2. Company B knows that its product creates a one in one million risk of death. Its product is used by ten million people. The cost of eliminating the risk would have been $100 million.

  I have not collected data, but I am willing to predict that Company A would be punished more severely than Company B, even though there is no difference between the two.
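  The statistical equivalence of the two cases is a matter of simple multiplication; a sketch using the numbers given:

```python
# Expected deaths are identical in the two cases.
company_a_deaths = 10                            # stated outright
company_b_deaths = (1 / 1_000_000) * 10_000_000  # per-user risk times number of users
print(company_a_deaths, company_b_deaths)        # 10 10.0 -> no difference between them
```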

  I suggest, then, that a moral heuristic is at work, one that imposes moral condemnation on those who knowingly engage in acts that will result in human deaths. And this heuristic does a great deal of good. The problem is that it is not always unacceptable to cause death knowingly, at least if the deaths are relatively few and an unintended byproduct of generally desirable activity. When government allows new highways to be built, it knows that people will die on those highways; when government allows new coal-fired power plants to be built, it knows that some people will die from the resulting pollution; when companies produce tobacco products, and when government does not ban those products, hundreds of thousands of people will die; the same is true for alcohol. Of course it would make sense, in all of these domains, to take extra steps to reduce risks. But that proposition does not support the implausible claim that we should disapprove, from the moral point of view, of any action taken when deaths are foreseeable.

  There is a complementary possibility, involving the confusion between the ex ante and ex post perspective. If a life might have been saved by a fifty-dollar expenditure on a car, people are going to be outraged, and they will impose punishment. What they will not see or incorporate is the fact, easily perceived ex ante, that the fifty-dollar-per-car expenditure would have been wasted on millions of other people. It is hardly clear that the ex ante perspective is always preferable. But something has gone badly wrong if the ex post perspective leads people to neglect the trade-offs that are involved.
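  A hypothetical fleet makes the ex ante trade-off vivid. The fleet size and the single prevented death below are assumptions chosen for illustration, not figures from the text.

```python
# Hypothetical: a $50-per-car precaution across a fleet of 10 million
# cars that would have prevented one death (illustrative assumptions).
cost_per_car = 50
fleet_size = 10_000_000
deaths_prevented = 1

total_cost = cost_per_car * fleet_size          # $500 million
cost_per_life = total_cost / deaths_prevented   # $500 million per life saved
print(total_cost, cost_per_life)
# Ex post, a jury sees one life and fifty dollars; ex ante, the same
# expenditure is spread across millions of cars that never crash.
```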

  I believe that it is impossible to vindicate, in principle, the widespread social antipathy to cost-benefit balancing. But here too, “a little homunculus in my head continues to jump up and down, shouting at me” that corporate cost-benefit analysis, trading dollars for a known number of deaths, is morally unacceptable. The voice of the homunculus, I am suggesting, is not reflective, but instead a product of System 1 and a crude but quite tenacious moral heuristic.

  Emissions Trading

  In the last decades, those involved in enacting and implementing environmental law have experimented with systems of “emissions trading.”40 In those systems, polluters are typically given a license to pollute a certain amount, and the licenses can be traded on the market. The advantage of emissions-trading systems is that if they work well, they will ensure emissions reductions at the lowest possible cost.
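  A stylized two-firm example shows why trading lowers the total cost of a given reduction. The firms, abatement costs, and required cut below are hypothetical.

```python
# Two hypothetical firms must jointly cut 100 tons of emissions.
# Firm X abates at $100 per ton; Firm Y abates at $300 per ton.
required_cut = 100
cost_x, cost_y = 100, 300

# Uniform mandate: each firm cuts 50 tons.
uniform_cost = 50 * cost_x + 50 * cost_y   # $20,000

# Trading: Y buys permits from X at any price between $100 and $300,
# so the low-cost abater does all of the cutting.
trading_cost = required_cut * cost_x       # $10,000

print(uniform_cost, trading_cost)  # same total reduction, half the cost
```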

  Is emissions trading immoral? Many people believe so. (See Chapter 3.) Political theorist Michael Sandel, for example, urges that trading systems “undermine the ethic we should be trying to foster on the environment.”41 Sandel contends: “Turning pollution into a commodity to be bought and sold removes the moral stigma that is properly associated with it. If a company or a country is fined for spewing excessive pollutants into the air, the community conveys its judgment that the polluter has done something wrong. A fee, on the other hand, makes pollution just another cost of doing business, like wages, benefits and rent.”

  In the same vein, Sandel objects to proposals to open carpool lanes to drivers without passengers who are willing to pay a fee. Here, as in the environmental context, it seems unacceptable to permit people to do something that is morally wrong so long as they are willing to pay for the privilege.

  I suggest that like other critics of emissions-trading programs, Sandel is using a moral heuristic; in fact, he has been fooled by his homunculus. The heuristic is this: people should not be permitted to engage in moral wrongdoing for a fee. You are not allowed to assault someone so long as you are willing to pay for the right to do so; there are no tradable licenses for rape, theft, or battery. The reason is that the appropriate level of these forms of wrongdoing is zero (putting to one side the fact that enforcement resources are limited; if they were unlimited, we would want to eliminate, not merely reduce, these forms of illegality). But pollution is an altogether different matter. At least some level of pollution is a byproduct of desirable social activities and products, including automobiles and power plants.

  Certain acts of pollution, including those that violate the law or are unconnected with desirable activities, are morally wrong—but the same cannot be said of pollution as such. When Sandel objects to emissions trading, he is treating pollution as equivalent to a crime in a way that overgeneralizes a moral intuition that makes sense in other contexts. There is no moral problem with emissions trading as such. The insistent objection to emissions-trading systems stems from a moral heuristic.

  Unfortunately, that objection has appeared compelling to many people, so much so that it has delayed and reduced the use of a pollution-reduction tool that is, in many contexts, the best available. Here, then, is a case in which a moral heuristic has led to political blunders, in the form of policies that impose high costs for no real gain.

  Betrayals

  To say the least, people do not like to be betrayed. A betrayal of trust is likely to produce a great deal of outrage. If a babysitter neglects a child or if a security guard steals from his employer, people will be angrier than if the identical acts are performed by someone in whom trust has not been reposed. So far, perhaps, so good: When trust is betrayed, the damage is worse than when an otherwise identical act has been committed by someone who was not a beneficiary of trust. And it should not be surprising that people will favor greater punishment for betrayals than for otherwise identical crimes.42

  Perhaps the disparity can be justified on the ground that the betrayal of trust is an independent harm, one that warrants greater deterrence and retribution—a point that draws strength from the fact that trust, once lost, is not easily regained. A family robbed by its babysitter might well be more seriously injured than a family robbed by a thief. The loss of money is compounded and possibly dwarfed by the violation of a trusting relationship. The consequence of the violation might also be more serious. Will the family ever feel entirely comfortable with babysitters? It is bad to have an unfaithful spouse, but it is even worse if the infidelity occurred with your best friend, because that kind of infidelity makes it harder to have trusting relationships with friends in the future.

  In this light it is possible to understand why betrayals produce special moral opprobrium and (where the law has been violated) increased punishment. But consider a finding that is much harder to explain: people are especially averse to risks of death that come from products (like airbags) designed to promote safety.43 They are so averse to “betrayal risks” that they would actually prefer to face higher risks that do not involve betrayal. The relevant study involved two principal conditions. In the first, people were asked to choose between two equally priced cars, car A and car B. According to crash tests, there was a 2 percent chance that drivers of car A, with airbag A, would die in serious accidents as a result of the impact of the crash. With car B, and airbag B, there was a 1 percent chance of death, but also an additional one in ten thousand (0.01 percent) chance of death as a result of deployment of the airbag. Similar studies involved vaccines and smoke alarms.

  The result was that most participants (over two-thirds) chose the higher-risk safety option when the less risky one carried a “betrayal risk.” A control condition demonstrated that people were not confused about the numbers: when asked to choose between a 2 percent risk and a 1.01 percent risk, people selected the 1.01 percent risk so long as betrayal was not involved. In other words, people’s aversion to betrayals is so great that they will increase their own risks rather than subject themselves to a (small) hazard that comes from a device that is supposed to increase safety. “Apparently, people are willing to incur greater risks of the very harm they seek protection from to avoid the mere possibility of betrayal.”44 Remarkably, “betrayal risks appear to be so psychologically intolerable that people are willing to double their risk of death from automobile crashes, fires, and diseases to avoid incurring a small possibility of death by safety device betrayal.”45
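  The control condition turns on a simple comparison of total risks; a sketch of the arithmetic in the car choice:

```python
# Total fatality risk for each car in the airbag study.
car_a_risk = 0.02               # 2 percent chance of death in a serious crash
car_b_risk = 0.01 + 0.0001      # 1 percent crash risk plus 0.01 percent airbag risk

print(car_a_risk, car_b_risk)   # 0.02 vs 0.0101 -> car B is safer overall
print(car_a_risk / car_b_risk)  # ~1.98: choosing car A nearly doubles the risk
```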

  What explains this seemingly bizarre and self-destructive preference? I suggest that a heuristic is at work: punish, and do not reward, betrayals of trust. The heuristic generally works well. But it misfires in some cases, as when those who deploy it end up increasing the risks they themselves face. An airbag is not a security guard or a babysitter, endangering those whom they have been hired to protect. It is a product, to be chosen if and only if it decreases aggregate risks. If an airbag makes people safer on balance, it should be used, even if in a tiny percentage of cases it will create a risk that would not otherwise exist. People’s unwillingness to subject themselves to betrayal risks, in circumstances in which products are involved and they are increasing their likelihood of death, is the moral cousin to the use of the representativeness heuristic in the Linda case. Both stem from a generally sound rule of thumb that leads to systematic errors.

  In a sense, the special antipathy to betrayal risks might be seen to involve not a moral heuristic but a taste. In choosing products, people are not making pure moral judgments; they are choosing what they like best, and it just turns out that a moral judgment, involving antipathy to betrayals, is part of what they like best. It would be useful to design a purer test of moral judgments, one that would ask people not about their own safety but about that of others—for example, whether people are averse to betrayal risks when they are purchasing safety devices for their friends or family members. There is every reason to expect that it would produce substantially identical results to those in the experiments just described.

 
