How Change Happens


by Cass R. Sunstein


  2. Joan is a student at a large university. She drinks a lot of alcohol. She enjoys it, but not that much, and she is worried that her drinking is impairing her performance and her health. She says that she would like to scale back, but for reasons that she does not entirely understand, she has found it difficult to do so. Her university recently embarked on an educational campaign to reduce drinking on campus, in which it (accurately) notes that four out of five students drink only twice a month or less. Informed of the social norm, Joan finally resolves to cut back her drinking. She does, and she is glad.

  In these cases, the chooser suffers from a self-control problem and is fully aware of that fact. Ted and Joan can be seen both as planners, with second-order preferences, and as doers, with first-order preferences. A nudge helps to strengthen the hand of the planner. It is possible to raise interesting philosophical and economic questions about self-control and planner-doer models, but insofar as Ted and Joan welcome the relevant nudges, and do so ex ante as well as ex post, the AJBT criterion is met. In a sense, self-control problems require their own GPS devices and so can be seen to involve navigability; people want to exercise self-control, but they are not sure how to do so. Nudges can help them. But for choosers who face such problems, the underlying challenge is qualitatively distinctive, and they recognize that fact.

  In such cases, the AJBT criterion is met. But do people acknowledge that they face a self-control problem? That is an empirical question, of course, and my own preliminary research suggests that the answer is “yes.” On Amazon Mechanical Turk, I asked about two hundred people this question:

  Many people believe that they have an issue, whether large or small, of self-control. They may eat too much, they may smoke, they may drink too much, they may not save enough money. Do you believe that you have any issue of self-control?

  A whopping 70 percent said that they did (55 percent said “somewhat agree,” while 15 percent said “strongly agree”). Only 22 percent disagreed. (Eight percent were neutral.)

  This is a preliminary test, and admittedly, the question might have contained its own nudge (“Many people believe that they have an issue …”). Whatever majorities say, the cases of Ted and Joan capture a lot of the territory of human life, as reflected in the immense popularity of programs designed to help people to combat addiction to tobacco and alcohol. We should agree that nudges that do the work of such programs, or that are used in such programs, are likely to satisfy the AJBT criterion.

  There are harder cases. In some of them, it is not clear if people have antecedent preferences at all. In others—as in the case of Jonathan, who talked on his cell phone while driving—their ex post preferences are an artifact of, or constructed by, the nudge. Sometimes these two factors are combined (as marketers are well aware). As Amos Tversky and Richard Thaler put it long ago, “values or preferences are commonly constructed in the process of elicitation.”5 If so, how ought the AJBT criterion be understood and applied? For example:

  1. George cares about the environment, but he also cares about money. He currently receives his electricity from coal; he knows that coal is not exactly good for the environment, but it is cheap, and he does not bother to switch to wind, which would be slightly more expensive. He is quite content with the current situation. Last month, his government imposed an automatic enrollment rule on electricity providers: people will receive energy from wind, and pay a slight premium, unless they choose to switch. George does not bother to switch. He says that he likes the current situation of automatic enrollment. He approves of the policy and he approves of his own enrollment.

  2. Mary is automatically enrolled in a bronze-level health care plan. It is less expensive than silver and gold levels, but it is also less comprehensive in its coverage and it has a higher deductible. Mary prefers bronze and has no interest in switching. In a parallel world (a lot like ours, but not quite identical), Mary is automatically enrolled in a gold-level health care plan. It is more expensive than silver and bronze, but it is also more comprehensive in its coverage and it has a lower deductible. Mary prefers gold and has no interest in switching.

  3. Thomas has a serious illness and questions whether he should have an operation, which carries both potential benefits and potential risks. Reading about the operation online, Thomas is not sure whether he should go ahead with it. Thomas’s doctor advises him to have the operation, emphasizing how much he has to lose if he does not. He decides to follow this advice. In a parallel world (a lot like ours, but not quite identical), Thomas’s doctor advises him not to have the operation, emphasizing how much he has to lose if he does. He decides to follow this advice.

  In the latter two cases, Mary and Thomas appear to lack an antecedent preference; what they prefer is an artifact of the default rule (in the case of Mary) or the framing (in the case of Thomas). George’s case is less clear, because he might be taken to have an antecedent preference in favor of green energy, but we could easily understand the narrative to mean that his preference, like that of Mary and Thomas, is partly a product of the default rule.

  These are the situations on which I am now focusing: People lack an antecedent preference, and what they like is a product of the nudge. Their preference is constructed by it. After being nudged, they will be happy and possibly grateful. We have also seen that even if people have an antecedent preference, the nudge might change it so that they will be happy and possibly grateful even if they did not want to be nudged in advance.

  In all of these cases, application of the AJBT criterion is less simple. Choice architects cannot contend that they are merely vindicating choosers’ ex ante preferences. If we look ex post, people do think that they are better off, and in that sense the criterion is met. For use of the AJBT criterion, the challenge is that however Mary and Thomas are nudged, they will agree that they are better off. In my view, there is no escaping at least some kind of welfarist analysis in choosing between the two worlds in the cases of Mary and Thomas. We need to ask what kind of approach makes people’s lives better. There is a large question about which nudge to choose in such cases. Nonetheless, the AJBT criterion remains relevant in the sense that it constrains what choice architects can do, even if it does not specify a unique outcome (as it does in cases in which people have clear ex ante preferences and in which the nudge does not alter them).

  The AJBT criterion is emphatically not designed to defeat a charge of paternalism. It is often psychologically plausible to think that choosers have antecedent preferences (whether or not “latent”) but that, because of a lack of information or a behavioral bias, their choices will not satisfy them. (Recall the cases of Luke, Meredith, and Edna.) To be sure, it is imaginable that some forms of choice architecture will affect people who have information or who lack such biases; a reasonable cafeteria visitor might grab the first item she sees because she is busy and because it is not worth it to her to decide which item to choose. Consider this case:

  Regan enjoys her employer’s cafeteria. She tends to eat high-calorie meals, but she knows that, and she likes them a lot. Her employer recently redesigned the cafeteria so that salads and fruits are the most visible and accessible. She now chooses salad and fruit, and she likes them a lot.

  By stipulation, Regan suffers from no behavioral bias, but she is affected by the nudge. But in many (standard) cases, behaviorally biased or uninformed choosers will be affected by a nudge, and less biased and informed choosers will not; a developing body of literature explores how to proceed in such cases, with careful reference to what seems to me a version of the AJBT criterion.6

  In Regan’s case, and all those like it, the criterion does not leave choice architects at sea: if she did not like the salad, the criterion would be violated. From the normative standpoint, it may not be entirely comforting to say that nudges satisfy the AJBT criterion if choice architects succeed in altering the preferences of those whom they are targeting. (Is that a road to serfdom? Recall the chilling last lines of Orwell’s 1984: “He had won the victory over himself. He loved Big Brother.”) But insofar as we are concerned with subjective welfare, it is a highly relevant question whether choosers believe, ex post, that the nudge has produced an outcome of which they approve.

  Countless nudges increase navigability, writ large, in the sense that they enable people to get where they want to go and therefore enable them to satisfy their antecedent preferences. Many other nudges, helping to overcome self-control problems, are warmly welcomed by choosers and so are consistent with the AJBT criterion. Numerous people acknowledge that they suffer from such problems. When people lack antecedent preferences or when those preferences are not firm, and when a nudge constructs or alters their preferences, the AJBT criterion is more difficult to operationalize, and it may not lead to a unique solution. But it restricts the universe of candidate solutions, and in that sense helps to orient choice architects.

  Notes

  1. Richard H. Thaler & Cass R. Sunstein, Nudge: Improving Decisions about Health, Wealth, and Happiness 5 (2008); italics in original.

  2. Robert Sugden, Do People Really Want to Be Nudged toward Healthy Lifestyles?, 64 Int’l Rev. Econ. 113 (2017).

  3. For data from various sources, see Cass R. Sunstein & Lucia A. Reisch, Trusting Nudges: An International Survey (forthcoming 2019); Janice Y. Jung & Barbara A. Mellers, American Attitudes toward Nudges, 11 Judgment and Decision Making 62 (2016); Cass R. Sunstein, The Ethics of Influence (2016); Lucia A. Reisch & Cass R. Sunstein, Do Europeans Like Nudges? 11 Judgment and Decision Making 310 (2016); Cass R. Sunstein, Lucia A. Reisch, & Julius Rauber, Behavioral Insights All Over the World? Public Attitudes toward Nudging in a Multi-Country Study, https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2921217.

  4. Jung & Mellers, supra note 3.

  5. Amos Tversky & Richard H. Thaler, Anomalies: Preference Reversals, 4 Journal of Economic Perspectives 201, 211 (1990).

  6. Jacob Goldin, Which Way to Nudge? Uncovering Preferences in the Behavioral Age, 125 Yale L.J. 226 (2015); Jacob Goldin & Nicholas Lawson, Defaults, Mandates, and Taxes: Policy Design with Active and Passive Decision-Makers, 18 Am. L. & Econ. Rev. 438 (2016).

  7

  Nudges That Fail

  No one should deny that some nudges are ineffective or counterproductive. For example, information disclosure might have little impact—certainly if it is too complicated for people to understand and sometimes even if it is simple and clear. If people are told about the number of calories in candy bars, they might not learn anything they do not already know, and even if they learn something, they might be unaffected. A reminder might fall on deaf ears; a warning might be ignored (or even make the target action more attractive). In some cases, a plausible (and abstractly correct) understanding of what drives human behavior turns out to be wrong in a particular context; once a nudge is tested, it turns out to have little or no impact.

  In the terms made famous by Albert Hirschman, nudging might therefore be futile,1 or at least close to it. Alternatively, its effects might also be perverse, in the sense that they might be the opposite of what is intended—as, for example, when calorie labels increase caloric intake. To complete Hirschman’s trilogy, nudges may also jeopardize other important goals, as when a nudge designed to reduce pollution ends up increasing energy costs for the most disadvantaged members of society. Hirschman’s main goal was to explore “the rhetoric of reaction”—the rhetorical arguments of those who seek to defend the status quo—not to suggest that futility, perversity, and jeopardy are inevitable or even likely. On the contrary, he saw them as predictable rhetorical moves, sometimes offered in bad faith. But there is no question that public-spirited reforms, including nudges, often run into each of the three objections. Futility, perversity, and jeopardy may reflect reality rather than rhetoric.

  Of all of the tools in the choice architect’s repertoire, default rules may be the most promising; they are almost certainly the most discussed. But sometimes they do very little, or at least far less than anticipated.2 My principal goal here is to identify two reasons why this might be so. The first involves strong contrary preferences on the part of the choosers, which cause them to opt out. The second involves counternudges, in the form of compensating behavior on the part of those whose economic interests are at stake. To make a long story short, institutions may be able to move choosers in their preferred directions (often with the assistance of behavioral insights). As we shall see, these two explanations help account for the potential ineffectiveness of many other nudges as well, not only default rules.

  It is a useful simplification to assume that in deciding whether to depart from default rules or to reject nudges of any kind, choosers consider two factors: the costs of decisions and the costs of errors. When it is not especially costly to decide to reject a nudge, and when choosers believe that doing so will reduce significant error costs, a nudge will be ineffective. We shall also see, though more briefly, other reasons that nudges might be ineffective; of these, the most important is the choice architect’s use of a plausible (but ultimately mistaken) hypothesis about how choice architecture affects behavior.
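  The two-factor account in this paragraph can be expressed as a toy model. The sketch below is purely illustrative: the function name, the cost units, and every number in it are assumptions chosen for exposition, not anything drawn from the text or from empirical data. It simply encodes the stated rule that a chooser rejects a nudge when the expected error cost of going along with it exceeds the cost of deciding to reject it.

```python
# Toy formalization of the decision-cost / error-cost account of nudge
# failure. A chooser opts out of a default (rejects a nudge) only when
# the perceived error cost of staying exceeds the cost of deciding to
# leave. All values are illustrative assumptions.

def opts_out(decision_cost: float, expected_error_cost: float) -> bool:
    """Reject the nudge only if doing so is worth the effort."""
    return expected_error_cost > decision_cost

# A hypothetical population: (decision cost, perceived error cost of default)
choosers = [
    (1.0, 0.2),  # near-indifferent chooser: the default sticks
    (1.0, 5.0),  # strong contrary preference: opts out
    (4.0, 3.0),  # difficult, technical decision: default sticks anyway
]

opt_out_rate = sum(opts_out(d, e) for d, e in choosers) / len(choosers)
print(f"opt-out rate: {opt_out_rate:.0%}")  # prints "opt-out rate: 33%"
```

On this stylized picture, a default can fail in two distinct ways visible in the list above: choosers with strong contrary preferences treat the error cost as high, while high decision costs can make even a disliked default stick.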

  If nudges do not work, there is, of course, the question of what to do instead. The answer depends on normative criteria. A nudge might turn out to be ineffective, or far less effective than expected, but that might be a good thing; it might explain why choice architects chose a nudge rather than some other instrument (such as a mandate). Suppose that choice architects care about social welfare and that they want to increase it. If so, promoting welfare provides the right criterion, and effectiveness does not. By itself, the ineffectiveness of nudges—or for that matter their effectiveness—tells us little and perhaps even nothing about what has happened to social welfare. Imagine that 90 percent of a relevant population opts out of a default rule, so that the nudge is largely ineffective. Or imagine that 10 percent opts out or that 50 percent does. In all of these cases, choice architects must consider whether the result suggests that the nudge is, all things considered, a success or a failure. To undertake that consideration, they must ask particular questions about the consequences for social welfare.

  One answer is that if a nudge is ineffective, or less effective than expected, it is because it is not a good idea for those who were unaffected by it. Its failure is instructive and on balance should be welcomed, in the sense that if choosers ignore or reject it, it is because they know best. That answer makes sense if ineffectiveness is diagnostic, in the sense that it demonstrates that people are acting in accordance with their (accurate) sense of what will promote their welfare. Sometimes that conclusion is correct, but for good behavioral reasons, sometimes it is not.

  A second answer is to try a different kind of nudge. It is important to test behaviorally informed interventions and to learn from those tests; what is learned might well point in the direction of other nudges. That answer might be best if people’s choices (e.g., to ignore a warning or to opt out) are based on confusion, bias, or misunderstanding and if a better nudge might dispel one or all of these. Many nudges do not, in fact, raise hard normative questions. They are designed to promote behavior that is almost certainly in the interest of choosers or society as a whole; the failure of the nudge is not diagnostic. In such cases, a better nudge may well be the right response. If, for example, a warning fails, it might be a good idea to add a default rule.

  A third answer is to undertake a more aggressive approach, going beyond a nudge, such as an economic incentive (a subsidy or a tax) or coercion. A more aggressive approach might make sense when the choice architect knows that the chooser is making a mistake or when the interests of third parties are involved. Some nudges are designed to protect such interests; consider environmental nudges or nudges that are intended to reduce crime. In such cases, choice-preserving approaches might well prove inadequate or at best complementary to incentives, mandates, and bans. I will return to this issue in chapter 10.

  Why Default Rules Stick

  Default rules are my principal topic here, and it makes sense to begin by explaining why such rules have such a large effect on outcomes. A great deal of research has explored three reasons.3 The first involves inertia and procrastination (sometimes described as effort or an effort tax). To alter the default rule, people must make an active choice to reject that rule. Especially (but not only) if they are busy or if the question is difficult or technical, it is tempting to defer the decision or not to make it at all. In view of the power of inertia and the tendency to procrastinate, they might simply continue with the status quo. Attention is a scarce resource, and it is effortful to engage it; a default rule might stick because that effort does not seem to be worth undertaking.

  The second factor involves what people might see as the informational signal that the default rule provides. If choice architects have explicitly chosen that rule, many people will believe that they have been given an implicit recommendation, and by people who know what they are doing (and who are not objectionably self-interested). If so, they might think that they should not depart from it and go their own way, unless they have private information that is reliable and that would justify a change. Going one’s own way is risky, and people might not want to do it unless they are quite confident that they should.

  The third factor involves loss aversion, one of the most important and robust findings in behavioral science: people dislike losses far more than they like corresponding gains.4 For present purposes, the key point is that the default rule establishes the status quo; it determines the reference point for counting changes as losses or instead as gains. If, for example, people are not automatically enrolled in a savings plan, a decision to enroll might well seem to be a loss (of salary). If people are automatically enrolled, a decision to opt out might well seem to be a loss (of savings). The reference point is established by the default rule.

 
