How Change Happens
The general point is that any form of choice architecture, including the use of default rules, may have little or no effect on net if people are able to find other domains in which to counteract it. The idea of compensating behavior can be seen as a subset of the general category of strong antecedent preferences, but it points to a more specific case, in which the apparent success of the nudge is an illusion in terms of what choice architects actually care about. Recall the risk that a nudge will have unintended side effects, including unwelcome distributional consequences—as, for example, when environmental nudges impose costs on people who are not easily able to afford them. As noted, this is Hirschman’s notion of jeopardy.
What matters is welfare, not effectiveness. A largely ineffective nudge may have positive welfare effects; an effective nudge might turn out to reduce welfare. A strong reason for nudges, as distinguished from more aggressive tools, is that they preserve freedom of choice and thus allow people to go their own way. In many contexts, that is indeed a virtue, and the ineffectiveness of nudges, for some or many, is nothing to lament. But when choosers are making clear errors, and when third-party effects are involved, the ineffectiveness of nudges provides a good reason to consider stronger measures on welfare grounds.
Notes
1. See Albert Hirschman, The Rhetoric of Reaction (1991).
2. See Lauren E. Willis, When Defaults Fail: Slippery Defaults, 80 U. Chi. L. Rev. 1155 (2012), for an excellent discussion. I deal only glancingly here with the risk of counterproductive nudges—Hirschman’s category of “perversity”—though that is an important topic. See, e.g., George Loewenstein et al., The Unintended Consequences of Conflict of Interest Disclosure, 307 JAMA 669 (2012); Ryan Bubb & Richard Pildes, How Behavioral Economics Trims Its Sails and Why, 127 Harv. L. Rev. 1593 (2014); Sunita Sah et al., Effect of Physician Disclosure of Specialty Bias on Patient Trust and Treatment Choice, PNAS (2016), http://www.pnas.org/content/early/2016/06/16/1604908113.full.pdf.
3. For a good discussion, see Gabriel D. Carroll et al., Optimal Defaults and Active Decisions, 124 Q. J. Econ. 1639, 1641–1643 (2009). For an overview, with citations to the relevant literature, see Cass R. Sunstein, Choosing Not to Choose (2015).
4. Eyal Zamir, Law, Psychology, and Morality: The Role of Loss Aversion (2014).
5. See generally Elizabeth F. Emens, Changing Name Changing: Framing Rules and the Future of Marital Names, 74 U. Chi. L. Rev. 761 (2007).
6. Id. at 786.
7. Young Eun Huh, Joachim Vosgerau, & Carey K. Morewedge, Social Defaults: Observed Choices Become Choice Defaults, 41 J. Consumer Res. 746 (2014).
8. John Beshears et al., The Limitations of Defaults, unpublished manuscript (September 15, 2010), http://www.nber.org/programs/ag/rrc/NB10-02,%20Beshears,%20Choi,%20Laibson,%20Madrian.pdf.
9. See Erin Todd Bronchetti et al., When a Default Isn’t Enough: Defaults and Saving among Low-Income Tax Filers 28–29 (Nat’l Bureau of Econ. Research, Working Paper No. 16887, 2011), http://www.nber.org/papers/w16887 (explaining that default manipulation did not have an impact on tax refund allocation to a savings bond where an individual previously intended to spend the refund). Note, however, that the “default” in this study consisted of a mere statement on a form with the option to opt out. Id. at 17–18. In such a case, the line between the use of such a “default” and active choosing is relatively thin.
10. See Zachary Brown et al., Testing the Effects of Defaults on the Thermostat Settings of OECD Employees, 39 Energy Econ. 128 (2013).
11. Aristeidis Theotokis & Emmanouela Manganari, The Impact of Choice Architecture on Sustainable Consumer Behavior: The Role of Guilt, 131 J. Bus. Ethics 423 (2014).
12. René A. de Wijk et al., An In-Store Experiment on the Effect of Accessibility on Sales of Wholegrain and White Bread in Supermarkets (2016), http://journals.plos.org/plosone/article?id=10.1371%2Fjournal.pone.0151915.
13. David R. Just & Brian Wansink, Smarter Lunchrooms: Using Behavioral Economics to Improve Meal Selection (2009), http://www.choicesmagazine.org/UserFiles/file/article_87.pdf.
14. Lauren E. Willis, Why Not Privacy by Default?, 29 Berkeley Tech. L. J. 62 (2014)—and in particular this suggestion: “Firms surround defaults they favor with a powerful campaign to keep consumers in the default position, but meet defaults set contrary to their interests with an equally powerful campaign to drive consumers to opt out. Rather than giving firms an incentive to facilitate consumer exercise of informed choice, many defaults leave firms with opportunities to play on consumer biases or confuse consumers into sticking with or opting out of the default.”
15. Requirements for Overdraft Services, 12 C.F.R. § 205.17 (2010).
16. See Lauren E. Willis, When Defaults Fail: Slippery Defaults, 80 U. Chi. L. Rev. 1155, 1174–1175 (2012).
17. Id. at 1186–1187.
18. Id. at 1192.
19. See id.
20. See id.
21. Id. at 130.
22. See Tatiana Homonoff, Essays in Behavioral Economics and Public Policy (September 2013), https://dataspace.princeton.edu/jspui/bitstream/88435/dsp01jw827b79g/1/Homonoff_princeton_0181D_10641.pdf.
23. See Willis, supra note 16, for an excellent discussion.
24. See Punam Anand Keller et al., Enhanced Active Choice: A New Method to Motivate Behavior Change, 21 J. Consumer Psychol. 376, 378 (2011).
25. Ariel Porat & Lior J. Strahilevitz, Personalizing Default Rules and Disclosure with Big Data, 112 Mich. L. Rev. 1417 (2014).
26. For a short discussion, full of implications, see Lauren Willis, The Financial Education Fallacy, 101 Am. Econ. Rev. 429 (2011).
27. See Sharon Brehm & Jack Brehm, Psychological Reactance: A Theory of Freedom and Control (1981); Louisa Pavey & Paul Sparks, Reactance, Autonomy and Paths to Persuasion: Examining Perceptions of Threats to Freedom and Informational Value, 33 Motivation & Emotion 277 (2009).
28. See Erin Frey & Todd Rogers, Persistence: How Treatment Effects Persist after Interventions Stop, 1 Pol’y Insights from Behav. & Brain Sci. 172 (2014), exploring four “persistence pathways” that “explain how persistent treatment effects may arise: building psychological habits, changing what and how people think, changing future costs, and harnessing external reinforcement.”
29. Hunt Allcott & Todd Rogers, The Short-Run and Long-Run Effects of Behavioral Interventions: Experimental Evidence from Energy Conservation, 104 Am. Econ. Rev. 3003 (2014); Henrik Cronqvist et al., When Nudges Are Forever: Inertia in the Swedish Premium Pension Plan, 108 Am. Econ. Rev. 153 (2018).
8
Ethics
No one should doubt that certain nudges, and certain kinds of choice architecture, can raise serious ethical questions. Consider, for example, a government that used nudges to promote discrimination on the basis of race, sex, or religion. Even truthful information (e.g., about crime rates) might fan the flames of violence and prejudice. Groups or nations that are committed to violence often enlist nudges in their cause. Even when nudges do not have illicit ends, it is possible to wonder whether those who enlist them are treating people with respect.
Possible concerns about nudging and choice architecture point to four foundational commitments: (1) welfare, (2) autonomy, (3) dignity, and (4) self-government. Some nudges could run afoul of one or more of these commitments. It is easy to identify welfare-reducing nudges that lead people to waste time or money; an unhelpful default rule could fall in that category, as could an educational campaign designed to persuade people to purchase excessive insurance or to make foolish investments. Nudges could be, and often are, harmful to the environment. Excessive pollution is, in part, a product of unhelpful choice architecture.
Consider in this light a tale from the novelist David Foster Wallace: “There are these two young fish swimming along and they happen to meet an older fish swimming the other way, who nods at them and says ‘Morning, boys. How's the water?’ And the two young fish swim on for a bit, and then eventually one of them looks over at the other and goes ‘What the hell is water?’”1 This is a tale about choice architecture. Such architecture is inevitable, whether or not we see it. It is the equivalent of water. Weather is itself a form of choice architecture, because it influences what people decide.2 Human beings cannot live without some kind of weather. Nature nudges.
We can imagine the following view: Choice architecture is unavoidable, to be sure, but it might be the product of nature or some kind of spontaneous order, rather than of conscious design or of the action of any designer. Invisible-hand mechanisms often produce choice architecture. Alternatively, choice architecture might be the product of a genuinely random process (and a choice architect might intentionally opt for randomness, on the ground that it has a kind of neutrality).
On certain assumptions, self-conscious choice architecture is especially dangerous, because it is explicitly and intentionally directed at achieving certain goals. But what are those assumptions, and are they likely to be true? Why and when would spontaneous order be benign? (Is there some kind of social Darwinism here?) What is so good about randomness? We should agree that a malevolent choice architect, aware of the power of nudges, could produce a great deal of harm. But the most serious harms tend to come from mandates and bans—from coercion—not from nudges, which maintain freedom of choice.
It is true that spontaneous orders, invisible hands, and randomness can avoid some of the serious dangers, and some of the distinctive biases, that come from self-conscious nudging on the part of government. People might be especially averse to intentional nudges. If we are especially fearful of official mistakes—coming from incompetence or bad motivations—we will want to minimize the occasions for nudging. And if we believe that invisible-hand mechanisms promote welfare or freedom, we will not want to disturb their products, even if those products include nudges. But a degree of official nudging cannot be avoided.
In this chapter, I will offer seven principal conclusions:
1. It is pointless to object to choice architecture or nudging as such. The private sector inevitably nudges, as does the government. We can object to particular nudges, to particular goals of choice architects, and to particular forms of choice architecture, but not to nudging and choice architecture in general. For human beings (or for that matter dogs and cats and mice), choice architecture cannot be avoided. It is tempting to defend nudging on the part of government by saying that the private sector already nudges (sometimes selfishly, even in competitive markets). On certain assumptions, this defense might be right, but it is not necessary, because the government is nudging even if it does not want to do so.
2. In this context, ethical abstractions (e.g., about autonomy, dignity, and manipulation) can create serious confusion. We need to bring those abstractions into contact with concrete practices. Nudging takes many diverse forms, and the force of an ethical objection depends on the specific form.
3. If welfare is our guide, much nudging is required on ethical grounds.
4. If autonomy is our guide, much nudging is also required on ethical grounds.
5. Choice architecture should not, and need not, compromise either dignity or self-government, though imaginable forms could do both.
6. Many nudges are objectionable because the choice architect has illicit ends. If the ends are legitimate, and if nudges are fully transparent and subject to public scrutiny, a convincing ethical objection is far less likely to be available.
7. There is, however, room for such an objection in the case of highly manipulative interventions, certainly if people have not consented to them. The concept of manipulation deserves careful attention, especially because of its relationship to the ideas of autonomy and dignity.
The Dangers of Abstraction
I have noted that in behavioral science, it has become standard to distinguish between two families of cognitive operations: System 1, which is fast, automatic, and intuitive, and System 2, which is slow, calculative, and deliberative.3 System 2 can and does err, but System 1 is distinctly associated with identifiable behavioral biases. Some nudges attempt to strengthen the hand of System 2 by improving the role of deliberation and people’s considered judgments—as, for example, through disclosure strategies and the use of precommitment. Other nudges are designed to appeal to, or to activate, System 1—as in the cases of graphic health warnings. Some nudges work because of the operation of System 1—as, for example, when default rules have large effects because of the power of inertia.
A nudge might be justified on the ground that it helps counteract a behavioral bias, but (and this is an important point) such a bias is not a necessary justification for a nudge. Disclosure of information can be helpful even in the absence of any bias. GPS is useful even for people who do not suffer from present bias, probability neglect, or unrealistic optimism. A default rule simplifies life and might therefore be desirable whether or not a behavioral bias is involved.
As the GPS example suggests, many nudges have the goal of increasing navigability—of making it easier for people to get to their preferred destination. Such nudges stem from an understanding that life can be easy or hard to navigate, and that helpful choice architecture is desirable as a way of promoting simple navigation. To date, there has been far too little attention to the close relationship between navigability and (good) nudges. Insofar as the goal is to promote navigability, the ethical objections are greatly weakened.
It must be acknowledged that choice architecture can be altered, and new nudges can be introduced, for illicit reasons. Indeed, many of the most powerful objections to nudges, and to changes in choice architecture, are based on a judgment that the underlying motivations are illicit. On these points, the objection is not to nudges as such; it is to the grounds for the particular nudges.
For example, an imaginable default rule might skew the democratic process by saying that voters are presumed to vote to support the incumbent politician, unless they specify otherwise. Such a rule would violate principles of neutrality that are implicit in democratic norms; it would be unacceptable for that reason. Alternatively, a warning might try to frighten people about the supposedly nefarious plans of members of a minority group. Social norms might be used to encourage people to buy unhealthy products. In extreme cases, private or public institutions might try to nudge people toward violence.
It must also be acknowledged that the best choice architecture often calls for active choosing. Sometimes the right approach is to require people to choose, so as to ensure that their will is actually expressed. Sometimes it is best to prompt choice, by asking people what they want, without imposing any requirement that they do so. A prompt is emphatically a nudge, designed to get people to express their will, and it might be unaccompanied by any effort to steer people in a preferred direction—except in the direction of choosing.
Choice architecture should be transparent and subject to public scrutiny, especially if public officials are responsible for it. In general, regulations should be subject to a period of public comment. If officials alter a default rule so as to promote clean energy or conservation, they should not hide what they are doing. Self-government itself requires public scrutiny of nudges—a form of choice architecture for choice architects. Such scrutiny is an important ex ante safeguard against harmful nudges; it is also an important ex post corrective. Transparency and public scrutiny can reduce the likelihood of welfare-reducing choice architecture and of nudges that threaten autonomy or dignity. Nations should also treat their citizens with respect, and public scrutiny shows a measure of respect.
There is a question whether transparency and public scrutiny are sufficient rather than merely necessary. The answer is that they are not sufficient. We could imagine forms of choice architecture that would be unacceptable even if they were fully transparent; consider (transparent) architecture designed to entrench inequality on the basis of sex. Here again, the problem is that the goals of the relevant nudge are illicit. As we shall see, it is also possible to imagine cases of manipulation, in which the goals are not illicit but the fact of transparency might not be sufficient to justify a nudge.
Recall at this point that choice architecture is inevitable. Any website nudges; so does a cell phone or a computer; so do lawyers and doctors. A doctor can try to present options in a neutral way so as to respect patient autonomy, but that is a form of choice architecture, not an alternative to it. Whenever government has websites, offices, or programs, it creates choice architecture, and it will nudge.
It is true that in the face of error, education might be the best response. Some people argue in favor of educational interventions in lieu of nudges. In a way, the opposition is confusing; at least some such interventions fit the definition of a nudge, and they are certainly a form of choice architecture. When education is favored, a natural question arises: Favored over what?
In some cases, a default rule would be preferable to education because it would preserve desirable outcomes (again, from the standpoint of choosers themselves) without requiring people to take the functional equivalent of a course in, say, statistics or finance.4 For those who purchase cell phones, tablets, and computers, it would be impossibly demanding to insist on the kind of education that would allow active choices about all relevant features. Much of life is feasible because products and activities come with default rules, and people are not required to undergo some kind of instruction before selecting them. There is a recurring question whether the benefits of education justify its costs in particular circumstances. Default rules may well be best.