How Change Happens


by Cass R. Sunstein


  Or suppose that certain moral intuitions arise in part because of emotions and that some or many deontological intuitions fall into that category. Even if so, we would not have sufficient reason to believe that those intuitions are wrong. Some intuitions about states of the world arise from the emotion of fear, and they are not wrong for that reason. To be sure, people may be fearful when they are actually safe, but without knowing about the situations that cause fear, we have no reason to think that the emotion is leading people to make mistakes. The fact—if it is a fact—that some or many deontological intuitions are driven by emotions does not mean that those intuitions misfire.

  If these points are right, then we might be able to agree that deontological thinking often emerges from automatic processing and that consequentialist thinking is often more calculative and deliberative. This might well be the right way to think about moral psychology, at least in many domains, and the resulting understanding of moral psychology certainly has explanatory power for many problems in law and politics; it helps us to understand why legal and political debates take the form that they do. But if so, we would not be able to conclude that deontological thinking is wrong. Consider in this regard the fact that in response to some problems and situations, people’s immediate, intuitive responses are right, and a great deal of reflection and deliberation can produce mistakes. There is no reason to think that System 2 is always more accurate than System 1. Even if deontological judgments are automatic and emotional, they may turn out to be correct.

  Two New Species

  Here is a way to sharpen the point. Imagine that we discovered two new species of human beings: Kantians and Benthamites. Suppose that the Kantians are far more emotional than Homo sapiens and that Benthamites are far less so. Imagine that neuroscientists learn that Kantians and Benthamites have distinctive brain structures. Kantians have a highly developed emotional system and a relatively undeveloped cognitive system. By contrast, Benthamites have a highly developed cognitive system—significantly more developed than that in Homo sapiens. And true to their names, Kantians strongly favor deontological approaches to moral questions, whereas Benthamites are thoroughgoing consequentialists.

  Impressed by this evidence, some people insist that we have new reason to think that consequentialism is correct. Indeed, anthropologists discover that Benthamites have written many impressive and elaborate arguments in favor of consequentialism. By contrast, Kantians have written nothing. (They do not write much.) With such discoveries, would we have new reason to think that consequentialism is right? Clearly not. Whether consequentialism is right turns on the strength of the arguments offered on its behalf, not on anything about the brains of the two species.

  To see the point, suppose that an iconoclastic Benthamite has written a powerful essay, contending that consequentialism is wrong and that some version of Kantianism is right. Wouldn’t that argument have to be investigated on its merits? If the answer is affirmative, then we should be able to see that even if certain moral convictions originate in automatic processing, they may nonetheless be correct. Everything depends on the justifications that have been provided in their defense. A deontological conviction may come from System 1, but the Kantians might be right, and the Benthamites should listen to what they have to say.

  Moral Reasoning and Moral Rationalization

  Suppose that we agree that recent research shows that as a matter of fact, “deontological judgments tend to be driven by emotional responses”; a more provocative conclusion, consistent with (but not mandated by) the evidence, is that “deontological philosophy, rather than being grounded in moral reasoning, is to a large extent an exercise in moral rationalization.”25 Without denying the possibility that the intuitive system is right, Joshua Greene contends “that science, and neuroscience in particular, can have profound ethical implications by providing us with information that will prompt us to re-evaluate our moral values and our conceptions of morality.”26

  The claim seems plausible. But how exactly might scientific information prompt us to reevaluate our moral values? The best answer is that it might lead people to slow down and to give serious scrutiny to their immediate reactions. If you know that your moral judgment is a rapid intuition based on emotional processing, then you might be more willing to consider the possibility that it is wrong. You might be willing to consider the possibility that you have been influenced by irrelevant factors.

  Suppose that you believe it is unacceptable to push someone into the path of a speeding train even if you know that the result of doing so would be to save five people. Now suppose that you are asked whether it is acceptable to pull a switch that drops someone through a trapdoor when the result of doing so would also be to save five people. Suppose that you believe that it is indeed acceptable. Now suppose that you are asked to square your judgments in the two cases. You might decide that you cannot, that in the first case, physical contact is making all the difference to your moral judgments, but that on reflection it is irrelevant. If that is your conclusion, you might be moved in a more consequentialist direction.

  And in fact, there is evidence to support this view.27 Consider this case:

  A major pharmaceutical company sells a desperately needed cancer drug. It decides to increase its profits by significantly increasing the price of the drug. Is this acceptable?

  Many people believe that it is not. Now consider this case:

  A major pharmaceutical company sells a desperately needed cancer drug. It decides to sell the right to market the drug to a smaller company; it receives a lot of money for the sale and it knows that the smaller company will significantly increase the price of the drug. Is this acceptable?

  Many people believe that it is. In a between-subjects design, in which each person sees only one scenario, people regard the case of indirect harm as far more acceptable than that of direct harm. But in a within-subjects design, in which people see the two cases at the same time, the difference evaporates. The apparent reason is that when people see the two cases at once, they conclude that the proper evaluation of harmful actions should not turn on the direct-indirect distinction. We could easily imagine the same process in the context of other moral dilemmas, including the trolley and footbridge problems. If System 1 and emotional processing are leading to a rapid, intuitive conclusion that X is morally abhorrent or that Y is morally acceptable, a simultaneous encounter with cases A and B may weaken that conclusion and show that it is based on morally irrelevant factors—or at least factors that people come to see as morally irrelevant after reflection. It is also possible that when people see various problems at the same time, they might conclude that certain factors are morally relevant that they originally thought immaterial.
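  The logic of that design contrast can be made concrete with a small simulation. The following Python sketch is purely illustrative: the rating scale, the response model, and every number in it are assumptions invented for the example, not materials or data from the study cited in note 27.

```python
import random

# Hypothetical illustration of the between- vs. within-subjects contrast
# described above. All numbers are invented for this sketch.

def rate(case, saw_both):
    """Return a 1-7 acceptability rating for a pricing scenario."""
    if saw_both:
        # Joint evaluation: the direct/indirect distinction is noticed
        # and discounted, so both cases are judged about the same.
        base = 2.5
    else:
        # Separate evaluation: indirect harm feels more acceptable.
        base = 2.0 if case == "direct" else 4.0
    noisy = random.gauss(base, 1.0)      # individual variation
    return max(1, min(7, round(noisy)))  # clamp to the 1-7 scale

random.seed(0)
n = 200

# Between-subjects: each simulated respondent sees only one scenario.
between = {c: [rate(c, saw_both=False) for _ in range(n)]
           for c in ("direct", "indirect")}

# Within-subjects: each respondent evaluates the two scenarios together.
within = {c: [rate(c, saw_both=True) for _ in range(n)]
          for c in ("direct", "indirect")}

mean = lambda xs: sum(xs) / len(xs)
print("between-subjects:", mean(between["direct"]), mean(between["indirect"]))
print("within-subjects: ", mean(within["direct"]), mean(within["indirect"]))
```

  On this toy model, the direct-indirect gap shows up only when each simulated respondent sees a single case; joint evaluation collapses it, which is the pattern the text describes.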

  The same process may occur in straightforwardly legal debates. Suppose that intuition suggests that punitive damages are best understood as a simple retributive response to wrongdoing and that theories of deterrence seem secondary, unduly complex, or essentially beside the point.28 If this is so, people’s judgments about appropriate punitive damage awards will not be much influenced by the likelihood that the underlying conduct would be discovered and punished—a factor that is critical to the analysis of optimal deterrence. Critical reflection might lead people to focus on the importance of deterrence and to conclude that a factor that they disregarded is in fact highly relevant. Indeed, critical reflection—at least if it is sustained—might lead people to think that some of their intuitive judgments about fairness are not correct because they have bad consequences. Such reflection might lead them to change their initial views about policy and law and even to conclude that those views were rooted in a heuristic.
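  To see concretely why the likelihood of detection matters, consider the standard deterrence multiplier from the law-and-economics literature; the dollar figures below are hypothetical, chosen only to illustrate the arithmetic.

```latex
% Optimal-deterrence multiplier. If conduct causing harm H is detected
% and punished with probability p, expected liability is pD, so damages
% must be scaled up to D = H/p for the expected penalty to equal the harm.
D = \frac{H}{p},
\qquad \text{e.g., } H = \$1{,}000{,}000,\ p = 0.25
\ \Longrightarrow\ D = \frac{\$1{,}000{,}000}{0.25} = \$4{,}000{,}000.
```

  A purely retributive judgment keys the award to the wrongdoing alone and so ignores p entirely; that is precisely the disregard for detection probability described above.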

  Even here, however, people’s ultimate conclusions are not decisive. Let us stipulate that people might well revisit their intuitions and come to a different (and perhaps more consistently consequentialist) point of view. Suppose that they do. Do we know that they are right? Not necessarily. Recall that for some problems, people’s immediate answers are more accurate than those that follow from reflection. With respect to moral questions, a final evaluation would have to depend on the right moral theory. The fact that people have revised their intuitions does not establish that they have moved in the direction of that theory. It is evidence of what people think, under certain conditions, and surely some conditions are more conducive to good thinking than others. But the question remains whether what they think is right or true in principle.

  Noisy Intuitions

  We are learning a great deal about the psychology of moral judgment. It is increasingly plausible to think that many deontological intuitions are a product of rapid, automatic, emotional processing and that these intuitions play a large role in debates over public policy and law. But the relevant evidence is neither necessary nor sufficient to justify the conclusion that deontology is a mere heuristic (in the sense of a mental shortcut in the direction of the correct theory). What is required is a moral argument.

  There is one qualification to this claim. It is well established that with respect to factual questions, rapid reactions, stemming from System 1, generally work well but can produce systematic errors. Deontological intuitions appear to have essentially the same sources as those rapid reactions. That point does not establish error, but it does suggest the possibility that, however firmly held, deontological intuitions are providing the motivation for elaborate justifications that would not be offered or have much appeal without the voice of Gould’s homunculus, jumping up and down and shouting at us. The homunculus might turn out to be right. But it is also possible that we should be listening to other voices.

  Notes

  1. Frances Kamm, Intricate Ethics (2006); Bernard Williams, A Critique of Utilitarianism, in Utilitarianism: For and Against (J. J. C. Smart and Bernard Williams eds. 1973).

  2. An important effort to explore differences among the leading ethical theories, and to suggest that they converge, is Derek Parfit, On What Matters, vol. 1 (2011). Throughout I oppose consequentialism and deontology in the conventional way.

  3. Cass R. Sunstein, David Schkade, & Daniel Kahneman, Do People Want Optimal Deterrence?, 29 J. Legal Stud. 237–253 (2000).

  4. Note, however, that the relevant findings cover only a very small portion of the domain in which deontological judgments are and might be made and that some evidence does not support the view that such judgments are distinctly associated with automatic processing. Andrea Manfrinati et al., Moral Dilemmas and Moral Principles: When Emotion and Cognition Unite, 27 Cognition & Emotion 1276 (2013). Note also that learning and culture certainly matter, and we do not have much cross-cultural evidence, which would be highly informative about the relationships among deontology, automatic processing, and culture.

  5. Henry Sidgwick, The Methods of Ethics 425–426 (1981).

  6. Joshua D. Greene, Reply to Mikhail and Timmons, in 3 Moral Psychology: The Neuroscience of Morality: Emotion, Brain Disorders, and Development (Walter Sinnott-Armstrong ed. 2007).

  7. Judith Jarvis Thomson, The Trolley Problem, in Rights, Restitution, and Risk: Essays in Moral Theory (J. J. Thomson & W. Parent eds. 1986).

  8. Joshua D. Greene, R. Brian Sommerville, Leigh E. Nystrom, John M. Darley, & Jonathan D. Cohen, An fMRI Investigation of Emotional Engagement in Moral Judgment, 293 Science 2105, 2106 (2001). Various questions have been raised about the methodology in Greene et al.’s paper, in particular the difficulty of inferring causation: Selim Berker, The Normative Insignificance of Neuroscience, 37 Phil. & Public Affairs 293, 305–313 (2009); I bracket those questions here.

  9. Joshua D. Greene, The Cognitive Neuroscience of Moral Judgment, in The Cognitive Neurosciences (M. S. Gazzaniga ed., 4th ed., 2009).

  10. Fiery Cushman, Dylan Murray, Shauna Gordon-McKeon, Sophie Wharton, & Joshua D. Greene, Judgment before Principle: Engagement of the Frontoparietal Control Network, 7 Soc. Cognitive & Affective Neuroscience 888 (2011).

  11. Id., 893.

  12. Id., 894.

  13. Id., 893.

  14. Michael L. Koenigs, Liane Young, Ralph Adolphs, Daniel Tranel, Fiery Cushman, Marc Hauser, & Antonio Damasio, Damage to the Prefrontal Cortex Increases Utilitarian Moral Judgments, 446 Nature 908, 909 (2007).

  15. Id., 909–910.

  16. Joshua D. Greene, Why Are VMPFC Patients More Utilitarian? A Dual-Process Theory of Moral Judgment Explains, 11 Trends in Cognitive Sci. 322 (2007).

  17. Mario Mendez, Eric Anderson, & Jill S. Shapira, An Investigation of Moral Judgment in Frontotemporal Dementia, 18 Cognitive & Behav. Neurology 193 (2005).

  18. Id. Note that on their own, fMRI studies suggest only correlations and cannot distinguish cause and effect. If region X is active when we make decision Y, it is not necessarily the case that X is causing decision Y. Y may be causing X, or both may be caused by something else altogether. For example, the act of making a deontological judgment may cause an emotional reaction that may be processed by the amygdala and/or VMPC. By contrast, lesion studies may suggest cause and effect. (I am grateful to Tali Sharot for clarifying this point.)

  19. Elinor Amit & Joshua D. Greene, You See, The Ends Don’t Justify the Means: Visual Imagery and Moral Judgment, 23 Psychol. Sci. 861, 862 (2012).

  20. Id., 866.

  21. Joshua D. Greene, Sylvia A. Morelli, Kelly Lowenberg, Leigh E. Nystrom, & Jonathan D. Cohen, Cognitive Load Selectively Interferes with Utilitarian Moral Judgment, 107 Cognition 1144, 1151 (2008).

  22. Contrary evidence, suggesting that deontological thinking can actually take longer than consequentialist thinking and that “cognitive and emotional processes participate in both deontological and consequentialist moral judgments,” can be found in Andrea Manfrinati et al., Moral Dilemmas and Moral Principles, 27 Cognition & Emotion 1276 (2013).

  23. Joseph M. Paxton, Leo Ungar, & Joshua D. Greene, Reflection and Reasoning in Moral Judgment, 36 Cognitive Science 163, 171–172 (2012).

  24. Id., 166.

  25. Joshua D. Greene, The Secret Joke of Kant’s Soul, in 3 Moral Psychology: The Neuroscience of Morality: Emotion, Brain Disorders, and Development 36 (W. Sinnott-Armstrong ed. 2007); italics in original.

  26. Joshua D. Greene, From Neural “Is” to Moral “Ought”: What Are the Moral Implications of Neuroscientific Moral Psychology?, 4 Nature Reviews Neuroscience 847–850, 847 (2003); for critical evaluation, see R. Dean, Does Neuroscience Undermine Deontological Theory?, 3 Neuroethics 43–60 (2010). An especially valuable overview, from which I have learned a great deal, is Joshua D. Greene, Beyond Point-and-Shoot Morality: Why Cognitive (Neuro)Science Matters for Morality, 124 Ethics 695 (2014).

  27. N. Paharia, K. S. Kassam, J. D. Greene, and M. H. Bazerman, Dirty Work, Clean Hands: The Moral Psychology of Indirect Agency, 109 Organizational Behav. & Hum. Decision Processes 134–141, 140 (2009); S. Berker, The Normative Insignificance of Neuroscience, 37 Phil. & Pub. Aff. 293–329, 327–329 (2009).

  28. Cass R. Sunstein, David Schkade, & Daniel Kahneman, Do People Want Optimal Deterrence?, 29 J. Legal Stud. 237, 248–249 (2000).

  16

  Partyism

  With respect to prejudice and hostility, the English language has a number of isms: racism, sexism, classism, and speciesism are prominent examples. I aim to coin a new one here: partyism. The central idea is that those who identify with a political party often become deeply hostile to the opposing party and believe that its members have a host of horrific characteristics.1 They might think that the opposing party is full of people who are ignorant, foolish, evil, corrupt, duped, out of touch, or otherwise awful.

  My major suggestion here is that in the United States (and perhaps in other countries as well), partyism is real and on the rise, and that it has serious adverse consequences for governance, politics, and daily life. Sometimes it makes change possible and rapid, at least if one party controls the levers of power. Sometimes it leads to authoritarianism. Sometimes it makes change impossible or slow, at least when and because parties are able to block each other—and determined to do so.

  I also offer a few words about the causes and consequences of partyism and make some suggestions about what might be done about it. Under conditions of severe partyism, it can become unusually difficult to address serious social problems, at least through legislation. To that extent, the system of separation of powers—which already imposes a series of barriers to legislative initiatives—is often an obstacle to desirable change. The executive branch might be able to produce change on its own—as both Barack Obama and Donald Trump discovered—but under conditions of partyism, unilateral change has problems of its own. It might not be legitimate; it might be ill-considered.

  There is a great deal of evidence of partyism and its growth. Perhaps the simplest involves “thermometer ratings.”2 With those ratings, people are asked to rate a range of groups on a scale of 0 to 100, where 100 means that the respondent feels “warm” toward the group and 0 means that the respondent feels “cold.” In-party rankings have remained stable over the last three decades, with both Democrats and Republicans ranking members of their own party around 70. By contrast, ratings of the other party have experienced a remarkable fifteen-point dip since 1988.3 In 2008, the average out-party ranking was around 30—and it continues to decline. By contrast, Republicans ranked “people on welfare” in that year at 50, and Democrats ranked “big business” at 52. It is remarkable but true that negative affect toward the opposing party is not merely greater than negative affect toward unwelcome people and causes; it is much greater.

  Implicit Association Test

  Consider one of the most influential measures of prejudice: the implicit-association test (IAT).4 The test is simple to take. Participants see words on the upper corners of a screen—for example, white paired with either good or bad in the upper-left corner, and black paired with the other adjective in the upper-right corner. Then they see a picture or a word in the middle of the screen—for example, a white face, an African American face, or the word joy or terrible. The task is to click on the upper corner whose pairing matches the picture or word in the middle.
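  To make the procedure concrete, here is a minimal Python sketch of an IAT-style trial loop. Everything in it is an assumption for illustration: the stimuli, the response-time model, and the crude difference score are simplifications, not the official IAT materials or the published scoring algorithm.

```python
import random
import statistics

# Minimal IAT-style sketch (illustrative only). Each "block" fixes the
# corner pairings; each trial shows a face or a word to be sorted to the
# corner whose pairing matches it. Latencies are simulated, not measured.

FACES = {"white": ["white_face_1", "white_face_2"],
         "black": ["black_face_1", "black_face_2"]}
WORDS = {"good": ["joy", "love"], "bad": ["terrible", "awful"]}

def simulate_latency(block):
    """Invented response-time model: incongruent pairings respond slower.
    (The simulated latency ignores which particular stimulus was shown.)"""
    base_ms = 650 if block == "congruent" else 800
    return random.gauss(base_ms, 100)

def run_block(block, n_trials=20):
    """Run one block of trials and collect simulated latencies."""
    latencies = []
    for _ in range(n_trials):
        category = random.choice(["white", "black", "good", "bad"])
        stimulus = random.choice((FACES | WORDS)[category])  # shown mid-screen
        latencies.append(simulate_latency(block))
    return latencies

random.seed(1)
congruent = run_block("congruent")      # e.g., white+good / black+bad corners
incongruent = run_block("incongruent")  # pairings reversed

# Crude index: mean latency difference between the two blocks.
diff = statistics.mean(incongruent) - statistics.mean(congruent)
print(f"mean latency difference: {diff:.0f} ms")
```

  A real IAT uses many more trials and a normalized scoring measure; the sketch shows only the trial structure the paragraph describes.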

 
