How Change Happens

by Cass R Sunstein


  The disagreements between deontologists and consequentialists bear directly on many issues in law and policy. Consequentialists believe that constitutional rights, including the right to free speech, must be defended and interpreted by reference to the consequences; deontologists disagree. Consequentialists are favorably disposed to cost-benefit analysis in regulatory policy, but that form of analysis has been vigorously challenged on broadly deontological grounds. Consequentialists favor theories of punishment that are based on deterrence, and they firmly reject retributivism, which some deontologists endorse. For both criminal punishment and punitive damage awards, consequentialists and deontologists have systematic disagreements. Consequentialists and deontologists also disagree about the principles underlying the law of contract and tort.

  In defending their views, deontologists often point to cases in which our intuitions seem very firm and hence to operate as “fixed points” against which we must evaluate consequentialism.1 They attempt to show that consequentialism runs up against intransigent intuitions and is wrong for that reason. In the previous chapters, I explored weak consequentialism, which attempts to soften this disagreement. But it is true that in its usual forms, consequentialism seems to conflict with some of our deepest intuitions, certainly in new or unfamiliar situations.2 For example, human beings appear to be intuitive retributivists; they want wrongdoers to suffer. With respect to punishment, efforts to encourage people to think in consequentialist terms do not fare at all well.3

  In the face of the extensive body of philosophical work exploring the conflict between deontology and consequentialism, it seems reckless to venture a simple resolution, but let us consider one: deontology is a moral heuristic for what really matters, and consequences are what really matter. On this view, deontological intuitions are generally sound, in the sense that they usually lead to what would emerge from a proper consequentialist assessment. Protection of free speech and religious liberty, for example, is generally justified on consequentialist grounds. At the same time, however, deontological intuitions can sometimes produce severe and systematic errors in the form of suboptimal or bad consequences. The idea that deontology should be seen as a heuristic is consistent with a growing body of behavioral and neuroscientific research, which generally finds that deontological judgments are rooted in automatic, emotional processing.4

  My basic claims here are twofold. First, the emerging research might serve to unsettle and loosen some deeply held moral intuitions and give us new reason to scrutinize our immediate and seemingly firm reactions to moral problems. Deontology may in fact be a moral heuristic, in the sense that it may be a mental shortcut for the right moral analysis, which is consequentialist. Thus Henry Sidgwick urges:

  It may be shown, I think, that the Utilitarian estimate of consequences not only supports broadly the current moral rules, but also sustains their generally received limitations and qualifications: that, again, it explains anomalies in the Morality of Common Sense, which from any other point of view must seem unsatisfactory to the reflective intellect; and moreover, where the current formula is not sufficiently precise for the guidance of conduct, while at the same time difficulties and perplexities arise in the attempt to give it additional precision, the Utilitarian method solves these difficulties and perplexities in general accordance with the vague instincts of Common Sense, and is naturally appealed to for such solution in ordinary moral discussions.5

  Sidgwick did not have the benefit of recent empirical findings, but he might have been willing to agree “that deontological philosophy is largely a rationalization of emotional moral intuitions.”6 But my second claim is that nothing in the emerging research is sufficient to establish that deontological judgments are wrong or false. Nonetheless, I am going to explore the possibility here.

  Trolleys and Footbridges

  A great deal of neuroscientific and psychological work is consistent with the view that deontological judgments stem from a moral heuristic, one that works automatically and rapidly. It bears emphasizing that if this view is correct, it is also possible—indeed, likely—that such judgments generally work well in the sense that they produce the right results (according to the appropriate standard) in most cases. The judgments that emerge from automatic processing, including emotional varieties, usually turn out the way they do for a reason. If deontological judgments result from a moral heuristic, we might end up concluding that they generally work well but misfire in systematic ways.

  Consider in this regard the long-standing philosophical debate over two well-known moral dilemmas, which seem to test deontology and consequentialism.7 The first, called the trolley problem, asks people to imagine that a runaway trolley is headed for five people, who will be killed if the trolley continues on its current course. The question is whether you would throw a switch that would move the trolley onto another set of tracks, killing one person rather than five. Most people would throw the switch. The second, called the footbridge problem, is the same as that just given, but with one difference: the only way to save the five is to throw a stranger, now on a footbridge that spans the tracks, into the path of the trolley, killing that stranger but preventing the trolley from reaching the others. Most people will not kill the stranger.

  What is the difference between the two cases, if any? A great deal of philosophical work has been done on this question, much of it trying to suggest that our firm intuitions can indeed be defended, or rescued, as a matter of principle. The basic idea seems to be that those firm intuitions, separating the two cases, tell us something important about what morality requires, and an important philosophical task is to explain why they are essentially right.

  Without engaging these arguments, consider a simpler answer. As a matter of principle, there is no difference between the two cases. People’s different reactions are based on a deontological heuristic (“do not throw innocent people to their death”) that condemns the throwing of the stranger but not the throwing of the switch. To say the least, it is desirable for people to act on the basis of a moral heuristic that makes it extremely abhorrent to use physical force to kill innocent people. But the underlying heuristic misfires in drawing a distinction between the two ingeniously devised cases. Hence people (including philosophers) struggle heroically to rescue their intuitions and to establish that the two cases are genuinely different in principle. But they are not. If so, a deontological intuition is serving as a heuristic in the footbridge problem, and it is leading people in the wrong direction. Can this proposition be tested? Does it suggest something more general about deontology?

  Neuroscience

  The Human Brain

  How does the human brain respond to the trolley and footbridge problems? The authors of an influential study do not attempt to answer the moral questions in principle, but they find “that there are systematic variations in the engagement of emotions in moral judgment” and that brain areas associated with emotion are far more active in contemplating the footbridge problem than in contemplating the trolley problem.8

  More particularly, the footbridge problem preferentially activates the regions of the brain that are associated with emotion, including the amygdala. By contrast, the trolley problem produces increased activity in parts of the brain associated with cognitive control and working memory. A possible implication of the authors’ finding is that human brains distinguish between different ways of bringing about deaths; some ways trigger automatic, emotional reactions, whereas others do not. Other fMRI studies reach the same general conclusion.9

  Actions and Omissions

  People tend to believe that harmful actions are worse than harmful omissions; intuitions strongly suggest a sharp difference between the two. Many people think that the distinction is justified in principle. They may be right, and the arguments offered in defense of the distinction might be convincing. But in terms of people’s actual judgments, there is reason to believe that automatic (as opposed to deliberative or controlled) mechanisms help to account for people’s intuitions. The provocative possibility is that the faster and more automatic part of the human brain regards actions as worse than omissions, but the slower and more deliberative part, focused on consequences, does not make such a sharp distinction between the two.10

  In the relevant experiments, participants were presented with a series of moral scenarios, involving both actions and omissions. They judged active harms to be far more objectionable than inaction. As compared with harmful actions, harmful omissions produced significantly more engagement in the frontoparietal control network, an area that contributes to the ability to guide actions based on goals. Those participants who showed the highest engagement in that network while answering questions involving omissions also tended to show the smallest differences in their judgments of actions and omissions. This finding suggests that more controlled and deliberative processing does not lead to a sharp distinction between the two. A high level of such processing was necessary to override the intuitive sense that the two are different (with omissions seeming less troublesome). In the authors’ words, there is “a role for controlled cognition in the elimination of the omission effect,”11 meaning that such cognition leads people not to see actions as much worse than omissions.

  The upshot is that lesser concern with omissions arises automatically, without the use of controlled cognition. People engage in such cognition to overcome automatic judgment processes in order to condemn harmful omissions. Hence “controlled cognition is associated not with conforming to the omission effect but with overriding it,”12 and “the more a person judges harmful omissions on parity with harmful actions, the more they engage cognitive control during the judgment of omissions.”13

  Social Emotions and Utilitarianism

  The ventromedial prefrontal cortex (VMPC) is a region of the brain that is necessary for social emotions, such as compassion, shame, and guilt.14 Patients with VMPC lesions show reductions in these emotions and reduced emotional receptivity in general. Researchers predicted that such patients would show an unusually high rate of utilitarian judgments in moral scenarios that typically trigger strong emotions—such as the footbridge problem. The prediction turned out to be correct.15 Those with damage to the VMPC engaged in utilitarian reasoning in responding to problems of that kind.

  This finding is consistent with the view that deontological reasoning is a product of negative emotional responses.16 By contrast, consequentialist reasoning, reflecting a kind of cost-benefit analysis, is subserved by the dorsolateral prefrontal cortex, which shares responsibility for cognitive functions. Damage to the VMPC predictably dampens the effects of emotions and leads people to engage in an analysis of likely effects of different courses of action. Similarly, people with frontotemporal dementia are believed to suffer from “emotional blunting”—and they are especially likely to favor action in the footbridge problem.17 According to an admittedly controversial interpretation of these findings, “patients with emotional deficits may, in some contexts, be the most pro-social of all.”18

  Behavioral Evidence and Deontology

  A great deal of behavioral evidence also suggests that deontological thinking is associated with System 1 and in particular with emotions.

  Words or Pictures?

  People were tested to see if they had a visual or verbal cognitive style—that is, to see whether they performed better with tests of visual accuracy than with tests of verbal accuracy.19 The authors hypothesized that because visual representations are more emotionally salient, those who do best with verbal processing would be more likely to support utilitarian judgments, and those who do best with visual processing would be more likely to support deontological judgments. The hypothesis was confirmed. Those with more visual cognitive styles were more likely to favor deontological approaches.

  People’s self-reports showed that their internal imagery—that is, what they visualized—predicted their judgments in the sense that those who “saw” pictures of concrete harm were significantly more likely to favor deontological approaches. In the authors’ words, “visual imagery plays an important role in triggering the automatic emotional responses that support deontological judgments.”20

  The Effects of Cognitive Load

  What are the effects of cognitive load? If people are asked to engage in tasks that are cognitively difficult, such that they have less “space” for complex processing, what happens to their moral judgments? The answer is clear: an increase in cognitive load interferes with consequentialist (utilitarian) moral judgment but has no such effect on deontological approaches.21 This finding strongly supports the view that consequentialist judgments are cognitively demanding and that deontological judgments are relatively easy and automatic.22

  Priming System 2

  The cognitive reflection test (CRT) asks a series of questions that elicit answers that fit with people’s intuitions but that turn out to be wrong. Here is one such question: A bat and a ball cost $1.10 in total. The bat costs a dollar more than the ball. How much does the ball cost? In response, most people do not give the correct answer, which is five cents. They are more likely to offer the intuitively plausible answer, which is ten cents. Those who take the CRT tend to learn that they often give an immediate answer that turns out, on reflection, to be incorrect. If people take the CRT before engaging in some other task, they will be “primed” to question their own intuitive judgments. What is the effect of taking the CRT on moral judgments?
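The bat-and-ball arithmetic can be checked directly. A minimal sketch (the variable names are illustrative, not from the text): if the ball costs b, the bat costs b + 1.00, so b + (b + 1.00) = 1.10, which gives b = 0.05.

```python
# Bat-and-ball problem: bat + ball = $1.10, and the bat costs $1.00 more.
# Let b be the ball's price: b + (b + 1.00) = 1.10  =>  2b = 0.10  =>  b = 0.05.
from fractions import Fraction  # exact arithmetic avoids float rounding

total = Fraction(110, 100)   # $1.10 together
difference = Fraction(1, 1)  # bat costs $1.00 more than the ball

ball = (total - difference) / 2
bat = ball + difference

print(f"ball = ${float(ball):.2f}, bat = ${float(bat):.2f}")
# ball = $0.05, bat = $1.05

# The intuitive answer of ten cents fails the constraint:
# 0.10 + 1.10 = 1.20, not 1.10.
assert bat + ball == total
assert bat - ball == difference
```

The intuitive answer (ten cents) satisfies only the first condition; checking it against the second constraint is exactly the kind of deliberate, System 2 step the CRT is designed to prompt.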

  The answer is clear: those who take the CRT are more likely to reject deontological thinking in favor of utilitarianism.23 Consider the following dilemma:

  John is the captain of a military submarine traveling underneath a large iceberg. An onboard explosion has caused the vessel to lose most of its oxygen supply and has injured a crewman who is quickly losing blood. The injured crewman is going to die from his wounds no matter what happens.

  The remaining oxygen is not sufficient for the entire crew to make it to the surface. The only way to save the other crew members is for John to shoot dead the injured crewman so that there will be just enough oxygen for the rest of the crew to survive.

  Is it morally acceptable for John to shoot the injured crewman? Those who took the CRT before answering that question were far more likely to find that action morally acceptable.24 Across a series of questions, those who took the CRT became significantly more likely to support consequentialist approaches to social dilemmas.

  Is and Ought

  The evidence just outlined is consistent with the proposition that deontological intuitions are mere heuristics, produced by the automatic operations of System 1. The basic picture would be closely akin to the corresponding one for questions of fact. People use mental shortcuts, or rules of thumb, that generally work well but can also lead to systematic errors.

  To be sure, the neuroscientific and psychological evidence is preliminary and suggestive but no more. Importantly, we do not have much cross-cultural evidence. Do people in diverse nations and cultures show the same kinds of reactions to moral dilemmas? Is automatic processing associated with deontological approaches only in certain nations and cultures? Do people in some nations show automatic moral disapproval (perhaps motivated by disgust) of practices and judgments that seem tolerable or even appropriate elsewhere? Where and how does deontology matter? There may not be simple answers to such questions; perhaps some deontological reactions are hardwired and others are not.

  Moreover, deontological intuitions and judgments span an exceedingly wide range. They are hardly exhausted by the trolley and footbridge problems and by related hypothetical questions, and if deontological intuitions are confused or unhelpful in resolving such problems, deontology would not stand defeated by virtue of that fact. Consider, for example, retributive theories of punishment; autonomy-based theories of freedom of speech and religion; bans on slavery and torture, grounded in principles of respect for persons; and theories of tort law and contract law that are rooted in conceptions of fairness. We do not have neuroscientific or psychological evidence with respect to the nature and role of deontological thinking in the wide assortment of moral, political, and legal problems for which deontological approaches have been proposed or defended. Perhaps System 2, and not System 1, is responsible for deontological thinking with respect to some of those problems. It is certainly imaginable, however, that neuroscientific or psychological evidence will eventually find that automatic processing supports deontological thinking across a wide range of problems.

  The Central Objection

  Even if this is so, the proposition that deontology is a heuristic (in the sense in which I am using that term) runs into a serious and immediate objection. For factual matters, we have an independent standard by which to assess the question of truth. Suppose that people think that more people die as a result of homicide than suicide. The facts show that people’s judgments, influenced by the availability heuristic, are incorrect. But if people believe that torture is immoral even if it has good consequences, we do not have a self-evidently correct independent standard to demonstrate that they are wrong.

  To be sure, a great deal of philosophical work attempts to defend some version of consequentialism. But deontologists reject the relevant arguments. They do so for what they take to be good reasons, and they elaborate those reasons in great detail. With respect to facts, social scientists can show that certain rules of thumb produce errors; the same cannot be said for deontology. For this reason, deontological judgments may not be a mental shortcut at all. Even if automatic processing gives them a head start, they may ultimately be the product of a long and successful journey.
