How Change Happens
Or consider the representativeness heuristic, in accordance with which judgments of probability are influenced by assessments of resemblance (the extent to which A “looks like” B). The representativeness heuristic is famously exemplified by people’s answers to questions about the likely career of a hypothetical woman named Linda, described as follows: “Linda is 31 years old, single, outspoken, and very bright. She majored in philosophy. As a student, she was deeply concerned with issues of discrimination and social justice and also participated in antinuclear demonstrations.”6 People were asked to rank, in order of probability, eight possible futures for Linda. Six of these were fillers (such as psychiatric social worker, elementary school teacher); the two crucial ones were “bank teller” and “bank teller and active in the feminist movement.”
Most people said that Linda was less likely to be a bank teller than to be a bank teller and active in the feminist movement. This is an obvious mistake, a conjunction error, in which characteristics A and B are thought to be more likely than characteristic A alone. The error stems from the representativeness heuristic: Linda’s description seems to match “bank teller and active in the feminist movement” far better than “bank teller.” In an illuminating reflection on the example, Stephen Jay Gould observes that “I know [the right answer], yet a little homunculus in my head continues to jump up and down, shouting at me—‘but she can’t just be a bank teller; read the description.’”7 Because Gould’s homunculus is especially inclined to squawk in the moral domain, I shall return to him on several occasions.
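The logic behind the conjunction error can be made concrete with a small sketch. The probabilities below are illustrative assumptions, not data from the study; the point is only that a conjunction can never be more probable than either of its conjuncts alone.

```python
# Conjunction rule: for any events A and B, P(A and B) <= P(A).
# Illustrative (made-up) probabilities for the Linda problem.
p_teller = 0.05                 # assumed P(Linda is a bank teller)
p_feminist_given_teller = 0.2   # assumed P(active feminist | bank teller)

# P(bank teller AND active feminist) = P(teller) * P(feminist | teller)
p_teller_and_feminist = p_teller * p_feminist_given_teller

# Whatever values we pick, the conjunction cannot exceed the conjunct:
# multiplying by a probability (a number between 0 and 1) never increases it.
assert p_teller_and_feminist <= p_teller
```

Whatever one assumes about Linda, ranking the conjunction above the single attribute violates this rule, which is why the answer counts as a demonstrable error rather than a difference of opinion.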
With respect to moral heuristics, existing work is suggestive rather than definitive; a great deal of progress remains to be made, above all through additional experimental work on moral judgments. Some of the moral heuristics that I shall identify might reasonably be challenged as subject to ad hoc rather than predictable application. One of my primary hopes is to help stimulate further research exploring when and whether people use moral heuristics that produce sense or nonsense in particular cases.
Attribute Substitution and Prototypical Cases
Kahneman and Shane Frederick suggest that heuristics are mental shortcuts used when people are interested in assessing a “target attribute” and substitute a “heuristic attribute” of the object, which is easier to handle.8 Heuristics therefore operate through a process of attribute substitution. The use of heuristics gives rise to intuitions about what is true,9 and these intuitions sometimes are biased, in the sense that they produce errors in a predictable direction. Consider the question whether more people die from suicides or homicides. Lacking statistical information, people might respond by asking whether it is easier to recall cases in either class (the availability heuristic). The approach is hardly senseless, but it might also lead to errors, a result of availability bias in the domain of risk perception. Sometimes heuristics are linked to affect, understood as an emotional reaction, and indeed affect has even been seen as a heuristic, by which people evaluate products or actions by reference to the affect that they produce.10 But attribute substitution is often used for factual questions that lack an affective component.
Similar mechanisms are at work in the moral, political, and legal domains. Unsure what to think or do about a target attribute (what morality requires, what the law is), people might substitute a heuristic attribute instead—asking, for example, about the view of trusted authorities (a leader of the preferred political party, an especially wise judge, a religious figure). Often the process works by appeal to prototypical cases. Confronted by a novel and difficult problem, observers often ask whether it shares features with a familiar problem. If it seems to do so, then the solution to the familiar problem is applied to the novel and difficult one. It is possible that in the domain of values, as well as facts, real-world heuristics generally perform well in the real world—so that moral errors are reduced, not increased, by their use, at least compared to the most likely alternatives (see my remarks on rule utilitarianism below). The only claim here is that some of the time, our moral judgments can be shown to misfire.
The principal heuristics should be seen in light of dual-process theories of cognition.11 Recall that System 1 is intuitive; it is rapid, automatic, and effortless (and it features Gould’s homunculus). System 2, by contrast, is reflective; it is slower, self-aware, calculative, and deductive. System 1 proposes quick answers to problems of judgment; System 2 operates as a monitor, confirming or overriding those judgments. Consider, for example, someone who is flying from New York to London in the month after an airplane crash. This person might make a rapid, barely conscious judgment, rooted in System 1, that the flight is quite risky; but there might well be a System 2 override, bringing a more realistic assessment to bear. System 1 often has an affective component, but it need not; for example, a probability judgment might be made quite rapidly and without much affect at all.
There is growing evidence that people often make automatic, largely unreflective moral judgments for which they are sometimes unable to give good reasons.12 Moral, political, or legal judgments often substitute a heuristic attribute for a target attribute; System 1 is operative here as well, and it may or may not be subject to System 2 override. Consider the incest taboo. People feel moral revulsion toward incest even in circumstances in which the grounds for that taboo seem to be absent; they are subject to “moral dumbfounding”13—that is, an inability to give an account of a firmly held intuition. It is plausible, at least, to think that System 1 is driving their judgments, without System 2 correction. The same is true in legal and political contexts as well.
Heuristics and Morality
To show that heuristics operate in the moral domain, we have to specify some benchmark by which we can measure moral truth. On these questions, I want to avoid any especially controversial claims. Whatever one’s view of the foundations of moral and political judgments, I suggest, moral heuristics are likely to be at work in practice.
In this section, I begin with a brief account of the possible relationship between ambitious theories (understood as large-scale accounts of the right or the good) and moral heuristics. I suggest that for those who accept ambitious theories about morality or politics, it is tempting to argue that alternative positions are mere heuristics; but this approach is challenging, simply because any ambitious theory is likely to be too contentious to serve as the benchmark for measuring moral truth. (I will have more to say on this topic in the next chapter.) It is easiest to make progress not by opposing (supposedly correct) ambitious theories to (supposedly blundering) commonsense morality, but in two more modest ways: first, by showing that moral heuristics are at work on any view about what morality requires; and second, by showing that such heuristics are at work on a minimally contentious view about what morality requires.
Some people are utilitarians; they believe that the goal should be to maximize utility. At the outset, there are pervasive questions about the meaning of that idea. In Bentham’s own words: “By utility is meant that property in any object, whereby it tends to produce benefit, advantage, pleasure, good, or happiness, (all this in the present case comes to the same thing) or (what comes again to the same thing) to prevent the happening of mischief, pain, evil, or unhappiness to the party whose interest is considered.”14 Admittedly, those words leave many questions unanswered; but it is not my goal to answer them here. Let us simply note that many utilitarians, including John Stuart Mill and Henry Sidgwick, have a more capacious understanding of utility than did Bentham, and argue that ordinary morality is based on simple rules of thumb that generally promote utility but that sometimes misfire.15 For example, Mill emphasizes that human beings “have been learning by experience the tendencies of actions” so that the “corollaries from the principle of utility” are being progressively captured by ordinary morality.16
With the aid of modern psychological findings, utilitarians might be tempted to argue that ordinary morality is simply a series of heuristics for what really matters, which is utility. They might contend that ordinary moral commitments are a set of mental shortcuts that generally work well, but that also produce severe and systematic errors from the utilitarian point of view. Suppose that most people reject utilitarian approaches to punishment and are instead committed to retributivism (understood, roughly, as an approach that sees punishment as what is morally deserved, rather than as an effort to maximize utility); this is their preferred theory. Are they responding to System 1? Might they be making a cognitive error? (Is Kantianism a series of cognitive errors? See chapter 15.) Note that with respect to what morality requires, utilitarians frequently agree with their deontological adversaries about concrete cases; they can join in accepting the basic rules of criminal and civil law. When deontologists and others depart from utilitarian principles, perhaps they are operating on the basis of heuristics that usually work well but that sometimes misfire.
But it is exceedingly difficult to settle large-scale ethical debates in this way. In the case of many ordinary heuristics, based on availability and representativeness, a check of the facts or of the elementary rules of logic will show that people err. In the moral domain, this is much harder to demonstrate. To say the least, those who reject utilitarianism are not easily embarrassed by a demonstration that their moral judgments can lead to reductions in utility. For example, utilitarianism is widely challenged by those who insist on the importance of distributional considerations. It is far from clear that a moderate utility loss to those at the bottom can be justified by a larger utility gain for many at the top.
Emphasizing the existence of moral heuristics, those who reject utilitarianism might well turn the tables on their utilitarian opponents. They might contend that the rules recommended by utilitarians are consistent, much of the time, with what morality requires—but also that utilitarianism, taken seriously, produces serious mistakes in some cases. In this view, utilitarianism is itself a heuristic, one that usually works well but leads to systematic errors. And indeed, many debates between utilitarians and their critics involve claims, by one or another side, that the opposing view usually produces good results but also leads to severe mistakes and should be rejected for that reason.
These large debates are not easy to resolve, simply because utilitarians and deontologists are most unlikely to be convinced by the suggestion that their defining commitments are mere heuristics. Here there is a large difference between moral heuristics and the heuristics uncovered in the relevant psychological work, where the facts or simple logic provide a good test for whether people have erred. If people tend to think that more words in a given space end with the letters ing than have n in the next-to-last position, something has clearly gone wrong. If people think that Linda is more likely to be “a bank teller who is active in the feminist movement” than a “bank teller,” there is an evident problem. If citizens of France think that New York University is more likely to have a good basketball team than St. Joseph’s University because they have not heard of the latter, then a simple examination of the record might show that they are wrong. In the moral domain, factual blunders and simple logic do not provide such a simple test.
Neutral Benchmarks and Weak Consequentialism
My goal here is therefore not to show, with Sidgwick and Mill, that commonsense morality is a series of heuristics for the correct general theory, but more cautiously that in many cases, moral heuristics are at work—and that this point can be accepted by people with diverse general theories, or with grave uncertainty about which general theory is correct. I contend that it is possible to conclude that a moral heuristic is at work without accepting any especially controversial normative claims. In several examples, that claim can be accepted without accepting any contestable normative theory at all. Other examples will require acceptance of what I shall call weak consequentialism, in accordance with which the social consequences of the legal system are relevant, other things being equal, to what law ought to be doing.
Weak consequentialists insist that consequences are what matter, but they need not be utilitarians; they do not have to believe that law and policy should attempt to maximize utility. Utilitarianism is highly controversial, and many people, including many philosophers, reject it. For one thing, there are pervasive questions about the meaning of the idea of utility. Are we speaking only of maximizing pleasure and minimizing pain? Doesn’t that seem to be a constricted account of what people do and should care about? Weak consequentialists think that it is. They are prepared to agree that whatever their effects on utility, violations of rights count among the consequences that ought to matter, so such violations play a role in the overall assessment of what should be done. Consider Amartya Sen’s frequent insistence that consequentialists can insist that consequences count without accepting utilitarianism and without denying that violations of rights are part of the set of relevant consequences. Thus Sen urges an approach that “shares with utilitarianism a consequentialist approach (but differs from it in not confining attention to utility consequences only)” while also attaching “intrinsic importance to rights (but … not giving them complete priority irrespective of other consequences).”17 Weak consequentialism is in line with this approach. In evaluating decisions and social states, weak consequentialists might well be willing to give a great deal of weight to nonconsequentialist considerations.
Some deontologists, insistent on duties and rights, will reject any form of consequentialism altogether. They might believe, for example, that retribution is the proper theory of punishment and that the consequences of punishment are never relevant to the proper level of punishment. Some of my examples will be unpersuasive to deontologists who believe that consequences do not matter at all. But weak consequentialism seems to me sufficiently nonsectarian, and attractive enough to sufficiently diverse people, to make plausible the idea that in the cases at hand, moral heuristics are playing a significant role. And for those who reject weak consequentialism, it might nonetheless be productive to ask whether, from their own point of view, certain rules of morality and law are reflective of heuristics that sometimes produce serious errors.
Evolution and Rule Utilitarianism: Simple Heuristics That Make Us Good?
Two clarifications before we proceed. First, some moral heuristics might well have an evolutionary foundation.18 Perhaps natural selection accounts for automatic moral revulsion against incest or cannibalism, even if clever experiments, or life, can produce situations in which the revulsion is groundless. In the case of incest, the point is straightforward: the automatic revulsion might be far more useful, from the evolutionary perspective, than a more fine-grained evaluation of contexts.19 In fact an evolutionary account might be provided for most of the heuristics that I explore here. When someone has committed a harmful act, evolutionary pressures might well have inculcated a sharp sense of outrage and a propensity to react in proportion to it. As a response to wrongdoing, use of an outrage heuristic might well be much better than an attempt at any kind of consequentialist calculus, weak or strong. Of course many moral commitments are a product not of evolution but of social learning and even cascade effects;20 individuals in a relevant society will inevitably be affected by a widespread belief that it is wrong to tamper with nature, and evolutionary pressures need not have any role at all.
Second, and relatedly, some or even most moral heuristics might have a rule-utilitarian or rule-consequentialist defense.21 The reason is that in most cases they work well despite their simplicity, and if people attempted a more fine-grained assessment of the moral issues involved, they might make more moral mistakes rather than fewer (especially because their self-interest is frequently at stake). Simple but somewhat crude moral principles might lead to less frequent and less severe moral errors than complex and fine-grained moral principles. Compare the availability heuristic. Much of the time, use of that heuristic produces speedy judgments that are fairly accurate, and those who attempt a statistical analysis might make more errors (and waste a lot of time in the process). If human beings use “simple heuristics that make us smart,”22 then they might also use “simple heuristics that make us good.” I will offer some examples in which moral heuristics seem to me to produce significant errors for law and policy, but I do not contend that we would be better off without them. On the contrary, such heuristics might well produce better results, from the moral point of view, than the feasible alternatives—a possibility to which I will return.
The Asian Disease Problem and Moral Framing
In a finding closely related to their work on heuristics, Kahneman and Tversky themselves find “moral framing” in the context of what has become known as “the Asian disease problem.”23 Framing effects do not involve heuristics, but because they raise obvious questions about the rationality of moral intuitions, they provide a valuable backdrop. Here is the first component of the problem:
Imagine that the US is preparing for the outbreak of an unusual Asian disease, which is expected to kill six hundred people. Two alternative programs to combat the disease have been proposed. Assume that the exact scientific estimates of the consequences are as follows:
If Program A is adopted, two hundred people will be saved.
If Program B is adopted, there is a one-third probability that six hundred people will be saved and a two-thirds probability that no people will be saved.
Which of the two programs would you favor?
Most people choose Program A.
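What makes the choice pattern striking is that the two programs are identical in expected lives saved, as the problem’s own numbers show. A quick check of the arithmetic:

```python
# Expected lives saved under each program in the Asian disease problem.
saved_A = 200                         # Program A: 200 saved with certainty
saved_B = (1 / 3) * 600 + (2 / 3) * 0  # Program B: 1/3 chance all 600 saved

# Both programs save 200 lives in expectation; the gamble adds only variance.
assert saved_A == saved_B
```

So a preference for Program A here cannot rest on expected outcomes alone; it reflects risk aversion under a "lives saved" frame, which is what the framing experiment goes on to probe.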