
The Enigma of Reason: A New Theory of Human Understanding


by Hugo Mercier and Dan Sperber


  Not quite. In every one of these experiments, more reasoning led to worse decisions. For instance, in Tim Wilson’s experiment, the participants were given the poster they had ranked higher to take home. Asked a few weeks later about their appreciation of the poster, those who had had to explain their preferences were less satisfied than those who had relied on their unfiltered intuitions.

  To understand why reason can mess up people’s decisions even when the myside bias is not the main culprit, we must look more precisely at how reason affects decisions.

  Itamar Simonson performed an early experiment on this topic.8 He started by designing two products—for example, beers—that would be equally preferred by most people. Let’s call the first brand Beer Deluxe. It’s a fancy product, with a quality rating of 75 out of 100, worth $18 for a six-pack. The second is Beeros, a less sophisticated—rating at 65—but cheaper—$12—alternative. When people had to choose between these two brands, they were indifferent, picking either beer about as often. Then the experimenter introduced a third brand, Premium Beer. At $18, Premium Beer is as expensive as Beer Deluxe, but it is also less well rated—a 70 out of 100. Given that Premium Beer is simply inferior to Beer Deluxe, it should not make a difference in people’s choices. In fact, it did: once Premium Beer was introduced, people were more likely to pick Beer Deluxe.
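The logic of Simonson's decoy can be sketched in a few lines of code. This is an illustrative sketch, not part of the original study: it uses the quality ratings and prices from the text and a hypothetical `dominates` helper to show that Premium Beer is strictly worse than Beer Deluxe, while neither of the two original options dominates the other.

```python
# Illustrative sketch of the "attraction effect" setup, using the numbers
# from the text. One option dominates another if it is at least as good on
# every attribute and strictly better on at least one.

def dominates(a, b):
    """True if option a dominates option b (higher quality, lower price)."""
    at_least_as_good = a["quality"] >= b["quality"] and a["price"] <= b["price"]
    strictly_better = a["quality"] > b["quality"] or a["price"] < b["price"]
    return at_least_as_good and strictly_better

beer_deluxe = {"quality": 75, "price": 18}
beeros      = {"quality": 65, "price": 12}
premium     = {"quality": 70, "price": 18}

# Neither original option dominates the other: the choice is a trade-off.
assert not dominates(beer_deluxe, beeros) and not dominates(beeros, beer_deluxe)

# The decoy is dominated by Beer Deluxe, so it should be irrelevant...
assert dominates(beer_deluxe, premium)
# ...yet its presence shifts choices toward Beer Deluxe, the option
# that has become easy to justify.
```

The point of the sketch is that adding a dominated option changes nothing about which trade-offs are available, so by any standard account of rational choice it should not move the needle.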

  Christopher Hsee conducted one of the most original experiments in the area. He asked participants which of two treats they would prefer to receive as a gift for having completed a task. Both gifts were chocolates, but one was a small (0.5 ounce), cheap ($0.50), heart-shaped chocolate while the other was a big (2 ounces), expensive ($2), roach-shaped chocolate. When participants relied more on their feelings, they were about evenly split between the two options. But when they reasoned their way to a decision, most picked the big roach-shaped chocolate.9

  Debora Thompson explored the phenomenon of feature creep: the multiplication of useless features that burdens so many gadgets and, in the end, reduces their usability. With her colleague Michael Norton, she showed that when people feel they must provide reasons for their decisions, they are more likely to pick a feature-rich item—such as a digital video player with dozens of functions—even though they realize it would be less convenient to use.10

  Here’s the common thread in all these results: in each case, reason drives participants toward the decision that is easier to justify. “Beer Deluxe is better but not more expensive than Premium Beer, so I’ll pick it.” “Given it’s a gift, it would be irrational not to pick the bigger and more expensive chocolate just because of its shape—it’s not as if it was a real roach anyway.” “Why buy a digital video player that does fewer things?”

  This common phenomenon is known as reason-based choice: when people have weak or conflicting intuitions, reason drives them toward the option for which it is easiest to find reasons—the decision they can best justify.

  Paying for Reasons

  While these results are difficult to reconcile with the intellectualist theory—reason should lead people to better decisions, not to worse decisions—they are what the interactionist approach predicts. Reason doesn’t stop being a social device in the absence of a point of view to uphold. Instead, it samples potential reasons for the different options available and drives the reasoner toward the decision that is the easiest to justify—whether or not it is otherwise a good decision.

  In many cases, it looks as if reasoning is driving people toward worse, less rational decisions. The introduction of an obviously inferior option—Premium Beer—should not influence the decision between two superior options. Psychologists studying disgust can tell you that people will not enjoy eating that roach-shaped chocolate, however big.11 A gadget bloated with useless features will become a source of anxiety, not enjoyment.12 Refusing to buy a jam simply because there are more jams to pick from doesn’t make much sense.

  Even more strikingly, people are willing to pay simply to have a reason for their decision. Amos Tversky and Eldar Shafir, who were among the first, with Itamar Simonson, to explore reason-based choice, asked a first group of participants to imagine the following scenario:

  You have just taken a tough qualifying examination. It is the end of the fall quarter, you feel tired and run-down, and you are not sure that you passed the exam. In case you failed, you have to take the exam again in a couple of months—after the Christmas holidays. You now have an opportunity to buy a very attractive 5-day Christmas vacation package in Hawaii at an exceptionally low price. The special offer expires tomorrow, while the exam grade will not be available until the following day.

  Would you

  x. Buy the vacation package.

  y. Not buy the vacation package.

  z. Pay a $5 nonrefundable fee in order to retain the rights to buy the vacation package at the same exceptional price the day after tomorrow—after you find out whether or not you passed the exam.13

  A second group of participants was asked to imagine that they had passed the exam and a third group that they had failed.

  Most of the participants told that they had passed the exam decided to buy the vacation package—they reasoned that it was a well-deserved reward for their success. Most of the participants told that they had failed the exam also decided to buy the vacation package—they reasoned that they direly needed a break to recover from this failure.

  Combined, these two results imply that it would be rational for most participants in the first group, who didn’t yet know whether they had passed or failed, to buy the package rather than waste five dollars to postpone their decision. Whether they had passed or failed, they would buy it. Yet most participants in this group chose to pay the fee and wait a couple of days to learn the exam results. Their problem: the reasons for buying the package were incompatible, one being “I deserve a reward for success” and the other “I need a break after failure.” And so they paid to wait, effectively buying a reason for a decision they would have made either way.
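The sure-thing logic behind Tversky and Shafir's result can be made explicit with a small sketch. This is an illustration, not the authors' formalism: if the decision comes out the same under every possible exam outcome, then the outcome is irrelevant to the decision, and paying $5 to wait buys nothing but a reason.

```python
# Sketch of the "sure-thing" reasoning. Per the experiment, most
# participants buy the package under either known outcome.

def decision(exam_result):
    if exam_result == "pass":
        return "buy"   # "a well-deserved reward for success"
    if exam_result == "fail":
        return "buy"   # "I need a break to recover"

# Both branches agree, so the choice under uncertainty should be the same:
outcomes = {decision("pass"), decision("fail")}
decision_if_unknown = outcomes.pop() if len(outcomes) == 1 else "wait"
assert decision_if_unknown == "buy"  # yet many participants paid $5 to wait
```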

  Social Rationality

  When assessing decisions, it might seem clear that we should focus on the fit between the content of our decisions and our practical goals. We should buy posters we will enjoy more. When buying electronic devices, we should pick a model that best meets our needs. However, if being rational is striving for the best decision, all things considered (and not just our practical goals), then making a good decision gets more complex.

  Humans constantly evaluate one another. Are the people we interact with competent and reliable? Is their judgment sound? As we argued in Chapter 7, much of this evaluation is done in terms of reasons: we understand others’ ideas and actions by attributing to them reasons, we evaluate the goodness of these reasons, and we evaluate people’s reliability on the basis of their reasons. The way we rely on reasons may distort and exaggerate their role in thought and action, but it is not as if a better, more objective understanding were readily available. After all, psychologists themselves are still striving to develop such an understanding, and they disagree as to what it would look like. Reason-based understanding, for all its shortcomings, has the advantage of addressing two main concerns: providing a basis for an evaluation of people’s performance, and providing an idiom to express, share, and discuss these evaluations.

  Just as we evaluate others, they evaluate us. It is important to us that they should form a good opinion of us: this will make them more willing to cooperate and less inclined to act against us. Given this, it is desirable to act efficiently not only in order to attain our goals but also in order to secure a good reputation. Our reasons for acting the way we do shouldn’t just be good reasons; they should be reasons that are easily recognized as good.

  In some situations, our best personal reasons might be too complicated, or they might go against common wisdom, and hence be detrimental to our reputation. In such a case, it may be more advantageous to make a less-than-optimal choice that is easier to justify than an optimal choice that will be seen as incompetent. We might lose in terms of the practical payoff but score social points, yielding a higher overall payoff.

  Reason influences our decisions in the direction of reputational gains. For instance, those participants who picked Beer Deluxe because it was the easiest decision to justify may not have maximized their product satisfaction, but they scored social points: their decision was the least likely to be criticized by others.14 Customers who ended up with a device burdened with useless features are (ironically) regarded as technologically savvy.15 Trying to look rational, even at the price of some practical irrationality, may be the most rational thing to do.

  In the type of choices we have examined in this chapter, people’s intuitions are generally weak. Having such weak intuitions is often a reliable sign that the decision at issue is not so important. After all, when it comes to dealing with the most pressing aspects of our ancestral environment, specific cognitive mechanisms are likely to have evolved and to provide us with strong intuitions. So, when our intuitions are weak, being guided by how easy it is to justify a particular decision is, in general, a simple and reasonable heuristic.

  Our environments, however, have changed so much in the past millennia, centuries, and even decades that having weak intuitions is no longer such a reliable indication of the true importance of a decision. For most people, for instance, buying a car is an important decision. Much of their money goes into buying and maintaining a car; much of their time goes into using it; and their life may depend on its safety features. There are no evolved mechanisms for choosing cars in the way we have dedicated mechanisms aimed at selecting safe foods or reliable friends. As a result, intuitions give only weak and limited guidance. Does this mean that looking for an easily justifiable choice in these evolutionarily novel situations—choosing a car that is popular and well-reviewed, for instance—is unreasonably risky? Not really. In most cases, the decisions that are the easiest to justify in the eyes of others and hence that are the most likely to contribute to our reputation are also the best decisions to achieve our goals.

  When the reasons that are recognized as good in a given community are objectively good reasons, people guided by reputational concerns may still arrive at true beliefs and effective decisions. But this is not always the case—far from it. Throughout the centuries, smart physicians felt justified in making decisions that cost patients their lives. A misguided understanding of physiology such as Galen’s theory of humors created a mismatch between the decisions that were easiest to justify—say, bleeding the patient to restore the balance of humors—and the decisions that actually helped: the condition of some patients deteriorated after being bled, a fact that must have given pause to some of these physicians. Still, if they were eager to maintain their reputation, they were better off bleeding their patients, and anyhow, there was no clear alternative. By contrast, today’s doctors, relying on vastly improved, evidence-based medical knowledge, may make decisions guided in good part by a sense of what the medical community would approve and, in so doing, preserve both their reputation and the health of their patients.

  When Justification and Argumentation Diverge

  The message of this chapter might seem bleak. Reason improves our social standing rather than leading us to intrinsically better decisions. And even when it leads us to better decisions, it’s mostly because we happen to be in a community that favors the right type of decisions on the issue. This, however, cannot be the whole picture. Justifications in terms of reasons do indeed involve deference to common wisdom or to experts. What implicitly justifies this deference, however, is the presumption that the community or the experts are better at producing good reasons. But there is a potential for tension between the lazy justification provided by socially recognized “good reasons” and an individual effort to better understand and evaluate these reasons, to acquire some expertise oneself.

  At the beginning of the nineteenth century, for instance, the doctor François-Joseph-Victor Broussais was the most respected medical authority in Paris. He insisted that all fevers are caused by inflammation and should be treated by bloodletting. The younger doctor Pierre-Charles-Alexandre Louis didn’t really doubt the efficacy of bloodletting, but he wanted to evaluate it precisely. To do so, he compared two groups of patients who had been bled for pneumonia and discovered that, contrary to his expectations, those who had been bled early in their illness had died in greater numbers than those who had been bled late, showing that bloodletting not only had failed to cure them but had worsened their condition. Louis now had compelling evidence and arguments, if not against bloodletting in general, at least against the systematic use recommended by Broussais. Louis’s pioneering work in evidence-based medicine played a crucial role in the progressive abandonment of bloodletting as a major medical procedure in the nineteenth century. In criticizing the overextended practice of bloodletting, Louis took an immediate reputational risk, but precisely because he had good arguments, his ideas prevailed in the end and his reputation grew.16

  It would be nice to think that, when there is a conflict between the goal of having good reasons in the eyes of others and that of having demonstrably good reasons, argumentative strength trumps ease of immediate justification and that the best reasons ultimately win. Well, things are somewhat more complicated.

  Consider the following scenario:

  You have a ticket to a basketball game in a city sixty miles from your home. The day of the game there is a major snowstorm, and the roads are very bad. Are you more likely to go to the game if:

  a. You paid $35 for the ticket.

  b. You got the ticket for free.

  c. Equally likely.17

  Most people answer that they would be more likely to face the snowstorm if they had bought the ticket than if they had got it for free. According to psychologists Hal Arkes and Peter Ayton, this decision is based on a reason such as “wasting is bad.”18 Most people would understand better why someone would brave a snowstorm for a ticket they bought than for one they got for free; they might even disapprove of somebody who bought the ticket and didn’t go.

  Economists call the price of the ticket in this situation a “sunk cost.” The money has already been spent and cannot be retrieved. It is as good as sunk. Decisions are about the future, which can be altered, not about the past, which cannot. The only question that really matters, then, is: Would you be better off now facing the snowstorm to get to the game, or doing something else? If you would be better off doing something else, then undertaking an unpleasant and potentially dangerous drive simply makes you worse off. People who accept this argument should, it seems, answer that whether they bought the ticket or got it for free would not affect their decision to go or not to go to the game.
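The forward-looking comparison described above can be put in code. This is an illustrative sketch with made-up utility numbers, not anything from the original study: the ticket price appears as a parameter but is deliberately ignored, because it is already spent in both versions of the scenario and therefore cancels out of the comparison.

```python
# Minimal sketch of sunk-cost-free reasoning for the basketball scenario.
# Only future costs and benefits should drive the decision; the utilities
# below are made-up illustrations.

def should_go(enjoyment, drive_cost, ticket_price_paid):
    # ticket_price_paid is a sunk cost: deliberately unused.
    return enjoyment - drive_cost > 0

# Whether the ticket cost $35 or was free, the forward-looking
# answer is identical.
assert should_go(enjoyment=50, drive_cost=80, ticket_price_paid=35) == \
       should_go(enjoyment=50, drive_cost=80, ticket_price_paid=0)
```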

  The argument against this so-called sunk-cost fallacy is clear, then. It even has the backing of many philosophers and economists. If, however, you are convinced by the argument and decide, say, to stay at home in spite of having paid for the ticket, you might be judged harshly by people who are not aware of the argument, and you might not have the opportunity to explain the reasons for your choice. So you might, ironically, be seen as making an irrational decision when in fact you made a decision based on a sound reason. To the extent that you care what these people think of you, their judgment will pull you toward making the socially acceptable decision.19 (Mind you, if you are a student in economics and make a decision based on a sunk cost so as not to be judged harshly by your family, you may end up being deemed incompetent by your fellow economics students.)

  But why should the sunk-cost fallacy be common and indeed be seen not as fallacious but as the right thing to do? Here is a speculative answer. One of the qualities people look for in friends, partners, or collaborators is dependability. Some degree of stubbornness in carrying through any decision made, in pursuing any course of action undertaken, even when the expectation of benefits is revised down, gives others evidence that one can be relied upon. People who persevere in their undertakings even when it might not be optimally rational from their individual point of view may, in doing so, strengthen their reputation for reliability. It may be rational, then, at least in some cases, not just to knowingly commit the sunk-cost fallacy in order to signal to others that one can be counted upon but also to have a better opinion of people who commit the fallacy than of people who don’t.

  Attending to the interactional functions of reason not only makes better sense of it but also shows its limits. Justificatory and argumentative reasons are fundamental tools in human interaction, but which type of reason trumps the other when they diverge may depend not only on the quality of the reasons involved but also on the social, and in particular reputational, benefits at stake. The reason module cannot pretend to the commanding position that the classical approach assigned to capital-R Reason.

  15

  The Bright Side of Reason

  At the beginning of the movie 12 Angry Men (spoilers ahead), a youngster stands accused of stabbing his father to death. His life hangs in the balance: in the jury room, the arguments for conviction are piling up. One witness saw the boy do it; another heard the fight and saw the accused flee the apartment; the boy’s alibi doesn’t hold water; he has a motive and a long record of violence. Group polarization is lurking, ready to convince the jurors that the boy should be sent to the electric chair. But one juror is less confident than the others. While this juror is not convinced of the defendant’s innocence, he’s not quite sure of his guilt, either. He “just wants to talk.” When urged to provide arguments, he starts with a weak one: the evidence against the boy is too good; it’s suspiciously good. Unsurprisingly, this doesn’t sway any of the other jurors. From then on, however, this juror does a better job of poking holes in the prosecution’s case.

 
