The Undoing Project


by Michael Lewis


  It must have come as something of a relief to him, as he neared the end of Amos’s chapter on expected utility theory, to arrive at the following sentence: “Some people, however, remained unconvinced by the axioms.”

  One such person, the textbook went on to say, was Maurice Allais. Allais was a French economist who disliked the self-certainty of American economists. He especially disapproved of the growing tendency in economics, after von Neumann and Morgenstern built their theory, to treat a math model of human behavior as an accurate description of how people made choices. At a convention of economists in 1953, Allais offered what he imagined to be a killer argument against expected utility theory. He asked his audience to imagine their choices in the following two situations (the dollar amounts used by Allais are here multiplied by ten to account for inflation and capture the feel of his original problem):

  Situation 1. You must choose between having:

  1) $5 million for sure

  or this gamble

  2) An 89 percent chance of winning $5 million

  A 10 percent chance of winning $25 million

  A 1 percent chance to win zero

  Most people who looked at that, apparently including many of the American economists in Allais’s audience, said, “Obviously, I’ll take door number 1, the $5 million for sure.” They preferred the certainty of being rich to the slim possibility of being even richer. To which Allais replied, “Okay, now consider this second situation.”

  Situation 2. You must choose between having:

  3) An 11 percent chance of winning $5 million, with an 89 percent chance to win zero

  or

  4) A 10 percent chance of winning $25 million, with a 90 percent chance to win zero

  Most everyone, including American economists, looked at this choice and said, “I’ll take number 4.” They preferred the slightly lower chance of winning a lot more money. There was nothing wrong with this; on the face of it, both choices felt perfectly sensible. The trouble, as Amos’s textbook explained, was that “this seemingly innocent pair of preferences is incompatible with utility theory.” What was now called the Allais paradox had become the most famous contradiction of expected utility theory. Allais’s problem caused even the most cold-blooded American economist to violate the rules of rationality.*

  Amos’s introduction to mathematical psychology sketched the controversy and argument that had ensued after Allais posed his paradox. On the American end, the argument was spearheaded by a brilliant American statistician and mathematician named L. J. (Jimmie) Savage, who had made important contributions to utility theory and who admitted that he, too, had been suckered by Allais into contradicting himself. Savage found an even more complicated way to restate Allais’s gambles so that at least a few devotees of expected utility theory, himself included, looked at the second situation and picked option number 3 instead of option number 4. That is, he demonstrated—or thought he had demonstrated—that the Allais “paradox” was not a paradox at all, and that people behaved just as expected utility predicted they would behave. Amos, along with pretty much everyone else who took an interest in such things, remained dubious.

  As Danny read up on decision theory, Amos helped him to understand what was important about it and what was not. “He just had impeccable taste,” said Danny. “He knew what the problems were. He knew how to situate himself in the broad field. I didn’t have that.” What was important, Amos said, were the unresolved puzzles. “Amos said, ‘This is the story, this is the game. The game is to solve the Allais paradox.’”

  Danny wasn’t inclined to see the paradox as a problem of logic. It looked to him more like a quirk in human behavior. “I wanted to understand the psychology of what was going on,” he said. He sensed that Allais himself hadn’t given much thought to why people might choose in a way that violated the major theory of decision making. But to Danny the reason seemed obvious: regret. In the first situation people sensed that they would look back on their decision, if it turned out badly, and feel they had screwed up; in the second situation, not so much. Anyone who turned down a certain gift of $5 million would experience far more regret, if he wound up with nothing, than a person who turned down a gamble in which he stood a slight chance of winning $5 million. If people mostly chose option 1, it was because they sensed the special pain they would experience if they chose option 2 and won nothing. Avoiding that pain became a line item on the inner calculation of their expected utility. Regret was the ham in the back of the deli that caused people to switch from turkey to roast beef.

  Decision theory had approached the seeming contradiction at the heart of the Allais paradox as a technical problem. Danny found that silly: There was no contradiction. There was just psychology. The understanding of any decision had to account not just for the financial consequences but for the emotional ones, too. “Obviously it is not regret itself that determines decisions—no more than the actual emotional response to consequences ever determines the prior choice of a course of action,” Danny wrote to Amos, in one of a series of memos on the subject. “It is the anticipation of regret that affects decisions, along with the anticipation of other consequences.” Danny thought that people anticipated regret, and adjusted for it, in a way they did not anticipate or adjust for other emotions. “What might have been is an essential component of misery,” he wrote to Amos. “There is an asymmetry here, because considerations of how much worse things could have been is not a salient factor in human joy and happiness.”

  Happy people did not dwell on some imagined unhappiness the way unhappy people imagined what they might have done differently so that they might be happy. People did not seek to avoid other emotions with the same energy they sought to avoid regret.

  When they made decisions, people did not seek to maximize utility. They sought to minimize regret. As the starting point for a new theory, it sounded promising. When people asked Amos how he made the big decisions in his life, he often told them that his strategy was to imagine what he would come to regret, after he had chosen some option, and to choose the option that would make him feel the least regret. Danny, for his part, personified regret. Danny would resist a change to his airline reservations, even when the change made his life a lot easier, because he imagined the regret he would feel if the change led to some disaster. It’s not a stretch to say that Danny anticipated anticipating regret. He was perfectly capable of anticipating the regret provoked by events that might never occur and decisions that he might never need to make. Once, at a dinner with Amos and their wives, Danny went on at length and with great certainty about his premonition that his son, then still a boy, would one day join the Israeli military; that war would break out; and that his son would be killed. “What were the odds of all that happening?” said Barbara Tversky. “Minuscule. But I couldn’t talk him out of it. It was so unpleasant talking with him about these small probabilities that I just gave up.” It was as if Danny thought that by anticipating his feelings he might dull the pain they would inevitably bring.

  By the end of 1973, Amos and Danny were spending six hours a day with each other, either holed up in a conference room or on long walks across Jerusalem. Amos hated smoke; he hated being around people who smoked. Danny was still smoking two packs of cigarettes a day, and yet Amos never said a word. All that mattered was the conversation. When they weren’t with each other, they were writing memos to each other, to clarify and extend what had been said. If they happened to find themselves at the same social function, they inevitably wound up in the corner of a room, talking to each other. “We just found each other more interesting than anyone else,” said Danny. “Even if we had just spent the entire day working together.” They’d become a single mind, creating ideas about why people did what they did, and cooking up odd experiments to test them. For instance, they put this scenario to subjects:

  You have participated in a lottery at a fair, and have bought a single expensive ticket in the hope of winning the single large prize that is offered. The ticket was drawn blindly from a large urn, and its number is 107358. The results of the lottery are now announced, and it turns out that the winning number is 107359.

  They asked their subjects to rate their unhappiness on a scale from 1 to 20. Then they went to two other groups of subjects and gave them the same scenario, but with one change: the winning number. One group of subjects was told that the winning number was 207358; the second group was told that the winning number was 618379. The first group professed greater unhappiness than the second. Weirdly—but as Danny and Amos had suspected—the further the winning number was from the number on a person’s lottery ticket, the less regret they felt. “In defiance of logic, there is a definite sense that one comes closer to winning the lottery when one’s ticket number is similar to the number that won,” Danny wrote in a memo to Amos, summarizing their data. In another memo, he added that “the general point is that the same state of affairs (objectively) can be experienced with very different degrees of misery,” depending on how easy it is to imagine that things might have turned out differently.

  Regret was sufficiently imaginable that people conjured it out of situations they had no control over. But it was of course at its most potent when people might have done something to avoid it. What people regretted, and the intensity with which they regretted it, was not obvious.

  War and politics were never far from Amos and Danny’s minds or their conversations. They watched their fellow Israelis closely in the aftermath of the Yom Kippur war. Most regretted that Israel had been caught by surprise. Some regretted that Israel had not attacked first. Few regretted what both Danny and Amos thought they should most regret: the Israeli government’s reluctance to give back the territorial gains from the 1967 war. Had Israel given back the Sinai to Egypt, Sadat would quite likely never have felt the need to attack in the first place. Why didn’t people regret Israel’s inaction? Amos and Danny had a thought: People regretted what they had done, and what they wished they hadn’t done, far more than what they had not done and perhaps should have. “The pain that is experienced when the loss is caused by an act that modified the status quo is significantly greater than the pain that is experienced when the decision led to the retention of the status quo,” Danny wrote in a memo to Amos. “When one fails to take action that could have avoided a disaster, one does not accept responsibility for the occurrence of the disaster.”

  They set out to build a theory of regret. They were uncovering, or thought they were uncovering, what amounted to the rules of regret. One rule was that the emotion was closely linked to the feeling of “coming close” and failing. The nearer you came to achieving a thing, the greater the regret you experienced if you failed to achieve it.† A second rule: Regret was closely linked to feelings of responsibility. The more control you felt you had over the outcome of a gamble, the greater the regret you experienced if the gamble turned out badly. People anticipated regret in Allais’s problem not from the failure to win a gamble but from the decision to forgo a certain pile of money.

  That was another rule of regret. It skewed any decision in which a person faced a choice between a sure thing and a gamble. This tendency was not merely of academic interest. Danny and Amos agreed that there was a real-world equivalent of a “sure thing”: the status quo. The status quo was what people assumed they would get if they failed to take action. “Many instances of prolonged hesitation, and of continued reluctance to take positive action, should probably be explained in this fashion,” wrote Danny to Amos. They played around with the idea that the anticipation of regret might play an even greater role in human affairs than it did if people could somehow know what would have happened if they had chosen differently. “The absence of definite information concerning the outcomes of actions one has not taken is probably the single most important factor that keeps regret in life within tolerable bounds,” Danny wrote. “We can never be absolutely sure that we would have been happier had we chosen another profession or another spouse. . . . Thus, we are often protected from painful knowledge concerning the quality of our decisions.”

  They spent more than a year working and reworking the same basic idea: In order to explain the paradoxes that expected utility could not explain, and create a better theory to predict behavior, you had to inject psychology into the theory. By testing how people choose between various sure gains and gains that were merely probable, they traced the contours of regret.

  Which of the following two gifts do you prefer?

  Gift A: A lottery ticket that offers a 50 percent chance of winning $1,000

  Gift B: A certain $400

  or

  Which of the following gifts do you prefer?

  Gift A: A lottery ticket that offers a 50 percent chance of winning $1 million

  Gift B: A certain $400,000

  They collected great heaps of data: choices people had actually made. “Always keep one hand firmly on data,” Amos liked to say. Data was what set psychology apart from philosophy, and physics from metaphysics. In the data, they saw that people’s subjective feelings about money had a lot in common with their perceptual experiences. People in total darkness were extremely sensitive to the first glimmer of light, just as people in total silence were alive to the faintest sound, and people in tall buildings were quick to detect even the slightest swaying. As you turned up the lights or the sound or the movement, people became less sensitive to incremental change. So, too, with money. People felt greater pleasure going from 0 to $1 million than they felt going from $1 million to $2 million. Of course, expected utility theory also predicted that people would take a sure gain over a bet that offered an expected value of an even bigger gain. They were “risk averse.” But what was this thing that everyone had been calling “risk aversion”? It amounted to a fee that people paid, willingly, to avoid regret: a regret premium.
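  A rough numeric sketch (not from the book; the square-root utility is an illustrative assumption, not anything Danny and Amos used) shows how a single concave curve produces both effects at once: diminishing sensitivity to each extra million, and a preference for the certain gift over the gamble with the higher expected dollar value.

  import math

  # A sketch, assuming u(x) = sqrt(x) as the utility of money (an arbitrary concave choice).
  u = math.sqrt

  # Diminishing sensitivity: the first million adds more utility than the second.
  first_million = u(1_000_000) - u(0)              # 1000.0
  second_million = u(2_000_000) - u(1_000_000)     # about 414.2

  # The gift question: a 50 percent chance of $1 million versus a certain $400,000.
  gamble = 0.5 * u(1_000_000) + 0.5 * u(0)         # expected utility of the lottery ticket: 500.0
  sure_thing = u(400_000)                          # about 632.5

  print(first_million > second_million)            # True
  print(sure_thing > gamble)                       # True: the sure $400,000 wins, even though the
                                                   # gamble's expected dollar value is $500,000

  On this picture, turning down the gamble is just what the concave curve predicts; the book’s point is that the curve alone hides what people are actually paying for, the regret premium.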

  Expected utility theory wasn’t exactly wrong. It simply did not understand itself, to the point where it could not defend itself against seeming contradictions. The theory’s failure to explain people’s decisions, Danny and Amos wrote, “merely demonstrates what should perhaps be obvious, that non-monetary consequences of decisions cannot be neglected, as they all too often are, in applications of utility theory.” Still, it wasn’t obvious how to weave what amounted to a collection of insights about an emotion into a theory of how people make risky decisions. They were groping. Amos liked to use an expression he’d read someplace: “carving nature at its joint.” They were trying to carve human nature at its joint, but the joints of an emotion were elusive. That was one reason Amos didn’t particularly like to think or talk about emotion; he didn’t like things that were hard to measure. “This is indeed a complex theory,” Danny confessed one day in a memo. “In fact it consists of several mini-theories, which are rather loosely connected.”

  In reading about expected utility theory, Danny had found the paradox that purported to contradict it not terribly puzzling. What puzzled Danny was what the theory had left out. “The smartest people in the world are measuring utility,” he recalled. “As I’m reading about it, something strikes me as really, really peculiar.” The theorists seemed to take utility to mean “the utility of having money.” In their minds, it was linked to levels of wealth. More, because it was more, was always better. Less, because it was less, was always worse. This struck Danny as false. He created many scenarios to show just how false it was:

  Today Jack and Jill each have a wealth of 5 million.

  Yesterday, Jack had 1 million and Jill had 9 million.

  Are they equally happy? (Do they have the same utility?)

  Of course they weren’t equally happy. Jill was distraught and Jack was elated. Even if you took a million away from Jack and left him with less than Jill, he’d still be happier than she was. In people’s perceptions of money, as surely as in their perception of light and sound and the weather and everything else under the sun, what mattered was not the absolute levels but changes. People making choices, especially choices between gambles for small sums of money, made them in terms of gains and losses; they weren’t thinking about absolute levels. “I came back to Amos with that question, expecting that he would explain it to me,” Danny recalled. “Instead Amos says, ‘You’re right.’”
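  A minimal sketch of the distinction (added here as an illustration; the square-root curves and the reference-point framing are assumptions, not the book’s): a utility defined on total wealth cannot tell Jack and Jill apart today, while a utility defined on the change from yesterday’s wealth immediately separates them.

  import math

  # A sketch; amounts are in millions of dollars.
  def utility_of_wealth(wealth):
      # Utility theory as Danny read it: utility attaches to the level of wealth.
      return math.sqrt(wealth)

  def utility_of_change(today, yesterday):
      # The alternative Danny was pointing toward: utility attaches to gains and losses
      # measured against a reference point (here, yesterday's wealth).
      gain = today - yesterday
      return math.copysign(math.sqrt(abs(gain)), gain)

  print(utility_of_wealth(5) == utility_of_wealth(5))  # True: same wealth, same "utility"
  print(utility_of_change(5, 1))                       # 2.0  -- Jack gained 4 million: elated
  print(utility_of_change(5, 9))                       # -2.0 -- Jill lost 4 million: distraught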

  * * *

  * I apologize for this, but it must be done. Those whose minds freeze when confronted with algebra can skip what follows. A simpler proof of the paradox, devised by Danny and Amos, will come later. But here, more or less reproduced from Mathematical Psychology: An Elementary Introduction, is the proof of Allais’s point that Amos asked Danny to ponder.

  Let u stand for utility.

  In situation 1:

  u(gamble 1) > u(gamble 2)

  and hence

  1u(5) > .10u(25) + .89u(5) + .01u(0)

  so

  .11u(5) > .10u(25) + .01u(0)

  Now turn to situation 2, where most people chose 4 over 3. This implies

  u(gamble 4) > u(gamble 3)

  and hence

  .10u(25) + .90u(0) > .11u(5) + .89u(0)

  so

  .10u(25) + .01u(0) > .11u(5)

  Which is the exact reverse of the inequality implied by the choice made in the first situation.
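  As a numerical companion to the algebra above (a sketch added for illustration, not part of the textbook’s proof), one can brute-force over candidate values of u(0), u(5), and u(25) and confirm that no increasing assignment satisfies both observed preferences, since the two reduced inequalities are exact opposites of each other.

  import itertools

  # A sketch: search for any utility assignment consistent with both Allais choices.
  def prefers_gamble_1(u0, u5, u25):
      # Situation 1: $5 million for sure beats the 89/10/1 gamble.
      return 1.00 * u5 > 0.10 * u25 + 0.89 * u5 + 0.01 * u0

  def prefers_gamble_4(u0, u5, u25):
      # Situation 2: the 10 percent shot at $25 million beats the 11 percent shot at $5 million.
      return 0.10 * u25 + 0.90 * u0 > 0.11 * u5 + 0.89 * u0

  grid = [i / 20 for i in range(21)]  # candidate utilities between 0 and 1
  consistent = [
      (u0, u5, u25)
      for u0, u5, u25 in itertools.product(grid, repeat=3)
      if u0 < u5 < u25 and prefers_gamble_1(u0, u5, u25) and prefers_gamble_4(u0, u5, u25)
  ]
  print(len(consistent))  # 0 -- no utility function rationalizes both choices at once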

  † Two decades later, in 1995, the American psychologist Thomas Gilovich, who collaborated in turn with Danny and Amos, coauthored a study that examined the relative happiness of silver and bronze medal winners at the 1992 Summer Olympics. From video footage, subjects judged the bronze medal winners to be happier than the silver medal winners. The silver medalists, the authors suggested, dealt with the regret of not having won gold, while the bronze medalists were just happy to be on a podium.

 
