Misbehaving: The Making of Behavioral Economics


by Richard H. Thaler


  † In fact, just having a gun in the house increases the risk that a member of the household will commit suicide.

  ‡ In case you are wondering about the order of the names in their papers, early on Amos and Danny adopted the highly unusual strategy of alternating whose name would go first as a subtle way of signaling that they were equal partners. In economics, alphabetical order is the default option, but in psychology the order of names usually is meant to indicate relative contributions. Their solution avoided having to make a decision, paper by paper, about who had contributed more. Such evaluations can be fraught (see chapter 28).

  4

  Value Theory

  After my day in the library, I called Fischhoff to thank him. He told me that Kahneman and Tversky were working on a new project about decision-making that should be right up my alley. Fischhoff thought that Howard Kunreuther, a professor at Wharton, might have a copy. I called Howard and struck gold. He had the draft and would send me a copy.

  The paper, called “Value Theory” at the time, arrived replete with Howard’s comments scrawled in the margins. It was an early version of the paper that would win Danny a Nobel Prize in 2002. (Amos would have shared the prize had he been alive.) In time the authors changed the title to “Prospect Theory.”* This paper was even more germane to the List than the work on heuristics and biases. Two things grabbed me immediately: an organizing principle and a simple graph.

  Two kinds of theories

  The organizing principle was the existence of two different kinds of theories: normative and descriptive. Normative theories tell you the right way to think about some problem. By “right” I do not mean right in some moral sense; instead, I mean logically consistent, as prescribed by the optimizing model at the heart of economic reasoning, sometimes called rational choice theory. That is the only way I will use the word “normative” in this book. For instance, the Pythagorean theorem is a normative theory of how to calculate the length of one side of a right triangle if you know the length of the other two sides. If you use any other formula you will be wrong.

  Here is a test to see if you are a good intuitive Pythagorean thinker. Consider two pieces of railroad track, each one mile long, laid end to end (see figure 1). The tracks are nailed down at their end points but simply meet in the middle. Now, suppose it gets hot and the railroad tracks expand, each by one inch. Since they are attached to the ground at the end points, the tracks can only expand by rising like a drawbridge. Furthermore, these pieces of track are so sturdy that they retain their straight, linear shape as they go up. (This is to make the problem easier, so stop complaining about unrealistic assumptions.) Here is your problem:

  Consider just one side of the track. We have a right triangle with a base of one mile and a hypotenuse of one mile plus one inch. What is the altitude? In other words, by how much does the track rise above the ground?

  FIGURE 1

  If you remember your high school geometry, have a calculator with a square root function handy, and know that there are 5,280 feet in a mile and 12 inches in a foot, you can solve this problem. But suppose instead you have to use your intuition. What is your guess?

  Most people figure that since the tracks expanded by an inch they should go up by roughly the same amount, or maybe as much as two or three inches.

  The actual answer is 29.7 feet! How did you do?
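
  For readers who would rather check the arithmetic than trust me, here is a minimal sketch of the calculation (written in Python purely for illustration). It simply applies the Pythagorean theorem to one side of the track, using 5,280 feet to the mile and 12 inches to the foot:

    import math

    base = 5280.0               # one mile, in feet
    hypotenuse = 5280.0 + 1/12  # one mile plus one inch, in feet

    # altitude^2 = hypotenuse^2 - base^2 (Pythagorean theorem)
    altitude = math.sqrt(hypotenuse ** 2 - base ** 2)
    print(round(altitude, 1))   # about 29.7 feet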

  Now suppose we want to develop a theory of how people answer this question. If we are rational choice theorists, we assume that people will give the right answer, so we will use the Pythagorean theorem as both our normative and descriptive model and predict that people will come up with something near 30 feet. For this problem, that is a terrible prediction. The average answer that people give is about 2 inches.

  This gets to the heart of the problem with traditional economics and the conceptual breakthrough offered by prospect theory. Economic theory at that time, and for most economists today, uses one theory to serve both normative and descriptive purposes. Consider the economic theory of the firm. This theory, a simple example of the use of optimization-based models, stipulates that firms will act to maximize profits (or the value of the firm), and further elaborations on the theory simply spell out how that should be done. For example, a firm should set prices so that marginal cost equals marginal revenue. When economists use the term “marginal” it just means incremental, so this rule implies that the firm will keep producing until the point where the cost of the last item made is exactly equal to the incremental revenue brought in. Similarly, the theory of human capital formation, pioneered by the economist Gary Becker, assumes that people choose which kind of education to obtain, and how much time and money to invest in acquiring these skills, by correctly forecasting how much money they will make (and how much fun they will have) in their subsequent careers. There are very few high school and college students whose choices reflect careful analysis of these factors. Instead, many people study the subject they enjoy most without thinking through to what kind of life that will create.
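
  To see the marginal-cost-equals-marginal-revenue rule in action, here is a small illustrative sketch; the demand curve and cost function are made up by me, not taken from the text. A simple search over output levels finds the profit-maximizing quantity, and at that quantity the revenue from one more unit is approximately equal to its cost:

    # Hypothetical firm: demand p(q) = 100 - q, cost c(q) = 20q + 0.5q^2.
    def revenue(q):
        return (100 - q) * q

    def cost(q):
        return 20 * q + 0.5 * q ** 2

    def profit(q):
        return revenue(q) - cost(q)

    step = 0.01
    quantities = [i * step for i in range(int(100 / step))]
    q_star = max(quantities, key=profit)   # profit-maximizing output

    marginal_revenue = (revenue(q_star + step) - revenue(q_star)) / step
    marginal_cost = (cost(q_star + step) - cost(q_star)) / step
    print(q_star, marginal_revenue, marginal_cost)   # MR and MC nearly equal at the optimum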

  Prospect theory sought to break from the traditional idea that a single theory of human behavior can be both normative and descriptive. Specifically, the paper took on the theory of decision-making under uncertainty. The initial ideas behind this theory go back to Daniel Bernoulli in 1738. Bernoulli was a student of almost everything, including mathematics and physics, and his contribution in this domain was to solve a puzzle known as the St. Petersburg paradox, posed by his cousin Nicolas.† (They came from a precocious family.) Essentially, Bernoulli invented the idea of risk aversion. He did so by positing that people’s happiness—or utility, as economists like to call it—increases as they get wealthier, but at a decreasing rate. This principle is called diminishing sensitivity. As wealth grows, the impact of a given increment of wealth, say $100,000, falls. To a peasant, a $100,000 windfall would be life-changing. To Bill Gates, it would go undetected. A graph of what this looks like appears in figure 2.

  FIGURE 2

  A utility function of this shape implies risk aversion because the utility of the first thousand dollars is greater than the utility of the second thousand dollars, and so forth. This implies that if your wealth is $100,000 and I offer you a choice between an additional $1,000 for sure or a 50% chance to win $2,000, you will take the sure thing because you value the second thousand you would win less than the first thousand, so you are not willing to risk losing that first $1,000 prize in an attempt to get $2,000.
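
  A worked version of that choice may help. In the sketch below I use logarithmic utility, the particular concave function Bernoulli himself proposed, though any curve shaped like figure 2 gives the same qualitative answer: starting from $100,000 of wealth, the sure $1,000 delivers slightly more expected utility than the 50% chance of $2,000.

    import math

    def utility(wealth):
        # A concave utility of wealth; the logarithm is Bernoulli's suggestion.
        return math.log(wealth)

    wealth = 100_000
    sure_thing = utility(wealth + 1_000)
    gamble = 0.5 * utility(wealth + 2_000) + 0.5 * utility(wealth)

    print(sure_thing > gamble)   # True: the risk-averse chooser takes the sure $1,000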

  The full treatment of the formal theory of how to make decisions in risky situations—called expected utility theory—was published in 1944 by the mathematician John von Neumann and the economist Oskar Morgenstern. Von Neumann, one of the greatest mathematicians of the twentieth century, was a contemporary of Albert Einstein at the Institute for Advanced Study in Princeton, and during World War II he decided to devote himself to practical problems. The result was the 600-plus-page opus Theory of Games and Economic Behavior, in which the development of expected utility theory was just a sideline.

  The way that von Neumann and Morgenstern created the theory was to begin by writing down a series of axioms of rational choice. They then derived how someone who wanted to follow these axioms would behave. The axioms are mostly uncontroversial notions such as transitivity, a technical term that says if you prefer A over B and B over C then you must prefer A over C. Remarkably, von Neumann and Morgenstern proved that if you want to satisfy these axioms (and you do), then you must make decisions according to their theory. The argument is completely convincing. If I had an important decision to make—whether to refinance my mortgage or invest in a new business—I would aim to make the decision in accordance with expected utility theory, just as I would use the Pythagorean theorem to estimate the altitude of our railroad triangle. Expected utility is the right way to make decisions.

  With prospect theory, Kahneman and Tversky set out to offer an alternative to expected utility theory that had no pretense of being a useful guide to rational choice; instead, it would be a good prediction of the actual choices real people make. It is a theory about the behavior of Humans.

  Although this seems like a logical step to take, it is not one that economists had ever really embraced. Herbert Simon had coined the term “bounded rationality,” but had not done much to flesh out how boundedly rational people differ from fully rational ones. There were a few other precedents, but they too had never taken hold. For example, the prominent (and for the most part, quite traditional) Princeton economist William Baumol had proposed an alternative to the traditional (normative) theory of the firm (which assumes profit maximization). He postulated that firms maximize their size, measured for instance by sales revenue, subject to a constraint that profits have to meet some minimum level. I think sales maximization may be a good descriptive model of many firms. In fact, it might be smart for a CEO to follow this strategy, since CEO pay oddly seems to depend as much on a firm’s size as it does on its profits, but if so that would also constitute a violation of the theory that firms maximize value.

  The first thing I took from my early glimpse of prospect theory was a mission statement: Build descriptive economic models that accurately portray human behavior.

  A stunning graph

  The other major takeaway for me was a figure depicting the “value function.” This too was a major conceptual change in economic thinking, and the real engine of the new theory. Ever since Bernoulli, economic models have been based on the simple assumption that people have “diminishing marginal utility of wealth,” as illustrated in figure 2.

  This model of the utility of wealth gets the basic psychology of wealth right. But to create a better descriptive model, Kahneman and Tversky recognized that we had to change our focus from levels of wealth to changes in wealth. This may sound like a subtle tweak, but switching the focus to changes as opposed to levels is a radical move. A picture of their value function is shown further below, in figure 3.

  Kahneman and Tversky focus on changes because changes are the way Humans experience life. Suppose you are in an office building with a well-functioning air circulation system that keeps the environment at what we typically think of as room temperature. Now you leave your office to attend a meeting in a conference room. As you enter the room, how will you react to the temperature? If it is the same as that of your office and the corridor, you won’t give it a second thought. You will only notice if the room is unusually hot or cold relative to the rest of the building. When we have adapted to our environment, we tend to ignore it.

  FIGURE 3

  The same is true in financial matters. Consider Jane, who makes $80,000 per year. She gets a $5,000 year-end bonus that she had not expected. How does Jane process this event? Does she calculate the change in her lifetime wealth, which is barely noticeable? No, she is more likely to think, “Wow, an extra $5,000!” People think about life in terms of changes, not levels. They can be changes from the status quo or changes from what was expected, but whatever form they take, it is changes that make us happy or miserable. That was a big idea.

  The figure in the paper so captured my imagination that I drew a version of it on the blackboard right next to the List. Have another look at it now. There is an enormous amount of wisdom about human nature captured in that S-shaped curve. The upper portion, for gains, has the same shape as the usual utility of wealth function, capturing the idea of diminishing sensitivity. But notice that the loss function captures diminishing sensitivity also. The difference between losing $10 and $20 feels much bigger than the difference between losing $1,300 and $1,310. This is different from the standard model, because starting from a given wealth level in figure 2, losses are captured by moving down the utility of wealth line, meaning that each loss gets increasingly painful. (If you care less and less about increases in wealth, then it follows that you care more and more about decreases in wealth.)
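
  For readers who like formulas, a convenient way to draw a curve with this S-shape is the power value function Kahneman and Tversky estimated in later work: v(x) = x^0.88 for gains and v(x) = -2.25(-x)^0.88 for losses. Those particular numbers are their later estimates, not anything in the draft I was reading, but they are enough to reproduce the pattern just described, as in the sketch below:

    def value(x, alpha=0.88, lam=2.25):
        # Illustrative prospect-theory value function over changes in wealth:
        # concave for gains, convex for losses, and steeper on the loss side.
        if x >= 0:
            return x ** alpha
        return -lam * (-x) ** alpha

    # Diminishing sensitivity to losses: the first gap feels bigger than the second.
    print(abs(value(-20) - value(-10)))      # about 14.3 "value units"
    print(abs(value(-1310) - value(-1300)))  # about 8.4 "value units"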

  The fact that we experience diminishing sensitivity to changes away from the status quo captures another basic human trait—one of the earliest findings in psychology—known as the Weber-Fechner Law. The Weber-Fechner Law holds that the just-noticeable difference in any variable is proportional to the magnitude of that variable. If I gain one ounce, I don’t notice it, but if I am buying fresh herbs, the difference between 2 ounces and 3 ounces is obvious. Psychologists refer to a just-noticeable difference as a JND. If you want to impress an academic psychologist, add that term to your cocktail party banter. (“I went for the more expensive sound system in the new car I bought because the increase in price was not a JND.”)

  You can test your understanding of the concept behind the Weber-Fechner Law with this example from National Public Radio’s long-running show called Car Talk. The show consisted of brothers Tom and Ray Magliozzi—both MIT graduates—taking calls from people with questions about their cars. Improbably enough, it was hysterically funny, especially to them. They would laugh endlessly at their own jokes.‡

  In one show a caller asked: “Both my headlights went out at the same time. I took the car to the shop but the mechanic said that all I needed was two new bulbs. How can that be right? Isn’t it too big of a coincidence that both bulbs blew out at the same time?”

  Tom answered the question in a flash. “Ah, the famous Weber-Fechner Law!” It turns out that Tom also did a PhD in psychology and marketing supervised by Max Bazerman, a leading scholar in judgment and decision-making research. So, what does the caller’s question have to do with the Weber-Fechner Law, and how did this insight help Tom solve the problem?

  The answer is that the two bulbs did not in fact burn out at the same time. It is easy to drive around with one bulb burned out and not notice, especially if you live in a well-lit city. Going from two bulbs to one is not always a noticeable difference. But going from one to zero is definitely noticeable. This phenomenon also explains the behavior in one of the examples on the List: being more willing to drive ten minutes to save $10 on a $45 clock radio than on a $495 television set. For the latter purchase, the savings would not be a JND.
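
  The arithmetic behind that example makes the point: $10 is more than 20% of the price of the clock radio but only about 2% of the price of the television. Here is a toy version, where the threshold for what counts as noticeable is a number I made up purely for illustration:

    def is_noticeable(saving, price, weber_fraction=0.10):
        # Hypothetical rule of thumb: a saving registers only if it is a big
        # enough fraction of the price (the 10% threshold is an assumption).
        return saving / price >= weber_fraction

    print(is_noticeable(10, 45))    # True: $10 off $45 is about 22% of the price
    print(is_noticeable(10, 495))   # False: $10 off $495 is about 2% of the price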

  The fact that people have diminishing sensitivity to both gains and losses has another implication. People will be risk-averse for gains, but risk-seeking for losses, as illustrated by the experiment reported below, which was administered to two different groups of subjects. (Notice that the initial sentence in the two questions differs in a way that makes the two problems identical if subjects are making decisions based on levels of wealth, as was traditionally assumed.) The percentage of subjects choosing each option is shown in brackets.

  PROBLEM 1. Assume yourself richer by $300 than you are today. You are offered a choice between

  A. A sure gain of $100, or

  [72%]

  B. A 50% chance to gain $200 and a 50% chance to lose $0.

  [28%]

  PROBLEM 2. Assume yourself richer by $500 than you are today. You are offered a choice between

  A. A sure loss of $100, or

  [36%]

  B. A 50% chance to lose $200 and a 50% chance to lose $0.

  [64%]

  The logic that makes people risk-averse for gains is the same logic that makes them risk-seeking for losses. In the case of problem 2, the pain of losing the second hundred dollars is less than the pain of losing the first hundred, so subjects are ready to take the risk of losing more in order to have the chance of getting back to no loss at all. They are especially keen to eliminate a loss altogether because of the third feature captured in figure 3: loss aversion.
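
  Using the same illustrative value function as before (gains valued as x^0.88, losses as -2.25 times (-x)^0.88, with the parameters again borrowed from Kahneman and Tversky’s later estimates), you can check that it reproduces the majority choice in both problems, and that the steeper loss side delivers the “losses hurt about twice as much” feature discussed next:

    def value(x, alpha=0.88, lam=2.25):
        # Illustrative prospect-theory value function over gains and losses.
        return x ** alpha if x >= 0 else -lam * (-x) ** alpha

    # Problem 1 (gains): the sure $100 beats the 50% chance of $200.
    print(value(100) > 0.5 * value(200) + 0.5 * value(0))        # True

    # Problem 2 (losses): the 50% chance of losing $200 beats the sure $100 loss.
    print(0.5 * value(-200) + 0.5 * value(0) > value(-100))      # True

    # Loss aversion: a $100 loss looms more than twice as large as a $100 gain.
    print(abs(value(-100)) / value(100))                         # 2.25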

  Examine the value function in this figure at the origin, where both curves begin. Notice that the loss function is steeper than the gain function: it decreases more quickly than the gain function goes up. Roughly speaking, losses hurt about twice as much as gains make you feel good. This feature of the value function left me flabbergasted. There, in that picture, was the endowment effect. If I take away Professor Rosett’s bottle of wine, he will feel it as a loss equivalent to twice the gain he would feel if he acquired a bottle; that is why he would never buy a bottle worth the same market price as one in his cellar. The fact that a loss hurts more than an equivalent gain gives pleasure is called loss aversion. It has become the single most powerful tool in the behavioral economist’s arsenal.

  So, we experience life in terms of changes, we feel diminishing sensitivity to both gains and losses, and losses sting more than equivalently sized gains feel good. That is a lot of wisdom in one image. Little did I know that I would be playing around with that graph for the rest of my career.

  ________________

  * I asked Danny why they changed the name. His reply: “‘Value theory’ was misleading, and we decided to have a completely meaningless term, which would become meaningful if by some lucky break the theory became important. ‘Prospect’ fitted the bill.”

  † The puzzle is this: Suppose you are offered a gamble in which you keep flipping a coin until it lands heads up. If you get heads on your first flip you win $2, on your second flip $4, and so forth, with the pot doubling each time. Your expected winnings are 1/2 × $2 + 1/4 × $4 + 1/8 × $8 . . . The value of this sequence is infinite, so why won’t people pay a huge amount to play the bet? Bernoulli’s answer was to suppose that people get diminishing value from increases in their wealth, which yields risk aversion. A simpler solution is to note that there is only a finite amount of wealth in the world, so you should be worried about whether the other side can pay up if you win. Just forty heads in a row puts your prize money at over one trillion dollars. If you think that would break the bank, the bet is worth only about $40.
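
  As a quick numerical check on the footnote above (the code is mine, purely for illustration): capping the bank at 2^40 dollars brings the expected winnings down to roughly $40, and Bernoulli’s diminishing-value answer also tames the sum, since with logarithmic value the gamble is worth only a few dollars.

    import math

    # Expected winnings if the bank can pay out at most 2**40 dollars.
    capped_value = sum(min(2 ** n, 2 ** 40) / 2 ** n for n in range(1, 200))
    print(round(capped_value))   # about 41 dollars

    # Bernoulli-style resolution: value the prize by its logarithm (a
    # simplification of his wealth-based argument) and the series converges.
    expected_log_value = sum(math.log(2 ** n) / 2 ** n for n in range(1, 200))
    print(round(math.exp(expected_log_value), 2))   # equivalent to a sure prize of about $4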

 
