Misbehaving: The Making of Behavioral Economics


by Richard H. Thaler


  ________________

  * I asked one Uber driver in California how he would feel about surge pricing being applied if there was a wildfire in some town and people had to get out. He said: “In that situation, I would want to offer rides for free!”

  † A similar episode occurred in Sydney, Australia, during a hostage crisis in the center of the city. Prices surged, probably based on some algorithm that was not fine-tuned to special circumstances. After online criticism, some Humans at Uber decided to offer free rides and to refund people who had paid (Sullivan, 2014).

  ‡ Notably, an even larger organization—the NFL—recognizes and subscribes to this same piece of advice. In an interview with economist Alan B. Krueger, the NFL’s VP for public relations, Greg Aiello, explained that his organization takes a “long-term strategic view” toward ticket pricing, at least for the Super Bowl. Even though the high demand for Super Bowl tickets might justify significantly higher prices (and short-term profits—he calculates the potential gain as on the same scale as all advertising revenues), the organization intentionally keeps these prices reasonable in order to foster its “ongoing relationship with fans and business associates” (Krueger, 2001).

  15

  Fairness Games

  One question was very much on the minds of Danny, Jack, and me while we were doing our fairness project. Would people be willing to punish a firm that behaves unfairly? Would a customer who was charged $500 for a taxi ride that is normally priced at $50 try to avoid using that service again, even if they liked the service? We designed an experiment in the form of a game to investigate.

  One player, the Proposer, is given a sum of money known as the “pie.” He is told to offer the other player, called the Responder, the portion of the pie he is willing to share. The Responder can either accept the offer, leaving the remaining amount to the Proposer, or can reject it, in which case both players get nothing.

  It was important that this game be played for real money, so we abandoned our telephone polling bureau and did our research with students at the University of British Columbia and Cornell. We devised a very simple way to play the game and get as much data as possible for a given research budget. Players were chosen at random to play the role of Proposer or Responder. Then they filled out a simple form like this one for Responders. In our game the pie was $10.

  If you are offered $10, will you accept?

  Yes________

  No________

  If you are offered $9.50, will you accept?

  Yes________

  No________

  …

  …

  If you are offered $0.50, will you accept?

  Yes________

  No________

  If you are offered nothing, will you accept?

  Yes________

  No________

  We asked the questions in this way because we were worried that many Proposers would offer half, which would not give us much insight into the preferences of the Responders, who were our primary focus.

  Using the standard economics assumptions that people are selfish and rational, game theory has a clear prediction for this game. The Proposer will offer the smallest positive amount possible (50 cents in our version) and the Responder will accept, since 50 cents is more than nothing. In contrast, we conjectured that small offers would be rejected as “unfair.” That conjecture turned out to be right. Typically, offers that did not exceed 20% of the pie, $2 in our game, were rejected.
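
  To see this logic spelled out, here is a minimal sketch in Python (not part of our study; the 50-cent increments match our version, and the 20% cutoff for the “human” Responder is a crude stand-in for the rejection pattern just described, not a model fitted to our data):

```python
# Ultimatum Game sketch: what should a selfish Proposer offer?
# The pie and increment match the $10 version described above; the
# "human" threshold is an illustrative assumption based on the
# rough 20% rejection cutoff reported in the text.

PIE = 10.00
STEP = 0.50

def econ_responder(offer):
    # A selfish, rational Responder accepts any positive amount.
    return offer > 0

def human_responder(offer, threshold=0.20):
    # A stylized real Responder rejects offers below 20% of the pie.
    return offer >= threshold * PIE

def best_offer(responder):
    # The Proposer keeps PIE - offer if accepted, and gets 0 if rejected.
    offers = [i * STEP for i in range(int(PIE / STEP) + 1)]
    return max(offers, key=lambda o: PIE - o if responder(o) else 0.0)

print(best_offer(econ_responder))   # 0.5: the smallest positive offer
print(best_offer(human_responder))  # 2.0: just enough to avoid rejection
```

  With a probabilistic acceptance curve in place of this hard cutoff, the profit-maximizing offer climbs toward the 40% figure discussed below.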

  We were delighted with this outcome of our cute little game, but we soon discovered that three German economists led by Werner Güth had published a paper on precisely this game three years earlier. They used exactly the same methods and had a snappy name for it: the Ultimatum Game. Danny was crestfallen when he heard this news, worried as always that his current idea would be his last. (This is the same man who would publish a global best-seller at age seventy-seven.)

  Jack and I reassured Danny that he probably still had some good ideas left, and we all pressed on to think of another game to go along with the first one. Our research on this game was conducted in two stages. In the first stage we gave students in a classroom setting the following choice: “You have the opportunity to divide $20 between you and another anonymous student in this class. You have two choices: you can take $18 and give the other student $2, or you can split the money evenly, so that you each get $10.” (While everyone made the choice, the subjects were told that only some of them would be selected at random to be paid.) Because the second player is forced to take whatever she is offered, this game has become known as the Dictator Game.

  We did not have a strong opinion about how the Dictator Game would come out. Our primary interest was in the second game, let’s call it the Punishment Game, in which we went to a different class and told the students there about the Dictator Game experiment. Then we gave students a choice. “You have been paired with two students who played [the Dictator Game] but were not selected to be paid. One, E, divided the money evenly, while the other, U, divided the money unevenly. He took $18 and gave his counterpart $2. You have the following choice. Would you like to evenly split $12 with U or $10 with E?”

  Another way to phrase the choice in the Punishment Game is: “Are you willing to give up a dollar to share some money with a student who behaved nicely to someone else, rather than share with a student who was greedy in the same situation?” (An even split of $12 yields $6; an even split of $10 yields $5; so choosing E costs you exactly one dollar.) We thought that the Punishment Game, like the Ultimatum Game, would tell us whether people are willing to give something up to punish someone who behaves in a manner they consider “unfair.”

  Somewhat surprisingly to us (or at least to me), the students in the Dictator stage of our game were remarkably nice. Nearly three quarters (74%) chose to divide the money equally. Of more interest to us, the results of the Punishment stage were even stronger. Fully 81% of the subjects chose to share $10 with a “fair” allocator rather than $12 with an “unfair” allocator.

  It is important to stress what should and should not be inferred from the results of both of these experiments. There is clear evidence that people dislike unfair offers and are willing to take a financial hit to punish those who make them. It is less clear that people feel morally obliged to make fair offers. Although it is true that in the Ultimatum Game the most common offer is often 50%, one cannot conclude that Proposers are trying to be fair. Instead, they may be quite rationally worried about being rejected. Given the empirical evidence on Responders’ behavior, the profit-maximizing strategy in the Ultimatum Game is for the Proposer to offer about 40% of the pie. Lower offers start to run the risk of being rejected, so a 50% offer is not far from the rational selfish strategy.

  Whether the offers made by Proposers are driven by fairness or selfish concerns, the outcomes of the Ultimatum Game appear to be quite robust. Proposers make offers of close to half the pie, and Responders tend to reject offers of less than 20%. The game has been run in locations all around the world, and with the exception of some remote tribes the results are pretty similar. Nevertheless, one question that people have long wondered about is whether the tendency to reject small offers in the Ultimatum Game persists as stakes increase. A natural intuition shared by many is that as the stakes go up, the minimum offer that will be accepted goes down as a fraction of the total pie. That is, if the average minimally acceptable offer is $2 when playing for $10, would people accept less than $200 when the stakes are raised to $1,000?

  Investigating this hypothesis has been plagued by two problems: running a high-stakes version of the Ultimatum Game is expensive, and most Proposers make “fair” offers. Experimenters in the United States ran a version of the Ultimatum Game for $100, and the results did not differ much from lower-stakes games. Even more telling is evidence from running the game in poor countries, where the low cost of living allows experimenters to raise the stakes even higher. For example, Lisa Cameron ran Ultimatum Game experiments in Java using both low stakes and truly high stakes (approximately three months’ income for the subjects). She found virtually no difference in the behavior of Proposers when she raised the stakes.

  There is another class of games that takes up the question of whether people are purely selfish (at least when dealing with strangers), as Econs are presumed to be. These are games about cooperation. The classic game of this variety is the well-known Prisoner’s Dilemma. In the original setup, there are two prisoners who have been arrested for committing some crime and are being held and interrogated separately. They each have a choice: they can confess their crime or remain silent. If they both remain silent, the police can only convict them of a minor offense with a sentence of one year. If they both confess, they each get five years in jail. But if one confesses and the other stays silent, the confessor gets out of jail free while the other serves ten years in jail.

  In the more general version of this game without the prisoner cover story, there are two strategies, cooperate (stay silent) or defect (confess). The game-theoretic prediction is that both players will defect because, no matter what the other player does, it is in the selfish best interest of each player to do so. Yet when this game is played in the laboratory, 40–50% of the players cooperate, which means that about half the players either do not understand the logic of the game or feel that cooperating is just the right thing to do, or possibly both.
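
  To make the dominance argument explicit, here is a small sketch in Python using the jail sentences from the story above (the numbers are exactly those given; the code simply checks that confessing is better for each prisoner regardless of what the other does):

```python
# Prisoner's Dilemma payoffs, in years of jail (lower is better).
# years[(my_move, their_move)] gives (my_years, their_years).
years = {
    ("silent",  "silent"):  (1, 1),    # both stay silent: minor offense
    ("silent",  "confess"): (10, 0),   # I stay silent, they confess
    ("confess", "silent"):  (0, 10),   # I confess, they stay silent
    ("confess", "confess"): (5, 5),    # both confess
}

# Whatever the other prisoner does, confessing means fewer years for me:
for theirs in ("silent", "confess"):
    mine_if_silent = years[("silent", theirs)][0]
    mine_if_confess = years[("confess", theirs)][0]
    assert mine_if_confess < mine_if_silent
    print(f"If they {theirs}: {mine_if_silent} years silent "
          f"vs {mine_if_confess} years confessing")
```

  Defection is what game theorists call a dominant strategy: it is better no matter what the other player chooses, which is why the prediction does not depend on any guess about the opponent.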

  The Prisoner’s Dilemma comes with a great story, but most of us don’t get arrested very often. What are the implications of this game for normal life? Consider a related game called the Public Goods Game. To understand the economic significance of this game, we turn back to the great Paul Samuelson, who formalized the concept of a public good in a three-page paper published in 1954. The guy did not belabor things.

  A public good is one that everyone can consume without diminishing the consumption of anyone else, and it is impossible to exclude anyone from consuming it. A fireworks display is a classic example. Samuelson proved that a market economy will undersupply public goods because no one will have an incentive to pay much of anything for them, since they can be consumed for free. For years after Samuelson’s paper, economists assumed that the public goods problem could not be solved unless the government stepped in and provided the good, using taxes to make everybody pay a share.

  Of course, if we look around, we see counterexamples to this result all the time. Some people donate to charities and clean up campgrounds, and quite miraculously, at least in America, most urban dog owners now carry a plastic bag when they take their dog for a “walk” in order to dispose of the waste. (Although there are laws on the books requiring this behavior, they are rarely enforced.) In other words, some people cooperate, even when it is not in their self-interest to do so.

  Economists, psychologists, and sociologists have all studied this problem using variations on the following simple game. Suppose we invite ten strangers to the lab and give each of them five one-dollar bills. Each subject can decide how many (if any) dollar bills he wishes to contribute to the “public good” by privately putting that money into a blank envelope. The rules of the game are that the total contributions to the public good envelope are doubled, and then the money is divided equally among all the players.

  The rational, selfish strategy in the Public Goods Game is to contribute nothing. Suppose that Brendan decides to contribute one dollar. That dollar is doubled by the experimenter to two dollars and then is divided among all the players, making Brendan’s share of that contribution 20 cents. So for each dollar he contributes, Brendan will lose 80 cents. Of course other subjects are happy about Brendan’s anonymous contribution, since they each get 20 cents as well, but they will not be grateful to him personally because his contribution was anonymous. Following Samuelson’s logic, the prediction from economic theory is that no one will contribute anything. Notice that by being selfishly rational in this way, the group ends up with half as much money as they would have had if everyone contributed their entire stake, because if everyone contributed $5, that amount would be doubled, and everyone would go home with $10. The distinguished economist and philosopher Amartya Sen famously called people who always give nothing in this game rational fools for blindly following only material self-interest: “The purely economic man is indeed close to being a social moron. Economic theory has been much preoccupied with this rational fool.”
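
  The arithmetic here is easy to check. A minimal sketch in Python of the payoffs in this version of the game (ten players, $5 stakes, contributions doubled; the setup is just for illustration):

```python
# Public Goods Game payoffs: 10 players, $5 stakes, and the pot of
# contributions is doubled and split evenly among everyone.

N_PLAYERS = 10
STAKE = 5.0
MULTIPLIER = 2.0

def my_payoff(my_contribution, others_total):
    pot = (my_contribution + others_total) * MULTIPLIER
    return (STAKE - my_contribution) + pot / N_PLAYERS

# Brendan's own dollar comes back as 2/10 = 20 cents, so each
# dollar contributed costs him 80 cents:
print(my_payoff(1, 0) - my_payoff(0, 0))          # -0.8

# Everyone free rides: each keeps $5. Everyone contributes fully:
# the $50 pot doubles to $100 and each player goes home with $10.
print(my_payoff(0, 0))                            # 5.0
print(my_payoff(STAKE, STAKE * (N_PLAYERS - 1)))  # 10.0
```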

  As with the Prisoner’s Dilemma, the standard economics prediction that no one will cooperate in the Public Goods Game turns out to be false. On average, people contribute about half their stake to the public good. There is still a public goods problem, meaning that public goods are not supplied in as great a quantity as people would want if they could all somehow agree to be cooperative, but the undersupply is about half as severe as the rational selfish model predicts—well, with one important proviso. When the game was played by economics graduate students, the contribution rate was only 20%, leading the sociologists Gerald Marwell and Ruth Ames to write a paper titled “Economists Free Ride: Does Anyone Else?”

  A wisecracking economist might answer the question posed by Marwell and Ames’s title with “experienced players.” A robust finding in public goods experiments is that if a group of subjects play the game repeatedly, cooperation rates steadily fall, from the usual 50% down to nearly zero. When this result was first discovered, some economists argued that the initial high cooperation rates were due to some confusion on the part of the subjects, and that when they played the game repeatedly, they learned that the rational selfish strategy was the right one. In 1988, the experimental economist James Andreoni tested this interpretation with a brilliant twist. After groups of five subjects played the game for the announced ten rounds and watched cooperation rates fall, the subjects were told that they would play another ten rounds of the game with the same players. What do you think happens?

  If people have learned that being selfish is the smart thing to do, then cooperation rates should remain low after the restart, but that is not what happened. Instead, in the first round of the new game, cooperation rates jumped back to the same level as the first round of the initial experiment. So repeated play of the Public Goods Game does not teach people to be jerks; rather it teaches them that they are playing with (some) jerks, and no one likes to play the role of the sucker.

  Further research by Ernst Fehr and his colleagues has shown that, consistent with Andreoni’s finding, a large proportion of people can be categorized as conditional cooperators, meaning that they are willing to cooperate if enough others do. People start out these games willing to give their fellow players the benefit of the doubt, but if cooperation rates are low, these conditional cooperators turn into free riders. However, cooperation can be maintained even in repeated games if players are given the opportunity to punish those who do not cooperate. As illustrated by the Punishment Game, described earlier, people are willing to spend some of their own money to teach a lesson to those who behave unfairly, and this willingness to punish disciplines potential free riders and keeps robust cooperation rates stable.

  A few years after my time with Danny in Vancouver, I wrote an article about cooperation with the psychologist Robyn Dawes. In the conclusion, we drew an analogy with the roadside stands one would often see in the rural areas around Ithaca. A farmer would put some produce for sale out on a table in front of his farm. There was a box with a small slot to insert the payment, so money could be put in but not taken out. The box was also nailed to the table. I thought then, and think now, that farmers who use this system have a pretty good model of human nature in mind. There are enough honest people out there (especially in a small town) to make it worthwhile for the farmer to put out some fresh corn or rhubarb to sell. But they also know that if the money were left in an open box where anyone could take all of it, someone eventually would.

  Economists need to adopt as nuanced a view of human nature as the farmers. Not everyone will free ride all the time, but some people are ready to pick your pocket if you are not careful. I keep a photograph of one of those farm stands in my office for inspiration.

  16

  Mugs

  At some point during the Vancouver year, the economist Alvin Roth, who was then deeply involved with experimental methods, organized a conference at the University of Pittsburgh. The goal was to present the first drafts of papers that would later be published in a small book called Laboratory Experimentation in Economics: Six Points of View. The contributors were major figures in the experimental economics community, including Al, Vernon Smith, and Charlie Plott. Danny and I represented the new behavioral wing of the experimental economics community.

  For Danny and me, the most interesting discussion was about my beloved endowment effect. Both Vernon and Charlie claimed we didn’t have convincing empirical evidence for this phenomenon. The evidence I had presented was based on a paper written by Jack Knetsch along with an Australian collaborator, John Sinden. Their experiment was delightfully simple. Half the subjects were chosen at random to receive three dollars; the other half got lottery tickets. The winner of the lottery would receive her choice between $50 in cash and $70 in vouchers for use at the local bookstore. After some time passed during which the subjects completed some other task, each group was given a choice. Those who did not have a lottery ticket were told they could buy one for $3, while the others were told that they could sell their lottery tickets for $3.

  Notice that both groups are being asked the same question: “Would you rather have the lottery ticket or three dollars?” According to economic theory, it should not make any difference whether subjects had originally received the money or the lottery ticket. If they value the ticket at more than $3 they should end up with one; if they value the ticket at less than $3 they should end up with the money. The results clearly rejected this prediction. Of those who began with a lottery ticket, 82% decided to keep it, whereas of those who started out with the money, only 38% wanted to buy the ticket. This means that people are more likely to keep what they start with than to trade it, even when the initial allocations were done at random. The result could not be any stronger or clearer.
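
  One way to state the standard-theory prediction precisely: a subject’s final holding should depend only on how much she values the ticket relative to the $3 price, never on what she was handed at the start. A tiny sketch of that prediction in Python (the valuations are made up for illustration):

```python
# Standard theory's prediction for the Knetsch-Sinden design: the
# starting position should be irrelevant to the final allocation.

def final_holding(value_of_ticket, starts_with_ticket):
    # Note that starts_with_ticket plays no role in the prediction:
    # keep or buy the ticket whenever you value it above the $3 price.
    return "ticket" if value_of_ticket > 3.00 else "money"

for starts_with_ticket in (True, False):
    print(final_holding(4.00, starts_with_ticket))  # ticket, either way
    print(final_holding(2.00, starts_with_ticket))  # money, either way
```

  The 82% versus 38% gap is exactly the dependence on the starting position that this prediction rules out.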

 
