Misbehaving: The Making of Behavioral Economics

by Richard H. Thaler


  These experimental findings have recently been replicated in a natural experiment made possible by a regulatory change in Israel. A paper by Chicago Booth PhD student Maya Shaton investigates what happened in 2010 when the government agency that regulates retirement savings funds changed the way funds report their returns. Previously, when an investor checked on her investments, the first number that would appear for a given fund was the return for the most recent month. After the new regulation, investors were shown returns for the past year instead. As predicted by myopic loss aversion, after the change investors shifted more of their assets into stocks. They also traded less often, and were less prone to shifting money into funds with high recent returns. Altogether this was a highly sensible regulation.

  These experiments demonstrate that looking at the returns on your portfolio more often can make you less willing to take risk. In our “myopic loss aversion” paper, Benartzi and I used prospect theory and mental accounting to try to explain the equity premium puzzle. We used historical data on stocks and bonds and asked how often investors would have to evaluate their portfolios to make them indifferent between stocks and bonds, or want to hold a portfolio that was a 50-50 mixture of the two assets. The answer we got was roughly one year. Of course investors will differ in the frequency with which they look at their portfolios, but once a year has a highly plausible ring. Individuals file tax returns once a year; similarly, while pensions and endowments make reports to their boards on a regular basis, the annual report is probably the most salient.
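
  A minimal sketch of that logic in Python, under stated assumptions rather than the paper's actual historical data or full prospect-theory specification: simulate cumulative returns over different evaluation horizons, value them with simple piecewise-linear loss aversion, and look for the horizon at which stocks stop looking worse than bonds.

    # Sketch only: piecewise-linear loss aversion, assumed (not historical) return
    # parameters, no probability weighting or diminishing sensitivity.
    import numpy as np

    rng = np.random.default_rng(0)
    LAMBDA = 2.25                      # assumed loss-aversion coefficient
    STOCK_MU, STOCK_SD = 0.006, 0.05   # assumed monthly real return and volatility, stocks
    BOND_MU, BOND_SD = 0.001, 0.01     # assumed monthly real return and volatility, bonds

    def prospect_value(mu, sd, months, n_sims=200_000):
        """Average loss-averse value of the cumulative return over one evaluation period."""
        monthly = rng.normal(mu, sd, size=(n_sims, months))
        total = np.prod(1.0 + monthly, axis=1) - 1.0
        return np.mean(np.where(total >= 0, total, LAMBDA * total))

    for months in (1, 3, 6, 12, 18):
        stocks = prospect_value(STOCK_MU, STOCK_SD, months)
        bonds = prospect_value(BOND_MU, BOND_SD, months)
        print(f"{months:>2}-month horizon: stocks {stocks:+.4f}  bonds {bonds:+.4f}")
    # The shortest horizon at which stocks pull ahead of bonds is the implied
    # evaluation period; with these assumed numbers the flip comes somewhere
    # between six and twelve months.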

  The implication of our analysis is that the equity premium—or the required rate of return on stocks—is so high because investors look at their portfolios too often. Whenever anyone asks me for investment advice, I tell them to buy a diversified portfolio heavily tilted toward stocks, especially if they are young, and then scrupulously avoid reading anything in the newspaper aside from the sports section. Crossword puzzles are acceptable, but watching cable financial news networks is strictly forbidden.#

  During our year at Russell Sage, Colin and I would frequently take taxis together. Sometimes it was difficult to find an empty cab, especially on cold days or when a big convention was in town. We would occasionally talk to the drivers and ask them how they decided the number of hours to work each day.

  Most drivers work for a company with a large fleet of cabs. They rent the cab for a period of twelve hours, usually from five to five, that is, 5 a.m. to 5 p.m., or 5 p.m. to 5 a.m.** The driver pays a flat amount to rent the cab and has to return it with the gas tank full. He keeps all the money he makes from the fares on the meter, plus tips. We started asking drivers, “How do you decide when to quit for the day?” Twelve hours is a long time to drive in New York City traffic, especially while trying to keep an eye out for possible passengers. Some drivers told us they had adopted a target income strategy. They would set a goal for how much money they wanted to make after paying for the car and the fuel, and when they reached that goal they would call it a day.

  The question of how hard to work was related to a project Colin, George Loewenstein, and I had been thinking about; we called it the “effort” project. We had discussed the idea for a while and had run a few lab experiments, but we had yet to find an angle we liked. We decided that studying the actual decision-making of cab drivers might be what we had been looking for.

  All drivers kept a record of each fare on a sheet of paper called a trip sheet. The information recorded included the time of the pickup, the destination, and the fare. The sheet also included when the driver returned the car. Somehow, Colin managed to find the manager of a taxicab company who agreed to let us make copies of a pile of these trip sheets. We later supplemented this data set with two more we obtained from the New York City Taxi and Limousine Commission. The data analysis became complicated, so we recruited Linda Babcock, a labor economist and Russell Sage summer camp graduate with good econometrics skills, to join us.

  The central question that the paper asked is whether drivers work longer on days when the effective wage is higher. The first step was to show that high- and low-wage days occur, and that earnings later in the day could be predicted by earnings during the first part of the day. This is true. On busy days, drivers make more per hour and can expect to make more if they work an additional hour. Having established this, we looked at our central question and got a result economists found shocking. The higher the wage, the less drivers worked.

  Basic economics tells us that demand curves slope down and supply curves slope up. That is, the higher the wage, the more labor that is supplied. Here we were finding just the opposite result! It is important to clarify just what these results say and don’t say. Like other economists, we believed that if the wages of cab drivers doubled, more people would want to drive cabs for a living. And even on a given day, if there is a reason to think that a day will be busy, fewer drivers will decide to take that day off and go to the beach. Even behavioral economists believe that people buy less when the price goes up and supply more when the wage rises. But in deciding how long to work on a given day that they have decided to work, the drivers were falling into a trap of narrowly thinking about their earnings one day at a time, and this led them to make the mistake of working less on good days than bad ones.††

  Well, not all drivers made this mistake. Driving a cab is a Groundhog Day–type learning experience, in which the same thing happens every day, and cab drivers appear to learn to overcome this bias over time. We discovered that if we split each of our samples in half according to how long the subjects had been cab drivers, in every case the more experienced drivers behaved more sensibly. For the most part, they drove more when wages were higher, not lower. But of course, that makes the effect even stronger than average for the inexperienced drivers, who look very much like they have a target income level that they shoot for, and when they reach it, they head home.
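
  That pattern is just what a strict income-targeting rule, taken literally, would produce. Here is a purely arithmetical sketch with hypothetical numbers (not the paper's data or its econometrics): a driver who quits once earnings hit a fixed target drives the target divided by the hourly wage, so higher-wage days mechanically mean shorter days.

    # Hypothetical illustration, not the paper's data: a strict income targeter
    # drives hours = target / wage, so hours fall as the effective wage rises.
    target = 200.0                   # hypothetical daily income target, in dollars
    for wage in (20.0, 25.0, 30.0):  # hypothetical effective hourly wages
        hours = target / wage
        print(f"wage ${wage:.0f}/hour -> quit after {hours:.1f} hours")
    # Output: 10.0, 8.0, and 6.7 hours -- an inverse relation between the
    # effective daily wage and hours worked.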

  To connect this with narrow framing, suppose that drivers keep track of their earnings at a monthly rather than a daily level. If they decided to drive the same number of hours each day, they would earn about 5% more than they do in our sample. And if they drove more on good days and less on bad days, they would earn 10% more for the same total number of hours. We suspected that, especially for inexperienced drivers, the daily income target acts as a self-control device. “Keep driving until you make your target or run up against the twelve-hour maximum” is an easy rule to follow, not to mention justify to yourself or a spouse waiting at home. Imagine instead having to explain that you quit early today because you didn’t make very much money. That will be a long conversation, unless your spouse is an economist.
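
  To put rough numbers on those counterfactuals, here is a toy calculation with made-up wages and a made-up target (nothing from the paper's data): it compares the target-income rule with driving the same total hours spread evenly across days, and with tilting those hours toward the high-wage days.

    # Toy illustration with hypothetical numbers, not the paper's data.
    import numpy as np

    wages = np.array([18.0, 22.0, 26.0, 30.0, 34.0])  # hypothetical effective hourly wages
    target = 220.0                                     # hypothetical daily income target

    target_hours = target / wages                      # quit each day once the target is hit
    total_hours = target_hours.sum()

    even_hours = np.full_like(wages, total_hours / len(wages))
    tilted_hours = total_hours * wages / wages.sum()   # one simple way to favor busy days

    for label, hours in [("income targeting", target_hours),
                         ("same hours each day", even_hours),
                         ("more hours on busy days", tilted_hours)]:
        print(f"{label:>24}: ${hours @ wages:,.0f} earned over {hours.sum():.1f} hours")
    # With these made-up numbers the even schedule earns about 5% more than income
    # targeting, and the tilted schedule about 10% more, for the same total hours --
    # in line with the magnitudes reported in the text.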

  The cabs paper was also published in that special issue of the Quarterly Journal of Economics dedicated to the memory of Amos.

  ________________

  * A recent experiment shows that behavioral interventions can work in this domain, although it uses technology that did not exist at this time. Simply texting patients to remind them to take their prescribed medications (in this study, for lowering blood pressure or cholesterol levels) reduced the number of patients who forgot or otherwise failed to take their medications from 25% to 9% (Wald et al., 2014).

  † They were able to do this because, for technical reasons, the standard theory makes a prediction about the relation between the equity premium and the risk-free rate of return. It turns out that in the conventional economics world, when the real (inflation-adjusted) interest rate on risk-free assets is low, the equity premium cannot be very large. And in the time period they studied, the real rate of return on Treasury bills was less than 1%.

  ‡ That might not look like a big difference, but it is huge. It takes seventy years for a portfolio to double if it’s growing at 1% per year, and fifty-two years if it’s growing at 1.35%, but only ten years if it’s growing at 7%.
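  (These doubling times follow from compound growth: a sum growing at rate r per year doubles after ln 2 / ln(1 + r) years, which gives roughly 70 years at 1%, 52 years at 1.35%, and 10 years at 7%.)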

  § It is crucial to Samuelson’s argument that he is using the traditional expected utility of wealth formulation. Mental accounting misbehavior such as the house money effect is not permitted in this setup because wealth is fungible.

  ¶ Well, not quite entirely. Here is how he ends the paper. “No need to say more. I’ve made my point. And, save for the last word, have done so in prose of but one syllable.” And truth be told, he slipped in the word “again” somewhere in the paper, no doubt by accident. I owe this reference, and the spotting of the “again,” to the sharp-eyed Maya Bar-Hillel.

  # Of course, this is not to say that stocks always go up. We have seen quite recently that stocks can fall 50%. That is why I think the policy of decreasing the percentage of your portfolio in stocks as you get older makes sense. The target date funds used as default investment strategies in most retirement plans now follow this strategy.

  ** The 5 p.m. turnover is particularly maddening since it occurs just as many people are leaving work. And with many of the fleets located in Queens, far from midtown Manhattan, drivers often start to head back to the garage at 4, turning their off-duty sign on. A recent study found that this results in 20% fewer cabs on the road between 4 and 5 p.m., when compared to an hour before. See Grynbaum (2011) for the full story.

  †† Recall the earlier discussion of Uber and surge pricing. If some of their drivers behaved this way, it would limit the effectiveness of the surge in increasing the supply of drivers. The key question, which is impossible to answer without access to their data, is whether many drivers monitor the surge pricing when they are not driving and hop in their cars when prices go up. If enough drivers respond this way, that would offset any tendency for drivers to take off early after hitting the jackpot on a 10x fare. Of course the surge may help divert cabs to places where demand is higher, assuming the surge lasts long enough for the taxis to get there.

  VI.

  FINANCE:

  1983–2003

  Aside from the discussion of my work with Benartzi on the equity premium puzzle, I have left something out of the story so far: the investigation of behavioral phenomena in financial markets. This topic was, fittingly, a risky one to delve into, but one that offered the opportunity for high rewards. Nothing would help the cause of behavioral economics more than to show that behavioral biases matter in financial markets, where there are not only high stakes but also ample opportunities for professional traders to exploit the mistakes made by others. Any non-Econs (amateurs) or non-Econ behavior (even by experts) should theoretically have no chance of surviving. The consensus among economists, and especially among those who specialized in financial economics, was that evidence for misbehaving would be least likely to be found in financial markets. The very fact that financial markets were the least likely place to harbor behavioral anomalies meant that a victory there would make people take notice. Or, as my economist friend Tom Russell once told me, finance was like New York in Frank Sinatra’s famous song: “If you can make it there, you can make it anywhere.”

  But the smart money was betting against us making it anywhere near New York, New York. We were likely to be stuck in Ithaca, New York.

  21

  The Beauty Contest

  It is difficult to express how dubious people were about studying the behavioral economics of financial markets. It was one thing to claim that consumers did strange things, but financial markets were thought to be a place where foolish behavior would not move market prices an iota. Most economists hypothesized—and it was a good starting hypothesis—that even if some people made mistakes with their money, a few smart people could trade against them and “correct” prices—so there would be no effect on market prices. The efficient market hypothesis, mentioned in chapter 17 about the conference at the University of Chicago, was considered by the profession to have been proven to be true. In fact, when I first began to study the psychology of financial markets back in the early 1980s, Michael Jensen, my colleague at the Rochester business school, had recently written: “I believe there is no other proposition in economics which has more solid empirical evidence supporting it than the Efficient Market Hypothesis.”

  The term “efficient market hypothesis” was coined by University of Chicago economist Eugene Fama. Fama is a living legend not just among financial economists, but also at Malden Catholic High School near Boston, Massachusetts, where he was elected to their athletic hall of fame, one of his most prized accomplishments.* After graduating from nearby Tufts University with a major in French, Fama headed to the University of Chicago for graduate school, and he was such an obvious star that the school offered him a job on the faculty when he graduated (something highly unusual), and he never left. The Booth School of Business recently celebrated his fiftieth anniversary as a faculty member. He and Merton Miller were the intellectual leaders of the finance group at Chicago until Miller died, and to this day Fama teaches the first course taken by finance PhD students, to make sure they get off to the right start.

  The EMH has two components, which are somewhat related but are conceptually distinct.† One component is concerned with the rationality of prices; the other concerns whether it is possible to “beat the market.” (I will get to how the two concepts are related a bit later.)

  I call the first of these propositions “the price is right,” a term I first heard used to describe the stock market by Cliff Smith, a colleague at the University of Rochester. Cliff could be heard bellowing from the classroom in his strong southern accent, “The price is riiiight!” Essentially, the idea is that any asset will sell for its true “intrinsic value.” If the rational valuation of a company is $100 million, then its stock will trade such that the market cap of the firm is $100 million. This principle is thought to hold both for individual securities and for the overall market.

  For years financial economists lived with a false sense of security that came from thinking that the price-is-right component of the EMH could not be directly tested—one reason it is called a hypothesis. Intrinsic value, they reasoned, is not observable. After all, who is to say what the rational or correct price of a share of General Electric, Apple, or the Dow Jones Industrial Average actually is? There’s no better way to build confidence in a theory than to believe it is not testable. Fama tends not to emphasize this component of the theory, but in many ways it is the more important part of the EMH. If prices are “right,” there can never be bubbles. If one could disprove this component of the theory, it would be big news.‡

  Most of the early academic research on the EMH stressed the second component of the theory, what I call the “no free lunch” principle—the idea that there is no way to beat the market. More specifically it says that, because all publicly available information is reflected in current stock prices, it is impossible to reliably predict future prices and make a profit.

  The argument supporting this hypothesis is intuitively appealing. Suppose a stock is selling for $30 a share, and I know for certain that it will soon sell for $35 a share. It would then be easy for me to become fabulously wealthy by buying up shares at prices below $35 and later selling them when my prediction comes true. But, of course, if the information I am using to make this prediction is public, then I am unlikely to be the only one with this insight. As soon as the information becomes available, everyone who is in possession of this news will start buying up shares, and the price will almost instantaneously jump to $35, rendering the profit opportunity fleeting. This logic is compelling, and early tests of the theory appeared to confirm it. In some ways, Michael Jensen’s PhD thesis provided the most convincing analysis. In it he showed that professional money managers perform no better than simple market averages, a fact that remains true today. If the pros can’t beat the market, who can?

  It is somewhat surprising that it was not until the 1970s that the efficient market hypothesis was formally proposed, given that it is based on the same principles of optimization and equilibrium that other fields of economics adopted much earlier. One possible explanation is that financial economics as a field was a bit slower to develop than other branches of economics.

  Finance is now a highly respected branch of economics, and numerous Nobel Prizes have been awarded to economists whose primary work was in finance, including a recent prize in 2013.§ But it was not always so. Although some of the intellectual giants of the field, such as Kenneth Arrow, Paul Samuelson, and James Tobin, all made important contributions to financial economics in the 1950s and 1960s, finance was not a mainstream topic in economics departments, and before the 1970s, in business schools finance was something of an academic wasteland. Finance courses were often similar to accounting courses, where students learned the best methods to figure out which stocks were good investments. There was little in the way of theory, and even less rigorous empirical work.

  Modern financial economics began with theorists such as Harry Markowitz, Merton Miller, and William Sharpe, but the field as an academic discipline took off because of two key developments: cheap computing power and great data. The data breakthrough occurred at the University of Chicago, where the business school got a grant of $300,000 to develop a database of stock prices going back to 1926. This launched the Center for Research in Security Prices, known as CRSP (pronounced “crisp”).

 
