Another big-picture question that begs for more thorough behavioral analysis is the best way to encourage people to start new businesses (especially those who might be successful). Economists on the right tend to stress reducing marginal tax rates on high-income earners as the key to driving growth. Those on the left tend to push for targeted subsidies for industries they want to encourage (such as clean energy) or increased availability of loans from the Small Business Administration, a government agency whose mission is to encourage the creation and success of new enterprises. And both economists and politicians of all stripes tend to favor exemptions from many government regulations for small firms, for whom compliance can be costly. All of these policies are worth consideration, but we rarely hear much from economists about mitigating the downside risk to entrepreneurs if a new business fails, which happens at least half the time, if not more.§ We know that losses loom larger than gains to Humans, so this might be an important consideration. Here is one such suggestion along those lines, offered during an impromptu television interview (so pardon the grammar):
What we need to do in this country is make it a softer cushion for failure. Because what [those on the right] say is the job creators need more tax cuts and they need a bigger payoff on the risk that they take. . . . But what about the risk of, you’re afraid to leave your job and be an entrepreneur because that’s where your health insurance is? . . . Why aren’t we able to sell this idea that you don’t have to amplify the payoff of risk to gain success in this country, you need to soften the damage of risk?
This idea did not come from an economist, not even a behavioral economist. It came from comedian Jon Stewart, the host of The Daily Show, during an interview with Austan Goolsbee, my University of Chicago colleague who served for a while as the chairman of President Obama’s Council of Economic Advisors. Economists should not need the host of a comedy news show to point out that finding ways to mitigate the costs of failures might be more effective at stimulating new business startups than cutting the tax rate on people earning above $250,000 a year, especially when 97% of small business owners in the U.S. earn less than that amount.
Behavioral macroeconomics is on the top of my wish list, but virtually every field of economics could benefit from giving greater scrutiny to the role of Humans. Along with finance, development economics is probably the field where behavioral economists are having the greatest impact, in part because that field has been revitalized by an influx of economists who are testing ideas in poor countries using randomized control trials. Some poor African country is not going to turn into Switzerland overnight, but we can learn how to make things better, one experiment at a time.
We need more evidence-based economics, which can be either theoretical or empirical. Prospect theory is, of course, the seminal evidence-based theory in behavioral economics. Kahneman and Tversky began by collecting data on what people do (starting from their own experiences) and then constructed a theory whose goal was to capture as much of that behavior as possible in a parsimonious way. This is in contrast to expected utility theory, which, as a normative theory of choice, was derived from rationality axioms. Prospect theory has now been repeatedly and rigorously tested with data taken from a wide variety of settings, from the behavior of game show contestants to golf professionals to investors in the stock market. The next generation of behavioral economic theorists, such as Nicholas Barberis, David Laibson, and Matthew Rabin (to name just three), also start with facts and then move to theory.
To produce new theories we need new facts, and the good news is that I am now seeing a lot of creative evidence collection being published in top economics journals. The growing popularity of randomized control trials, starting with the field of development economics, nicely illustrates this trend, and shows how experimentation can expand economists’ tool kit, which has often contained just a single tool: monetary incentives. As we have seen throughout this book, treating all money as the same, and also as the primary driver of human motivation, is not a good description of reality.
A good example of a domain where field experiments run by economists are having an impact is education. Economists do not have a theory for how to maximize what children learn in school (aside from the obviously false one that all for-profit schools are already using the best methods). One overly simplistic idea is that we can improve student performance just by giving financial incentives to parents, teachers, or kids. Unfortunately, there is little evidence that such incentives are effective, but nuances matter. For example, one intriguing finding by Roland Fryer suggests that rewarding students for inputs (such as doing their homework) rather than outputs (such as their grades) is effective. I find this result intuitively appealing because the students most in need do not know how to become better students. It makes sense to reward them for doing things that educators believe are effective.
Another interesting result comes directly from the behavioral economics playbook. The team of Fryer, John List, Steven Levitt, and Sally Sadoff has found that the framing of a bonus to teachers makes a big difference. Teachers who are given a bonus at the beginning of the school year that must be returned if they fail to meet some target improve the performance of their students significantly more than teachers who are offered an end-of-year bonus contingent on meeting the same goals.¶
A third positive result, even further from the traditional tool kit of financial incentives, comes from a recent randomized control trial conducted in the U.K., using the increasingly popular and low-cost method of text reminders. This intervention involved sending texts to half the parents in participating schools in advance of a major math test to let them know that their child had a test coming up in five days, then in three days, then in one day. The researchers call this approach “pre-informing.” The other half of parents did not receive the texts. The pre-informing texts increased student performance on the math test by the equivalent of one additional month of schooling, and students in the bottom quartile benefited most. These children gained the equivalent of two additional months of schooling, relative to the control group. Afterward, both parents and students said they wanted to stick with the program, showing that they appreciated being nudged. This program also belies the frequent claim, unsupported by any evidence, that nudges must be secret to be effective.
Public schools, like remote villages in poor countries, are challenging environments for experimenters. That we are learning important lessons about how to teach our children and keep them motivated should embolden others outside of education and development economics to try collecting data too. Field experiments are perhaps the most powerful tool we have to put the evidence in evidence-based economics.
My wish list for non-economists has a similar flavor. Considering that schools are one of the oldest of society’s institutions, it is telling that we have not figured out how to teach our children well. We need to run experiments to figure out how to improve, and have only just started doing so. What should that tell us about creations much newer than schools, such as modern corporations? Is there any reason to think we know the best way to run them? It is time for everyone—from economists to bureaucrats to teachers to corporate leaders—to recognize that they live in a world of Humans and to adopt the same data-driven approach to their jobs and lives that good scientists use.
My participation in the making of behavioral economics has taught me some basic lessons that, with due caution, can be adopted across circumstances. Here are three of them.
Observe. Behavioral economics started with simple observations. People eat too many nuts if the bowl is left out. People have mental accounts—they don’t treat all cash the same. People make mistakes—lots of them. To paraphrase an earlier quote, “There are Humans. Look around.” The first step to overturning conventional wisdom, when conventional wisdom is wrong, is to look at the world around you. See the world as it is, not as others wish it to be.
Collect data. Stories are powerful and memorable. That is why I have told so many in this book. But an individual anecdote can only serve as an illustration. To really convince yourself, much less others, we need to change the way we do things: we need data, and lots of it. As Mark Twain once said, “It ain’t what you don’t know that gets you into trouble. It’s what you know for sure that just ain’t so.” People become overconfident because they never bother to document their past track record of wrong predictions, and then they make things worse by falling victim to the dreaded confirmation bias—they only look for evidence that confirms their preconceived hypotheses. The only protection against overconfidence is to systematically collect data, especially data that can prove you wrong. As my Chicago colleague Linda Ginzel always tells her students: “If you don’t write it down, it doesn’t exist.”
In addition, most organizations have an urgent need to learn how to learn, and then commit to this learning in order to accumulate knowledge over time. At the very least this means trying new things and keeping track of what happens. Even better would be to run actual experiments. If no one in your organization knows how to go about running a proper experiment, hire a local behavioral scientist. They are cheaper than lawyers or consultants.
Speak up. Many organizational errors could have been easily prevented if someone had been willing to tell the boss that something was going wrong.
One vivid example of this comes from the high-stakes world of commercial aviation, as chronicled by Atul Gawande, a champion of reducing Human error, in his recent book The Checklist Manifesto. Over 500 people lost their lives in a 1977 runway crash because the second officer of a KLM flight was too timid to question the authority of the captain, his “boss.” After mishearing instructions about another plane still on the same runway, the captain continued to speed the plane forward for takeoff. The second officer tried to warn him but the captain dismissed his warning, and the second officer remained quiet from then on—until the two planes collided. Gawande aptly diagnoses the cause to be an organizational failure: “[The airline was] not prepared for this moment. They had not taken the steps to make themselves a team. As a result, the second officer never believed he had the permission, let alone the duty, to halt the captain and clear up the confusion. Instead the captain was allowed to plow ahead and kill them all.”
Another example comes from the mountain climbing disaster on Mount Everest so vividly documented by Jon Krakauer in his book Into Thin Air. During the several weeks spent acclimating and slowly reaching high base camp, the expedition leaders for two major climbing companies, Rob Hall and Scott Fischer, repeatedly stressed to their customers the importance of turning around if they had not reached the summit by the designated hour of 1 p.m. Yet both of these experienced guides lost their lives after violating their own rule. Tragically, none of their subordinates tried to intervene to remind these men about their own rules. As both of these examples illustrate, sometimes, even when you are talking to the boss, you need to warn of the threat of an impending disaster.
The making of behavioral economics has included a lot of speaking up to the high priests of economics about the unrealism of hyperrational models. I can’t say that I recommend anyone take as risky a career path as I did. I was in unusual circumstances. I was lucky to run into Kahneman and Tversky at just the right moment in time. And as my thesis advisor so bluntly put it, my prospects as an economist were not all that bright: “We didn’t expect much of him” says it all. When your opportunity costs are low, it pays to take risks and speak up, especially if the course you pursue is as much fun as the one I have taken.
But we cannot expect people to take risks, by speaking up or in other ways, if by so doing they will get fired. Good leaders must create environments in which employees feel that making evidence-based decisions will always be rewarded, no matter what outcome occurs. The ideal organizational environment encourages everyone to observe, collect data, and speak up. The bosses who create such environments are risking only one thing: a few bruises to their egos. That is a small price to pay for increasing the flow of new ideas and decreasing the risks of disasters.
Although I have at times been critical of economists in this book, I am entirely optimistic about the future of economics. One sign that I find particularly encouraging is that economists who do not identify themselves as “behavioral” wrote some of the best behavioral economics papers published in recent years. These economists simply do solid empirical work and let the chips fall where they may. I already mentioned two such papers earlier in the book: Justine Hastings and Jesse Shapiro’s paper on the mental accounting of gasoline, and the paper by Raj Chetty and his team analyzing Danish data on pension saving. Recall that the Chetty team finds that the economic incentive for saving via tax breaks has virtually no effect on behavior. Instead, 99% of the work is done by the choice architecture of the plans, such as the default saving rate—in other words, SIFs. This paper is just one of many in which Chetty and his team of collaborators have found that behavioral insights can improve our understanding of public policy.
When all economists are equally open-minded and are willing to incorporate important variables in their work, even if the rational model says those variables are supposedly irrelevant, the field of behavioral economics will disappear. All economics will be as behavioral as it needs to be. And those who have been stubbornly clinging to an imaginary world that consists only of Econs will be waving a white flag, rather than an invisible hand.
________________
* It also didn’t hurt that financial markets offer the best opportunities to make money if markets are misbehaving, so a lot of intellectual resources have gone into investigating possible profitable investment strategies.
† We have benefited from some “natural” experiments, such as the fall of the Berlin Wall, that have allowed us to compare market vs. planned economies.
‡ Even the label given to a tax cut may be relevant. Epley et al. (2006) find that people report a greater propensity to spend from a tax cut that is called a “bonus” rather than a “rebate.”
§ Of course, not everyone should be encouraged to become an entrepreneur. Many start with unrealistic expectations about the chance of success: the vast majority believe their chance of success to be far above average, and a third or so believe their success is a sure thing (Cooper, Woo, and Dunkelberg, 1988)! Perhaps the Small Business Administration should offer training on base rates to budding new business owners, to help curb any overconfidence.
¶ One caveat to this finding is that the bonus clawback is not popular with teachers, one reason we almost never see “negative” bonuses in the workplace. Taking money back may be viewed as “unfair.”
NOTES
xi “The foundation of political economy”: Pareto ([1906] 2013), ch. 2, p. 21.
Preface
xiv Choices, Values, and Frames: Kahneman and Tversky (2000).
xv article about my work for the New York Times Magazine: Lowenstein (2000).
xv When Genius Failed: Lowenstein (2001).
Chapter 1: Supposedly Irrelevant Factors
7 human “passions”: Smith ([1776] 1981, [1759] 1981).
9 behavior of peasant farmers: For evidence on Human farmers making decisions like these, see Duflo, Kremer, and Robinson (2011), Suri (2011), and Cole and Fernando (2012). On the one hand, farmers do seem responsive to information if communicated to them, and they understand how beneficial fertilizer will be on their land. On the other, they also increase their purchase and usage of fertilizer in response to simple behavioral nudges that would have no impact on an Econ.
Chapter 2: The Endowment Effect
12 “Let a six-year-old girl”: Schelling (1958).
15 “The Value of Saving a Life”: Thaler and Rosen (1976).
19 think about this decision correctly: At Super Bowl XXXV, Alan Krueger (2001) asked fans who had been able to buy tickets for $400 or less (the face value of the tickets) about their willingness to buy and sell at the market price of approximately $3,000. An overwhelming majority (86%) would have been unwilling to buy (if they hadn’t managed to secure tickets) yet would also be unwilling to sell at this price.
Chapter 3: The List
21 “hindsight bias”: Fischhoff (1975).
22 “Judgment Under Uncertainty: Heuristics and Biases”: Tversky and Kahneman (1974).
24 twice as many gun deaths by suicide: DeSilver (2013), reporting on 2010 data from the Centers for Disease Control and Prevention.
Chapter 4: Value Theory
25 “Prospect Theory”: Kahneman and Tversky (1979).
27 the theory of human capital formation: See Becker (1962, 1964).
27 Bernoulli in 1738: See Bernoulli ([1738] 1954) for an English translation.
29 The Theory of Games and Economic Behavior: von Neumann and Morgenstern (1947).
30 Baumol had proposed an alternative: Baumol (1962).
Chapter 5: California Dreamin’
35 “Consumer Choice: A Theory of Economists’ Behavior”: Published as Thaler (1980).
38 Thinking, Fast and Slow: Kahneman (2011).
40 “slow hunch”: Johnson (2010).
40 private value for a token: See Smith (1976).