
Willful


by Richard Robb


  4. Becker, “Theory of Social Interactions,” 1074–1083. See the Online Technical Appendix (www.willful-appendix.com) for a simple example illustrating the Rotten Kid Theorem.

  5. Singer, “Drowning Child.”

  6. As Larissa MacFarquhar describes in Strangers Drowning: Voyages to the Brink of Moral Extremity, extreme effective altruists give kidneys to strangers. One of the book’s subjects was racked with guilt over buying a candy apple rather than using the four dollars for a malaria net to send to Africa. Extreme effective altruists do not promote Rawlsian justice, exactly, or seek to help the absolute worst off, although those in deep poverty are ripe targets for aid. Another of the book’s subjects, Aaron, refused to pay off the credit-card debt of a homeless former girlfriend because starving people in the developing world needed the money more. Extreme effective altruists sleep or relax only to restore energy so they can earn more money to donate; Aaron optimized his motions, placing his computer in his bedroom so “he could roll out of bed and push the on button with one movement” (54). It reached the point that “everything Aaron bought, even the smallest cheapest thing, felt to him like food or medicine snatched from someone dying” (44).

  7. Smith, Theory of Moral Sentiments, 1–2.

  8. In terms of an optimization problem, this is technically a “corner solution.” I give zero. Abstracting from feelings of guilt, a Yankees fan who gives nothing would theoretically take money away from the team if he could do so without getting caught. The exception would be the knife-edge condition, where care is just about to rise to the surface. In this case, an individual who gives nothing but is indifferent to giving one dollar would not take money from the cause if given the chance.

  9. A simple numerical example will clarify this point. Suppose Consumer #1 cares about both Consumer #2 and Consumer #3, who in turn care about only their own consumption. The following formula describes Consumer #1’s utility, which depends on the consumption of all three:

  Utility of Consumer #1 = U(c1, c2, c3) = ln(c1) + 0.75 ln(4 + Δc2) + 0.5 ln(14.2 + Δc3).

  This formula tells us that #2 starts out poorer than #3, with 4 units versus 14.2. 4 + Δc2 is #2’s consumption, which she obtains from her 4-unit endowment plus Δc2 transferred from #1. 14.2 + Δc3 is #3’s consumption, which she gets from her 14.2-unit endowment and a transfer of Δc3 from #1. The term ln(4 + Δc2) is the utility of #2 as she experiences it, ln(14.2 + Δc3) is the utility of #3, and ln(c1) is the utility that #1 derives from her own consumption.

  Because #1 weights #2’s consumption by 0.75, she must care less about #2’s consumption than her own. She cares about #3’s consumption even less, weighting it at 0.5. This does not make #1 selfish: although she privileges her own consumption, her care for both of the others is genuine and reasonably strong.

  #1 maximizes her utility subject to her budget constraint. Assuming her income is 10, then her budget is:

  c1 + Δc2 + Δc3 = 10.

  Both transfers must be nonnegative, i.e., #1 can’t take money away from #2 or #3 to produce a preferable result:

  Δc2 ≥ 0,

  Δc3 ≥ 0.

  If #1 maximized her utility while ignoring the constraints that Δc2 ≥ 0 and Δc3 ≥ 0, she would simply equate the marginal utility of her own consumption to the (weighted) marginal utility of #2 and #3. Given our numerical assumptions, she would want to transfer consumption away from #3 to both herself and #2. That’s because #3 is so rich, the marginal benefit of #3’s consumption (from #1’s perspective) is small, particularly considering that #3’s welfare has a 0.5 weighting.

  Thus the constraint Δc3 ≥ 0 binds, and #1 sets Δc3 = 0. Her solution is c1* = 8 and Δc2* = 2. That is, #1 achieves the greatest utility by transferring 2 units to #2, transferring nothing to #3, and consuming the remaining 8 units of her budget.

  In this example, #1 cares about #3, just not enough to do anything about it in light of #3’s wealth.

  But now suppose every $1 that #1 transferred to #3 would produce a $4 increase in #3’s consumption (perhaps #1 has an asset that #3 values more than she does). In this case, #1’s utility takes the following form: ln(c1) + 0.75 ln(4 + Δc2) + 0.5 ln(14.2 + 4 × Δc3).

  Maximizing this expression subject to #1’s budget, the new solution is c1* = 7.8, Δc2* = 1.85 and Δc3* = 0.35. The care that #1 has always felt for #3 can now be observed. Transferring to #3 now raises the utility of #1, even though #3 is richer than everyone else and #1 discounts #3’s welfare by a factor of 2.

  Everything discussed in this endnote is care altruism, perfectly consistent with rational choice.
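  Both solutions are easy to verify numerically. The sketch below (my own illustration, not part of the original note) brute-forces #1’s problem over a grid of feasible transfers:

```python
import math

def best_allocation(income=10.0, boost3=1.0, step=0.05):
    """Grid-search Consumer #1's problem: maximize
    ln(c1) + 0.75*ln(4 + d2) + 0.5*ln(14.2 + boost3*d3)
    subject to c1 + d2 + d3 = income, d2 >= 0, d3 >= 0.
    boost3 scales how much each unit sent to #3 raises #3's consumption."""
    best_u, best_alloc = -math.inf, None
    n = int(round(income / step))
    for i in range(n + 1):              # candidate transfer d2 to #2
        for j in range(n + 1 - i):      # candidate transfer d3 to #3
            d2, d3 = i * step, j * step
            c1 = income - d2 - d3       # #1 consumes whatever she keeps
            if c1 <= 0:
                continue
            u = (math.log(c1) + 0.75 * math.log(4 + d2)
                 + 0.5 * math.log(14.2 + boost3 * d3))
            if u > best_u:
                best_u = u
                best_alloc = (round(c1, 2), round(d2, 2), round(d3, 2))
    return best_alloc

print(best_allocation())          # (8.0, 2.0, 0.0): the corner solution
print(best_allocation(boost3=4))  # (7.8, 1.85, 0.35): the transfer to #3 turns on
```

  The first call reproduces the corner solution of the base case; the second, where each unit transferred raises #3’s consumption by 4, reproduces the interior solution.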

  10. Montaigne, “Taste of Good and Evil,” 43.

  11. See Andreoni, “Giving with Impure Altruism.”

  12. Busboom, “Bat 21,” 30.

  13. De Waal, Leimgruber, and Greenberg, “Giving Is Self-Rewarding,” 13685–13687. The authors controlled for various biases, such as whether the monkeys favored their left or right hand. The token exchange was hidden from the monkeys’ groups to control for the possibility that the subject would choose the prosocial token out of fear that selfish behavior would be punished after the experiment.

  14. We’ve said that altruism has to impose a cost on the altruist. This example satisfies the definition (barely) because the subject monkeys had to go to the trouble of figuring out which token was which and exerting the energy to make the prosocial choice. The subject monkeys also had to set aside concerns that any food going to the partner monkey would eventually come out of their allocation.

  15. The couple run an inefficient household, leaving opportunities on the table to make both better off. While an inefficient household proves that they don’t exhibit care altruism, an efficient household would not be sufficient to prove that they did. For instance, they may not care at all about each other but be very good at coordinating. The husband might then walk the dog so the wife can work late and contribute the extra money to joint expenditures that benefit the husband. To distinguish care altruism from effective coordination, we’d have to consider a case where one spouse was unaware that the other had advanced his interests.

  16. Bentham, Introduction to the Principles of Morals and Legislation, 36.

  17. Sartre, Existentialism Is a Humanism, 30–31.

  SEVEN

  Public Policy

  1. This is not the textbook normative versus positive distinction. I assume the policymaker knows everyone’s preferences, including values—e.g., environmental standards, income equality, equality of opportunity, and so forth. With all this information, the policymaker can factor values into the pursuit of Pareto efficiency.

  2. Taylor, Rationality, 20. Taylor’s aim is to “overthrow” economic theory by identifying examples with no Pareto-efficient solution. He starts his book with several such stories of people refusing to give something up for money. Taylor, however, is launching his assault on a straw man. He has not proven that the rational choice model is useless—simply that it does not apply to every decision.

  3. Cicero, De Officiis, 319–325; Foot, “Problem of Abortion,” 23.

  4. Cicero, De Officiis, 321.

  5. To the extent that some people are hungrier than others, the distribution of income is unequal, Rhodes lacks a “social safety net” that ensures minimal grain allotments to the poor, and the wait before the next ships arrive is substantial, the allocation of grain will deviate from the social optimum if the merchant fails to disclose. If everyone knew that more grain was coming, some richer, less-hungry people would wait to make their purchase, thereby driving prices down before the next ships arrive and allowing poorer, hungrier people to eat sooner.

  6. Thomson, “Trolley Problem,” 1397–1399.

  7. If we can truly exclude the possibility of the questioner cheating, however, then odds of 1 in a trillion for $0.25 should appeal to many people. Playing 10 million times, you’d have about a 1 in 100,000 chance of death in exchange for $2.5 million. To put this in context, assume a person can expect to live another 50 years. The expected loss of life from betting with 1 in 100,000 odds of losing is 262 minutes. Assuming each cigarette cuts 10 minutes from a person’s life, this wager would correspond to a smoker skipping 26 cigarettes for $2.5 million. If the bet feels too risky to you, make the odds 1 in 10 trillion. If you still resist, I suspect that you just don’t trust the questioner to play fairly, no matter what anyone says.
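  The arithmetic behind this comparison can be checked in a few lines; the sketch below (my own, assuming a 365-day year) recovers the note’s figures:

```python
# Check the wager arithmetic from the note above (assumes a 365-day year).
p_death = 1 - (1 - 1e-12) ** 10_000_000   # ~1 in 100,000 over 10 million plays
winnings = 0.25 * 10_000_000              # $2,500,000 collected at $0.25 a play
minutes_in_50_years = 50 * 365 * 24 * 60
expected_minutes_lost = p_death * minutes_in_50_years   # ~262.8 minutes
cigarettes_equivalent = expected_minutes_lost / 10      # ~26 cigarettes' worth
print(winnings, round(expected_minutes_lost), round(cigarettes_equivalent))
```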

  8. Searle, “Philosophy of Society, Lecture 20,” 41:00–58:00.

  EIGHT

  Changing Our Minds

  1. For a formal statement and proof of this proposition, see my 2009 paper “Nietzsche and the Economics of Becoming,” and for a more general, upgraded proof, see the Online Technical Appendix (www.willful-appendix.com). In a very similar setup to “Nietzsche and the Economics of Becoming,” Simone Galperti and Bruno Strulovici independently prove the same result: “an agent who cares about his well-being beyond the immediate future [i.e., more than one period ahead] cannot be time consistent” (“From Anticipations to Present Bias,” 9; see also “Forward-Looking Behavior Revisited,” 13). This means a person who looks ahead more than one period will inevitably change his mind. In a 2017 follow-up, Galperti and Strulovici apply the theorem to intergenerational transfers, replacing future selves with future generations linked by altruism. The Online Technical Appendix shows how the notation and assumptions of Galperti and Strulovici map onto my own.

  2. Of course, the agent could maximize given perfect knowledge of how he’ll behave in the future, because any function can be maximized given regularity conditions. In the first period the agent chooses what is best, anticipating what he’ll freely choose in the next period. The resulting consumption plan will differ, however, from the optimal plan assuming control over the future. See the Online Technical Appendix (www.willful-appendix.com) for details. A certain stability also arises in models where the individual is made up of subagents, or homunculi, who enter into dynamic bargaining games and can undertake actions on the individual’s behalf. The one with the long-term interest will punish the one with the short-term interest if it causes the individual to indulge too much or too often. “Wasteful” procrastination can emerge from the bargaining. But even as the subagent tried to optimize its own preferences, it would still be subject to the limits on time-consistent preferences. See Ross, “Economic Models of Procrastination,” for a summary.

  3. To express these steps mathematically, define ht as the hedonic index that measures a person’s well-being at time t:

  ht = U(ct) + [1/(1 + ρ)] ht+1.

  Well-being today depends on the utility derived from current-period consumption, U(ct), and the discounted value of the next period’s well-being. Repeated substitution leads to a representation of the agent’s objective that is familiar to economists:

  ht = U(ct) + [1/(1 + ρ)] U(ct+1) + [1/(1 + ρ)]^2 U(ct+2) + … + [1/(1 + ρ)]^n U(ct+n),

  where n = number of periods.

  The solution that maximizes ht in the earlier expression is time consistent, so an optimal path from the perspective of t = 1 will remain optimal as future periods arrive. It works over an infinite horizon as long as ρ is greater than the interest rate. See the Online Technical Appendix (www.willful-appendix.com) for details.
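  The time consistency of geometric discounting is easy to illustrate. In the sketch below (my own; log utility, a zero interest rate, and β standing in for 1/(1 + ρ) are illustrative assumptions), a consumption plan chosen at t = 1 survives re-optimization when period 2 arrives:

```python
def plan(wealth, beta, periods):
    """Optimal path for sum_t beta**t * ln(c_t) subject to sum_t c_t = wealth.
    With log utility and a zero interest rate, consumption is proportional
    to the discount weight beta**t."""
    weights = [beta ** t for t in range(periods)]
    total = sum(weights)
    return [wealth * w / total for w in weights]

original = plan(100, beta=0.9, periods=4)
# Re-optimize after period 1's consumption is gone: the tail of the
# original plan is exactly what the agent chooses again.
revised = plan(100 - original[0], beta=0.9, periods=3)
print(all(abs(a - b) < 1e-9 for a, b in zip(original[1:], revised)))  # True
```

  Under hyperbolic discounting (note 6 below), the same re-optimization would produce a different tail, which is the preference reversal in question.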

  4. See the Online Technical Appendix (www.willful-appendix.com). Formally, this analysis also applies to intergenerational transfers. It is impossible for the generations of a family to plot a time-consistent course connected by mutual care. But the way generations interact differs fundamentally from an individual confronting her own life in different time periods, so the near impossibility of time consistency is less remarkable.

  5. That formula is geometric discounting. See the Online Technical Appendix (www.willful-appendix.com).

  6. Someone afflicted with hyperbolic discounting might prefer $50 today to $100 in one year while also preferring $100 in six years to $50 in five years. When five years have passed, that second preference will reverse, and he’ll prefer $50 right away to waiting a year for $100.
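  This reversal can be reproduced with a hypothetical hyperbolic discount curve, V = amount/(1 + k × delay); the functional form and k = 2 are my own illustrative choices:

```python
def present_value(amount, years_away, k=2.0):
    """Hyperbolic discounting: value falls as 1 / (1 + k * delay)."""
    return amount / (1 + k * years_away)

# Today: $50 now beats $100 in a year, yet $100 in six years beats $50 in five.
print(present_value(50, 0) > present_value(100, 1))   # True
print(present_value(100, 6) > present_value(50, 5))   # True
# Five years later, the distant choice has become "now versus one year" --
# the same comparison as the first line, so the earlier preference reverses.
print(present_value(100, 1) > present_value(50, 0))   # False: he takes the $50
```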

  7. Gazzaniga, Ethical Brain, 148–149.

  8. Libet, “Do We Have Free Will?,” 49.

  9. Nagel, View from Nowhere, 127.

  10. Schopenhauer, World as Will and Idea, 3:118.

  11. A 2 percent average annual chance of death in the ancestral environment is consistent with the observation that real interest rates tend to hover around 2 percent; that is, we are compensated by 1 to 2 percent per year above the rate of inflation for waiting.

  12. Nozick, Nature of Rationality, 14–15.

  NINE

  Homo Economicus and Homo Ludens

  1. In 1920, the Journal of Political Economy was essentially equation-free (the single equation that year was an accounting identity). By 1930, 15 percent of papers had at least one equation, although none used more than high-school-level math. By 1960, the proportion of papers with at least one equation had climbed to 30 percent. It shot up to 90 percent in 1970, and higher math became commonplace.

  2. As Jacob Viner wrote in 1925, “There are many who would place greater stress on the importance of the process of desire-fulfillment itself than on the gratifications or other states of consciousness which result from such fulfillment” (“Utility Concept in Value Theory,” 641).

  3. Veblen, “Limitations of Marginal Utility,” 620.

  4. Skidelsky, John Maynard Keynes, 224.

  5. Huizinga, Homo Ludens, 13.

  6. Keynes, General Theory of Employment, Interest, and Money, 160.

  7. See Dow and Dow, “Animal Spirits Revisited.”

  8. Sen, “Maximization,” 747.

  9. An experimental literature stretching back to 1956 (Brehm, “Postdecision Changes”) finds that subjects who choose between two equally valuable options will reevaluate the options after they choose. Subjects tend to increase their assessment of the value of the option they select and decrease their assessment of the value of the one they reject. This is the case even if they choose randomly—a tendency to favor their choice outweighs any “buyer’s remorse.” But they stop short of upgrading their assessment if a computer picks for them. Somehow they settle into a choice if it results from the free exercise of their will (see Sharot, Velasquez, and Dolan, “Do Decisions Shape Preference?”). While these experiments suggest that consumers care about choosing in itself, I would not go so far as to say that “choices influence preferences.” For economics, if not psychology, “preferences” signifies what people actually choose, not how they rate what they’ve already done.

  10. Smith, Wealth of Nations, 18.

  11. See Delmonico’s Dinner Menu, 1899.

  12. Iyengar and Lepper, “When Choice Is Demotivating.”

  13. Tversky and Shafir, “Disjunction Effect,” 305–306.

  14. Personal finance guides counsel living by a budget, but many people resist this advice. We want each spending decision to feel, to the extent possible, like a unique act of will.

  15. Oprea, “Survival versus Profit Maximization,” 2227, 2234–2235.

  16. Plutarch, Lives of Illustrious Men, 438.

  17. Ricardo, On the Principles, 158–170.

  18. Knight, “World Justice, Socialism, and the Intellectuals,” 442.

  19. See Karabarbounis, “Labor Wedge,” 212.

  20. Keynes, “Economic Possibilities,” 365–373.

  21. The grandnephew’s comment is from Kestenbaum, “Keynes Predicted.”

  22. Phelps, Mass Flourishing, 19–40; Phelps, “The Good Economy,” 6.

  23. Nutton, “Seeds of Disease,” 11.

  24. To understand why these discoveries never exploded into business and commercial activity, we can look to Aristotle, who devalued work aimed at material well-being. Aristotle distinguished between the technical knowledge needed to perform a craft and abstract knowledge, the jurisdiction of philosophers. He argued that since humans alone engage in rational thought, it must be our highest aim. Technical knowledge addresses the needs for food and physical comfort we share with animals. This attitude has proven difficult to shake. As Andrzej Rapaczynski observes, “The Christian and aristocratic worlds provided … an incredibly powerful vehicle for carrying the Aristotelian mindset all the way to our times” (“Moral Significance of Economic Life,” 5).

  25. See, e.g., Alexandridis, Petmezas, and Travlos, “Gains from Mergers and Acquisitions,” 1671.

  SUMMING UP

  Purposeful versus For-Itself

  1. Bentham, Introduction to the Principles of Morals and Legislation, 29–42.

  2. John Stuart Mill was literally born into utilitarianism. Bentham, his father’s close friend, trained Mill in his theories from childhood. No wonder Mill had a mental breakdown when he was twenty years old.

  3. Aristotle, On Rhetoric, 116.

  4. Kahneman, Thinking, Fast and Slow, 225.

  5. I also wonder about the extent to which cross-country happiness data simply measures how people in different cultures respond to surveys. According to the World Happiness Report 2016 (Helliwell, Layard, and Sachs), Europe’s poorest country, Moldova, with GDP per capita of $1,900, a 10 percent population decline since the 2004 census, and an average life expectancy of 67 years, ranked ahead of South Korea (GDP per capita of $28,000). It nearly tied with Italy (GDP per capita of $30,000) and Japan (GDP per capita of $37,000). Moldova, the third-least-visited country in Europe (after Liechtenstein and San Marino, each of which has around 1 percent of Moldova’s population and half as many tourists), hosted 120,000 tourists in 2016. This compares with 17 million tourists to South Korea, 52 million to Italy, and 24 million to Japan. Hong Kong, with GDP per capita of $43,000 and 27 million tourists per year, is tied on the happiness scale with Somalia, with GDP per capita of $450 and no tourism (for these data and other comparisons, see worldbank.org, imf.org, and unwto.org). Perhaps people in Japan, South Korea, and Italy are reluctant to boast, while those from repressive regimes or with cultural norms that discourage complaining report that they are happy regardless of their circumstances.

 
