Assholes

by Aaron James


  The book is the product of many minds and many enjoyable conversations. For regular exchanges and encouragement, I especially thank Taylor Blodgett, Marshall Cohen, Margaret Gilbert, Sunny Karnani, and David Tannenbaum, with special thanks to Fiona Hill and Nicholas Jolley for discussion of public and historical figures, and to Cristiana Sogno for suggesting a Horatian version of the Letter to an Asshole. I am indebted to Jean-Paul Carvalho and Jennifer Herrera for help with the game theory appendix, and to the UC Irvine graduate students in our asshole discussion group at the Anteater Tavern, which included Andreas Christiansen, Michael Duncan, Matthew Dworkin, Justin Harvey, Violet McKeon, Daniel Pilchman, Valentina Ricci, Justin Thomsen, and Amanda Trefethen. For their insights, I also thank Secil Artan; Graeme Bird and Kerstin Maas; Xavier Cornillie; John Dent; Joseph Dowd; Ed Feuer; Luca Ferrero; Steve Finlay; Mark Fiocco; Al Franklin; Samuel Freeman; Julia Fremon; Brad Frohling; Nathan Fulton; Mike Granieri; Sean Greenberg; Phil Goodrich; Liz Harman; Nicole Hassoun; Matt Hayden; Jeffrey Helmreich; Pamela Hieronymi; Kristin Huerth; Nadeem Hussain; Linda Jack; Alex, Alin, Elizabeth, and Wendy James; Thijis Janssen; Mark Johnson; Melissa Johnson; A. J. Julius; Ken Keen; Erin Kelly; Bonnie Kent; Louise Kleszyk; Rahul Kumar; Doug Lavin; Brian Leiter; Alissa Maitino; Daniel McClure; Dan Oberto; Alexi Patsaouras; Casey Perin; Cynthia Pilch; Jesse and Elaine Pike; David Plunkett; Mike Powe; Ankita Raturi; Andy Reath; Holly Richardson; Vanessa Rollier; Jacob Ross; Chris Sanita; Debra Satz; Lucy Scanlon; T. M. Scanlon; Ricky Schaffer; Tamar Schapiro; Martin Schwab; Bob Scott; Brian Skyrms; Kelly Slater; David W. Smith; Larry Solum; Lucho Soto; Eric Schwitzgebel; Dan Speak; David Sussman; Julie Tannenbaum; Paul Tannenbaum; Peter and Sally Tannenbaum; R. Jay Wallace; Leif Wenar; Stephen White; Douglas Woodward; Gideon Yaffe; the 2009–10 fellows at CASBS; and the audience at a CASBS continuing-studies lecture at Stanford University. I apologize to anyone I forgot to mention; the book is still better because we talked. Finally, I am grateful to Peet’s Coffee at the University Center in Irvine, California, where much of the book was drafted during the fall of 2011.

  I share the reluctance about “pop philosophy” widely felt among professional philosophers, and it has been uncomfortable to relax (some might say abandon) the usual professional standards of rigor and decorum in favor of much tomfoolery. The risk of transgression, of doing bad work, and of forgoing important projects, even if temporarily, has seemed tolerable only given my hopes of offering a different kind of contribution: a book that shares all the fun conversations I’ve been having; that gives succor to those afflicted by an asshole; that offers a glimpse into philosophy through a basic human concern; and that draws together standard philosophical themes in a new way. I thank the many people who encouraged me in this. I also thank those who urged caution, for caring enough to say something.

  APPENDIX

  A Game Theory Model of Asshole Capitalism

  Our story of asshole capitalism’s decline in chapter 6 is inspired by the formal theory of games. I therefore consulted Oxford-trained UC Irvine game theorist Jean-Paul Carvalho on how this distinctive process of decline might be modeled. Over a fruitful lunchtime discussion, Carvalho thought up and proved (with simple, mainly illustrative math) a possible model that captures central features of the way asshole capitalism undoes itself. This appendix describes the model for the general reader while providing some background explanation of game theory. (It was written with the generous help of talented UC Irvine logic and philosophy of science graduate student Jennifer Herrera.)

  The theory of games studies how different agents would strategically interact, given the choices of other agents. Each player in the game is said to have preferences for how things go, which he or she acts on, depending on what other agents are choosing. The theorist considers what patterns of action emerge when such players interact, in a single round of play or in repeated interactions.

  In the “stag hunt” game inspired by Rousseau, for example, each player—“you” and “I”—can either hunt stag or hunt hare. If we both hunt stag, we both eat more bounteously than if we each separately hunted hare. So we both prefer to hunt stag. However, both of us must join the hunt—we both must cooperate—in order for either of us to reap this greater benefit. But neither of us can know whether the other will in fact show up for the hunt. If you show up and I don’t, you miss out on the chance of hunting hare, or indeed of hunting at all, and wind up with nothing.

  What should you do? Take a risk on the greater benefit of bounteous eating? Or play it safe and simply hunt hare on your own from the start? The answer depends on what you think I will do, how sure you are in that belief, and what risks you are willing to take. Hunting stag is obviously the best option for both of us, and it is perfectly possible. Nevertheless, the best course might not be taken, simply because of our uncertainty about what the other will do. If you are like most people, you won’t pass up a modest but certain benefit for a better but uncertain possibility of gain, and so you’ll choose to hunt hare on your own instead of taking a risk that I won’t show up to hunt stag. As game theorists put the idea, because the situation in which we both hunt hare is less risky, given that we are uncertain about what the other player will do, hunting hare is the “risk-dominant” choice.
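
  To make the risk-dominance idea concrete, here is a minimal sketch in Python. The payoff numbers are my own illustrative assumptions (the text gives none); all that matters is their ordering: hunting stag together beats hunting hare, which beats showing up for the stag hunt alone.

```python
# Stag hunt with illustrative payoffs (assumed numbers; only the ordering
# matters: stag together > hare > stag alone).
STAG, HARE = 0, 1

# payoff[my_choice][your_choice] = what I eat
payoff = [
    [4, 0],  # I hunt stag: 4 if you join me, 0 if you go after hare
    [3, 3],  # I hunt hare: 3 no matter what you do
]

def expected_payoff(my_choice, p_stag):
    """My expected payoff if I think you will hunt stag with probability p_stag."""
    return p_stag * payoff[my_choice][STAG] + (1 - p_stag) * payoff[my_choice][HARE]

# Completely unsure what you will do (p_stag = 0.5), I do better hunting hare:
print(expected_payoff(STAG, 0.5))  # 2.0
print(expected_payoff(HARE, 0.5))  # 3.0  <- hare is the risk-dominant choice
```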

  In analyzing games, the game theorist is looking for different situations of “equilibrium.” As defined by mathematician John Nash, the players are in a situation of equilibrium when each adopts the best response he or she can, given that the other players are playing their best responses (where each decides according to specified preferences, e.g., for hunting stag over hunting hare, with a belief about what the other player will do). Given these motivations, no player has incentive to deviate from a situation of equilibrium—unless there is some change in what others are choosing. As long as no perturbation or shock interrupts the system, there is something of a stable balance, much as a large object (e.g., a plank or the Eiffel Tower) might sit at rest upon a fulcrum, with each of its sides balancing the other. (A gust of wind, a “shock” to the system, would throw the situation into disequilibrium or shift it to a new balance point.)

  Now notice that a situation of social equilibrium needn’t involve cooperation. In the stag hunt game, there are two situations of equilibrium, a cooperative equilibrium (both players hunt stag) and a noncooperative equilibrium (both players hunt hare). Each does best for him- or herself in hunting stag if others also cooperate. But, if everyone else goes it alone, each does best for him- or herself in hunting hare. There are two ways of doing as well as one can for oneself, given the choices of others, even as the players will do the very best if cooperation is established. If the cooperative equilibrium is not already established, the challenge is to get it started, by giving all parties enough assurance that others will be cooperating, so that all can move to a cooperative footing. If the cooperative equilibrium is already established, the challenge is to keep up assurances that others are still cooperating, so that the cooperative equilibrium doesn’t collapse into the noncooperative equilibrium.
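
  Continuing the sketch above, a few lines of Python confirm that the stag hunt has exactly the two pure-strategy equilibria just described: a profile is a Nash equilibrium when neither player gains by unilaterally switching.

```python
# Check every pure-strategy profile of the stag hunt for Nash equilibrium
# (reuses the payoff matrix from the previous sketch; the game is symmetric,
# so your payoff in profile (mine, yours) is payoff[yours][mine]).
def is_nash(mine, yours):
    i_stay = payoff[mine][yours] >= payoff[1 - mine][yours]
    you_stay = payoff[yours][mine] >= payoff[1 - yours][mine]
    return i_stay and you_stay

names = ["stag", "hare"]
for mine in (STAG, HARE):
    for yours in (STAG, HARE):
        if is_nash(mine, yours):
            print(names[mine], names[yours])
# prints: stag stag
#         hare hare
```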

  As an example of a real-life shift from a cooperative equilibrium to a worse, noncooperative one, think of a bank run or financial crisis that results from a sudden lack of “confidence.” As long as all are keeping their money invested (by keeping their savings with a bank or by otherwise lending or investing their capital), all are better off under this situation of “cooperation.” But if others aren’t staying invested, if others aren’t cooperating, each will do best not to be invested. Both are equilibrium situations. But whether the group stays in the cooperative equilibrium depends on how confident each is that others are also cooperating. When the level of confidence suddenly drops because of a shock to the system (e.g., investment suddenly becomes less appealing, and each starts betting that the others will pull their money out), the group will move to a new equilibrium of noncooperation.

  This, with a few further complications, is similar to how we imagine decline in a system of asshole capitalism. To see how this might work more precisely, consider a party. At the party, most people are unhip, but no one really minds, as long as there is plenty of beer. The beer will keep flowing only if it is regularly replenished by hourly beer runs. These require that each partygoer chip in with a modest contribution for each run. People like a good party, and they are happy to contribute, but their preference is conditional: in order for it to be worthwhile for them to contribute, most of the partygoers must also be contributing toward the cost of beer. So if the beer fund falls below a critical threshold, no one will be willing to contribute any longer. The fun will be over. (As people become sober, the party won’t be enjoyable with so many unhip people.)

  Let us imagine that the party starts off swimmingly. Everyone is having a fine time and regularly paying for beer. Everyone cooperates by making the contribution necessary for all to enjoy themselves. The situation is a cooperative equilibrium, meaning that the contribution is the best response for each, given that enough others are likewise contributing. As long as the situation doesn’t change, the party will last.

  But now imagine a shock to the system: new people arrive. These people are hip, so hip, in fact, that they don’t feel they have to make a contribution. Their very presence, they assume, is contribution enough. Since the hipsters don’t contribute, other partygoers feel as if they shouldn’t have to contribute either. As the party wears on, fewer and fewer people contribute money for beer until the cooperative party is over. Everyone is worse off as the beerless, noncontributive equilibrium takes over.
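
  A toy simulation can illustrate the collapse. The numbers below are invented purely for illustration: one hundred ordinary guests who each contribute so long as at least 70 percent of the whole party contributed on the last beer run, plus some number of hipsters who never contribute.

```python
# Toy party model with assumed numbers: ordinary guests contribute only if
# enough of the whole party contributed last round; hipsters never do.
def run_party(n_guests=100, n_hipsters=0, threshold=0.70, rounds=4):
    contributing = n_guests          # every ordinary guest chips in at first
    total = n_guests + n_hipsters
    for t in range(rounds):
        share = contributing / total
        contributing = n_guests if share >= threshold else 0
        print(f"run {t}: {share:.0%} contributed -> {contributing} chip in next")

run_party(n_hipsters=0)   # stays at 100%: the cooperative equilibrium holds
run_party(n_hipsters=50)  # 100/150 = 67% < 70%: contributions collapse to zero
```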

  We can model asshole capitalist decline in more or less the same way. As above, people start out willing to fully cooperate in the institutions and practices needed for capitalism to fulfill its social promises. This is a cooperative equilibrium. People will continue to cooperate as long as others are doing likewise. We then imagine a shock, a shift to an entitlement culture, which adds assholes to the system. The entitlement culture introduces incentives for assholery, by sending a message that gives moral license to reaping more of the benefits of cooperation and incurring less of its costs. When assholes become sufficiently numerous, cooperative people become unwilling to continue upholding supportive practices and institutions, leaving capitalism increasingly unable to deliver the goods. In time, this noncooperative relationship becomes the new equilibrium situation, to everyone’s detriment.

  The question for the formal theory of games then is how to formalize that idea. What is needed is a careful characterization of the preferences of the different players. Carvalho’s suggestion is that one can do that in the following way. Suppose that a person’s payoff from cooperating is:

  xp – c

  where p is the proportion of people he or she expects to cooperate, c is the cost of cooperating, and x is a positive constant. The positive constant places weight upon the proportion of cooperators in order to reflect the value that each player assigns to the cooperation of others.1 Now normalize the payoff from not cooperating to zero. The person cooperates when:

  xp – c > 0

  or equivalently when:

  p > c/x

  Further, we can conceive of a sense of duty to cooperate as a benefit or negative cost c < 0. (The same goes for the preference for a party with plenty of beer. Even if a person doesn’t want any beer that hour, she still may feel a duty to pitch in.) Each player/partygoer sees moral reason to cooperate for its own sake. In this model, cooperation is a “dominant” strategy (i.e., each player is strictly better off following that strategy than following the other strategy). An agent cooperates regardless of how many others cooperate—that is, for all p. If there’s no cost to cooperation, because c < 0 for all agents, then universal cooperation is the only (Nash) equilibrium (i.e., cooperation is the best response, given that others are cooperating).
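
  In code, the decision rule just given is a one-liner; the sample values of p, c, and x below are my own, chosen only for illustration.

```python
# Cooperate exactly when xp - c > 0, i.e., when p > c/x.
def cooperates(p, c, x):
    """p: expected proportion of cooperators; c: cost of cooperating; x > 0."""
    return x * p - c > 0

print(cooperates(p=0.5, c=2.0, x=10.0))   # True:  0.5 > 2/10
print(cooperates(p=0.1, c=2.0, x=10.0))   # False: 0.1 < 2/10
print(cooperates(p=0.0, c=-1.0, x=10.0))  # True:  with a sense of duty (c < 0),
                                          # cooperating pays for every p
```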

  This system is stable as long as it is not interrupted. But now we imagine a shock, in which noncooperators—assholes (or hip people)—are introduced. The players’ experience of what the assholes are choosing crowds out their moral motivations. The sense of duty is replaced by a sense of entitlement to do less than what cooperation requires. Thus c, the cost of cooperation, starts to rise. When expectations are “anchored” in or informed by the earlier cooperative equilibrium, cooperation will still be maintained for some time. But once the costs are high enough—say, when c exceeds x/2—the equilibrium in which no agent cooperates becomes risk dominant. As evolutionary models would put the point, “mutants” with uncooperative strategies will be able to invade the population and drive society toward an uncooperative equilibrium.
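
  The x/2 threshold comes from the standard test for risk dominance: compare the two strategies against an opponent thought equally likely to cooperate or not, that is, at p = 1/2. The sketch below applies that test; the particular values of c and x are assumptions.

```python
# Risk-dominance test at p = 0.5: cooperating yields x/2 - c in expectation,
# while not cooperating is normalized to 0. So noncooperation becomes risk
# dominant exactly when c exceeds x/2.
def risk_dominant(c, x):
    return "cooperate" if x * 0.5 - c > 0 else "defect"

print(risk_dominant(c=4.0, x=10.0))  # cooperate: c < x/2 = 5
print(risk_dominant(c=6.0, x=10.0))  # defect:    c > x/2 = 5
```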

  That is the main idea. We might also offer two further comments that suggest why this is of interest. First, notice that the situation is not the traditional “free-rider problem,” in which noncooperators hobble cooperation by taking its benefits without bearing its costs, out of amoral, optimizing self-interest. We have assumed that the preferences of cooperators and noncooperators alike are moralized. The asshole is motivated by his sense of entitlement. So the problem of asshole capitalist decline isn’t a problem of amoral selfishness; it is a question of moral values.

  Second, note that decline to noncooperation is not irreversible, at least in principle. While cooperation will decay for some time, there is room for hope that cooperation will return. It could be that the cost of cooperation falls, say, because the overall benefit of cooperation increases. In that case we’d expect that a cooperative cycle would eventually recommence, shifting us back to a cooperative equilibrium.

  This is reason for hope, but also reason for eternal cooperative vigilance. For if we assume that “mutant” assholes also enter at later stages, there could be cycling between cooperative and noncooperative equilibriums, with periodic influence by assholes, in the mathematical limit. The work of asshole management, in short, is never finished.
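
  One way to see both the collapse and the possible recovery is a crude best-response dynamic. This is entirely my own construction, not part of Carvalho’s model: everyone responds to last period’s cooperation level while c first rises under the entitlement culture and later falls as a sense of duty returns.

```python
# Myopic best-response dynamic (illustrative construction, not the book's
# model): full cooperation persists while x*p - c > 0 at p = 1, collapses
# once c is pushed past x, and restarts only when duty makes c negative.
x = 10.0
p = 1.0  # start in the cooperative equilibrium
for c in [1.0, 6.0, 12.0, 12.0, 6.0, -1.0, 1.0]:
    p = 1.0 if x * p - c > 0 else 0.0
    print(f"c = {c:>5}: cooperation at {p:.0%}")
# Note the hysteresis: at c = 6.0 on the way back down, cooperation stays at
# 0%, because no one cooperates when no one else is expected to.
```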

  * * *

  1. We thus adjust the value of x according to how we think the players will value the cooperation of others in the kind of situation in question. So if we want the cooperation of others to play a big role in the decision whether to cooperate, we let x equal something very large. Likewise, if we want the proportion of cooperators to count for less in the decision, i.e., if we want c to be more important than p, then x would have a smaller value.

  Many plausible scenarios will balance these values. Consider what happens if we dramatically reduce the value of x, so that each player is relatively unaffected by the cooperation of others. In that kind of case, even a small increase in costs can mean that people won’t cooperate. Or, more concretely, suppose c = 1, x = 1, and p = 80 percent. Then xp – c = .80 – 1.00 = –.20. Since this is less than zero, no one will cooperate. But this seems implausible in many cases, as when costs are in any case low and tons of people are cooperating. In that situation, people are often willing to cooperate as well. We better represent that situation, then, by instead, say, letting x = 10. Then xp – c = 8.00 – 1.00 = 7.00, which means that cooperation has an attractive payoff.
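
  This arithmetic is easy to check directly:

```python
c, p = 1.0, 0.80
print(1.0 * p - c)   # x = 1:  -0.20, below zero, so no one cooperates
print(10.0 * p - c)  # x = 10:  7.00, an attractive payoff
```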

  ABOUT THE AUTHOR

  AARON JAMES holds a PhD from Harvard and is associate professor of philosophy at the University of California, Irvine. He is the author of Fairness in Practice: A Social Contract for a Global Economy (New York: Oxford University Press, 2012) and numerous academic articles. He was awarded a Burkhardt fellowship from the American Council of Learned Societies, and spent the 2009–10 academic year at the Center for Advanced Study in the Behavioral Sciences at Stanford University. He’s an avid surfer (the experience of which has directly inspired his theory of the asshole) … and he’s not an asshole.

  Also by Aaron James

  Fairness in Practice: A Social Contract for a Global Economy
