We still do not have the Best Ethical Theory. But we should live our lives to the fullest, knowing that common sense morality has a deep connection to what is truly right.
Appendix A—Some optional mathematics and remarks on a few metrics
Mathematically, we can formalize our concern for the future in a few ways. The simplest method postulates a strict zero discounting of utility. In mathematical terms, it looks like this:
(1) SW = Σₜ U(aₜ)
An ethics based on that equation would mean basic neutrality for utilities across time. No person’s well-being would count for less simply because of its temporal distance.1
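For readers who prefer to see the aggregation spelled out, here is a minimal sketch in Python of how equation (1) treats every period identically, alongside the same stream aggregated with a conventional positive discount rate. The utility numbers and the five percent rate are invented for illustration only.

```python
# A minimal sketch of equation (1). The utility numbers are invented;
# think of each entry as one period's Wealth Plus.
utilities = [10, 10, 12, 14, 14, 15, 16, 18]

# Zero discounting: every period's utility counts equally.
sw_zero_discount = sum(utilities)

# A conventional alternative: exponential discounting at rate r, which
# shrinks the weight placed on later periods.
r = 0.05
sw_discounted = sum(u / (1 + r) ** t for t, u in enumerate(utilities))

print(f"Zero-discount social welfare: {sw_zero_discount}")        # 109
print(f"Discounted (r = 5%) social welfare: {sw_discounted:.2f}")
```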
Another approach builds on the “Golden Rule” in the economic theory of capital. The Golden Rule tells us to choose the highest sustainable path of utility or consumption over time. Mathematically, it looks like this:
(2) Max: lim (t → ∞) U(aₜ)
That is, we should maximize a steady state value—in this case, Wealth Plus—over an indefinite time horizon.2
I sympathize with both zero discounting of well-being and the Golden Rule, and under a variety of technical conditions the two principles will imply the same choices across all comparisons.3 But each has its drawbacks. An explicitly zero rate of discount would require that a future interest always receives equal consideration to a current interest. Whether or not this conclusion fits our moral intuitions, it sounds like a very strong claim, especially with that word “always.” As for the Golden Rule, the properties of the mathematics make it hard to generalize beyond the infinite horizon case. There is, however, another approach to comparing current and future values.
As a modification of these approaches, the overtaking criterion is a more modest rule of thumb for present/future trade-offs. We should always be willing to give up a discrete benefit today if in return we can create a sufficiently long string of well-being increases for the future. Yet the criterion does not specify any single numerical or universally applicable discount rate. We can write the following:
The Overtaking Criterion: A sequence g∞ = (g₁, g₂, …) is preferred to a sequence h∞ = (h₁, h₂, …) if there is some time T such that g₁ + g₂ + … + gₙ > h₁ + h₂ + … + hₙ for every n ≥ T.
The math may look intimidating, but it simply means that we prefer one sequence of values to another if one sequence, at some point in time, remains systematically higher than the other. For instance, compare the two sequences:
(a) 3, 3, 4, 4, 5, 6, 6, 6, 6, 6, 6, 6…
(b) 4, 4, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5…
By design, sequence (a) will continue with sixes, and sequence (b) will continue with fives. Using the overtaking criterion, (a) is better than (b), even though (b) brings higher well-being in the first few periods. Again, I am thinking of these numbers as corresponding to what I call Wealth Plus.
Similarly, the overtaking criterion will prefer (c) to (d) for:
(c) 3, 3, 7, 8, 9, 10, 11, 12, 13, 14…
(d) 4, 4, 6, 7, 8, 9, 10, 11, 12, 13…
As written, this assumes that each element in each sequence continues to rise by one.
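For readers who want to verify these comparisons, here is a minimal sketch in Python using the partial-sum reading of the criterion given above. The continuation rules for sequences (a) and (b) are hard-coded as the text describes, and the 100-period horizon is an arbitrary choice of mine.

```python
from itertools import accumulate

def overtaking_period(seq_x, seq_y):
    """Return the first period after which the partial sums of seq_x stay
    strictly above those of seq_y, or None if that never happens within
    the horizon supplied."""
    sums_x = list(accumulate(seq_x))
    sums_y = list(accumulate(seq_y))
    for start in range(len(sums_x)):
        if all(sx > sy for sx, sy in zip(sums_x[start:], sums_y[start:])):
            return start + 1  # periods are numbered from 1
    return None

# Sequences (a) and (b), extended to a 100-period horizon as described:
# (a) continues with sixes, (b) continues with fives.
a = [3, 3, 4, 4, 5] + [6] * 95
b = [4, 4] + [5] * 98

print(overtaking_period(a, b))  # 10: from period 10 onward, (a) stays ahead
```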
I do not require the superimposition of an infinite time horizon for the overtaking criterion to make sense. Instead, we can prefer (a) to (b) and (c) to (d) if the relevant time horizon is long enough for one sequence to be obviously welfare-dominating over the other. Quite simply, if (a) beats (b) for a few hundred years, I am happy to sign off on preferring (a). We can also add Pareto principles to the overtaking criterion so that if one sequence has some unambiguously higher values along the way, it is better even if it does not overtake the other for all later periods of time, but rather remains equal to it.4
Low rates of discount, as expressed through this variety of mathematical assumptions, might appear to give the future too much weight relative to the present. But there are deep empirical reasons why the present will not fade into irrelevance in our decisions. Economic growth is a cumulative process with causal relationships between the variables at the beginning of growth and at the later stages of growth. This means that—usually—the best way to satisfy the overtaking criterion is to put the present in a good position to build for the future. Put another way, economic growth requires that we invest in healthy institutions, which means doing some good things for the here and now, too.5
Cyclic theories of society, as found in some of the classic thinkers, such as Montesquieu and Vico, would make it difficult to apply an overtaking criterion in a useful way. Assume, for instance, that successful societies become infected by hubris and then self-destruct as a consequence of their earlier success. Furthermore, it might be the fallen societies which rise from the ashes and go on to achieve greater glories. In this case, a higher sequence in one period would mean, on average, systematically lower sequences in later future periods. Systematic overtaking would never hold. That said, while there may be some degree of catch-up at play, there is not much systematic evidence that today’s losers become tomorrow’s winners. If you’re trying to predict absolute levels of future prosperity, you’re generally well advised to bet on the previous winners, for reasons discussed in chapter three.6
Comparison with gamma discounting
There is another way to derive relatively low discount factors, and that is to consider models in which the appropriate discount rate is uncertain, so that the effective discount rate declines as the time horizon lengthens.7 The way that expected value calculations work, mathematically, is that the lowest possible discount rate makes a relatively high contribution to the final assessment of a distant outcome. This stems from the literature on what is known as gamma discounting, and it leads to results broadly consistent with a deep concern for the distant future.
Here is a simple way to think about the logic: say there is an equal chance that the relevant interest rate will be either one percent or five percent. The value of one dollar in one hundred years is 36.9 cents at a one percent discount rate but only 0.76 cents at a discount rate of five percent. Because discounting is convex, the average or expected value discount rate is below two percent: the expected payoff is (36.9 + 0.76)/2, or about 18.8 cents, and roughly 1.7 percent is the single rate that discounts one dollar over one hundred years to that amount. It would be wrong to average the discount rates themselves, for instance by taking 1 + 5 = 6 and then dividing by two to get an average of three percent. The averaging is applied to the payoffs, and that gives the lower discount rates a greater influence over the expected values overall.8
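A minimal sketch of that arithmetic, in Python, using the fifty-fifty example from the preceding paragraph:

```python
# Two equally likely discount rates, as in the example above.
rates = [0.01, 0.05]
years = 100

# Discount one dollar under each rate, then average the *payoffs*.
payoffs = [1 / (1 + r) ** years for r in rates]        # ~0.369 and ~0.0076
expected_payoff = sum(payoffs) / len(payoffs)          # ~0.188

# The single (certainty-equivalent) rate that yields that expected payoff.
implied_rate = expected_payoff ** (-1 / years) - 1     # ~0.017, i.e. ~1.7%

# Averaging the rates themselves would wrongly suggest three percent.
naive_rate = sum(rates) / len(rates)

print(f"Expected payoff: {expected_payoff:.3f}")
print(f"Certainty-equivalent rate: {implied_rate:.2%}")
print(f"Naive average of the rates: {naive_rate:.0%}")
```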
So the lower discount rates have greater weight for determining the assessment of distant future values. This does not exactly mirror the overtaking criterion, but it does increase its plausibility by favoring a relatively low discount rate.9
Our actual behavior sometimes reflects a version of gamma discounting. On questionnaires, for instance, many people will suggest that the present is more important than forty years hence, but that eventually further distances of time should cease to matter very much. In this view, what happens four hundred years from now is not much less important than what happens three hundred years from now, even though the application of strict exponential discounting would suggest otherwise.10
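To see how large that gap is under strict exponential discounting, and how much it narrows once the discount rate is treated as uncertain, here is a small sketch with illustrative rates of my own choosing (three percent for the exponential case, and an even chance of one or five percent for the uncertain case):

```python
def discount_factor(rate, years):
    """Present value of one unit of well-being delivered `years` from now."""
    return 1 / (1 + rate) ** years

def expected_factor(rates, years):
    """Average the discounted payoffs over equally likely rates (gamma-style)."""
    return sum(discount_factor(r, years) for r in rates) / len(rates)

# Strict exponential discounting at 3%: year 400 counts for only ~5% of year 300.
print(discount_factor(0.03, 400) / discount_factor(0.03, 300))                   # ~0.05

# With an uncertain rate (1% or 5%, equally likely), the low rate dominates
# at long horizons, so year 400 counts for ~37% of year 300.
print(expected_factor([0.01, 0.05], 400) / expected_factor([0.01, 0.05], 300))   # ~0.37
```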
My arguments are consistent with the spirit of gamma discounting, but I have chosen a slightly different direction, as discussed above. Gamma discounting still implies that we can choose a single correct (non-zero) number or set of numbers for the long-term rate of discount. If such rates are positive, we still face the possibility that a single life today will be worth more than continued world survival, provided we choose a long enough time horizon for the comparison. For most practical issues, such problems are unlikely to arise, but I find this conclusion morally counterintuitive nonetheless. Alternatively, we might choose a zero rate for the latter years of our gamma discounting comparisons, in which case we will move closer to the overtaking criterion.
1. Cowen (1992) tries to axiomatize this approach.
2. For early presentations of the Golden Rule, see Ramsey (1928), Phelps (1961), and Meade (1962). The “Green” Golden Rule pays closer attention to resource exhaustibility but embodies the same basic principles; see Beltratti, Chichilnisky, and Heal (1993) and Heal (1998).
3. Heal (1998, 110–111) discusses these conditions. For a version of an overtaking axiom, but more general and without the same continuity requirements, see Basu and Mitra (2007); see also Banerjee (2006) for an analysis of related ideas. For a look at related philosophical issues, see Vallentyne (1993, 1995) and Vallentyne and Kagan (1997).
4. Chichilnisky (1997) suggests a modified version of the overtaking criterion that does not force the present time period to count for nothing. We could entertain such an alternative if in fact we did face an infinite time horizon and a “dictatorship of the future” problem. Note that with an infinite horizon, the Overtaking Criterion will fail to satisfy certain axioms of intergenerational equity, such as anonymity or indifference across labeling decisions. On the difficulties of satisfying all reasonable axioms in an infinite horizon setting, see for instance Fleurbaey and Michel (2003) and Sakai (2003). Asheim, Buchholz, and Tungodden (2001) respond to some charges of this kind. On the postulates of stationarity and independence, see Koopmans (1960). Bostrom (2011) considers some philosophical and mathematical issues within an infinite horizon framework.
5. On the fundamentally cooperative nature of the intergenerational problem, see Heath (2013). For an interesting look at sustainable generational exchange over time, see Rangel (2000).
6. On the convergence issue, see for instance Pritchett (1997) and also Comin, Easterly, and Gong (2010).
7. See Weitzman (1998, 2012), and also Farmer and Geanakoplos (2009) and Arrow et al. (2014).
8. See Posner (2004, 153–154) and also Weitzman (2001, 260).
9. See Weitzman (2001, 260).
10. See for instance Price (1993) and Weitzman (2001) for versions of this view. Leahy (2000) argues that standard techniques measure how much a current agent cares about his future self, when we could just as well ask how much a future self would care about the current self. Adjusting for this discrepancy could also cause us to choose lower discount rates and hold greater concern for the distant future.
Appendix B—Animal welfare and Derek Parfit’s repugnant conclusion
Before closing, I’d like to offer a few remarks on the problem that initially drew my attention to rational choice ethics, namely Derek Parfit’s repugnant conclusion. I first read Parfit’s work in 1984, and I’ve been thinking about it ever since. I haven’t solved or refuted his repugnant conclusion, but given the framework developed here, I’ll sum up my thinking and explain why the repugnant conclusion still represents a hole in some of these arguments. By the way, if you don’t like reasoning from “absurd” moral counterfactuals, you can stop reading right now.
Parfit’s repugnant conclusion compares two population scenarios. The first outcome has a very large, very fulfilled, and very happy population. The world also has many ideal goods, such as beauty, virtue, and justice. The second outcome has a much larger population but few, if any, ideal goods. Parfit asks us to conceive of a world of “Muzak and potatoes.” Nonetheless, the lives in this scenario are still worth living, although perhaps by only the tiniest of margins. Parfit points out that if the population of the second scenario is large enough, the second scenario could welfare-dominate the first. No matter how good we make the first world, quantity can weigh in on the side of the second.1
Few people would regard the second scenario—a world of Muzak and potatoes—as better than the first. Yet it is surprisingly difficult to find a welfare algorithm that avoids the endorsement of sheer numbers per se. Many of the attempts to cut off an endorsement of the second scenario fall prey to further philosophic counterexamples.2
Parfit’s statement of the repugnant conclusion reads as follows: “The Repugnant Conclusion. For any possible population of at least ten billion people, all with a very high quality of life, there must be some much larger imaginable population, whose existence, if other things were equal, would be better, even though its members have lives that are barely worth living.”3
I’m not going to work through all of the different attempts to solve (or state) this conundrum, as entire books have been devoted to that purpose. For my purposes, it suffices to note that most people refuse to endorse the repugnant conclusion because they feel that very small utilities, no matter how large in number, ought not to add up to something with great moral significance. That is, they downgrade the very small utilities that comprise the lives of Muzak and potatoes.
I’ve focused on Wealth Plus as a foundational concept for a Crusonia plant around which we can build a moral theory. I’ve found, I think, one such Crusonia plant, but I haven’t offered any argument that this is our only Crusonia plant. I’ve picked a Crusonia plant from the lives of human beings as we know them, namely lives full of ideal goods whose richness extends well beyond the consumption of Muzak and potatoes. Parfit’s repugnant conclusion can be read as depicting the results of another Crusonia plant consisting of a very, very long sequence of “Muzak and potatoes” lives. It is easy enough to flesh out Parfit’s scenario and have these Muzak and potatoes lives be numerous because they are self-reproducing and go on for a very long time.
When I think about the repugnant conclusion in a very literal fashion, I’m not sure those Muzak and potatoes lives are well described by comparing them to human lives as we know them. Even in the world’s poorest countries, people have rich human relations and very moving and beautiful cultures. Maybe those Muzak and potatoes lives are better described by a comparison with (some) non-human animals. That is, there are animals whose lives are worth living, even though they don’t have most of the goods we associate with flourishing and more complex human lives. Why should we choose flourishing and more complex human lives over a larger number of slightly happy animal lives? Might we not consider voluntarily extinguishing the human race to make more room for a greater number of non-human animals?
In other words, once we consider non-human lives, there are multiple competing Crusonia plants. I do not and cannot, within this framework, give you any reason for choosing the Crusonia plant of human lives as we know them rather than a Crusonia plant drawn from other realms of nature. I can only say that, for practical purposes, we have to work with the Crusonia plant before us, which is very much centered on the rich, plural vision for human lives as we know them in their excellence.
For similar reasons, the arguments of this book do not and cannot resolve long-standing disputes over animal welfare and animal rights. I do personally have considerable sympathy for the view that we should treat non-human animals better than we do now. But the rights of animals, at least non-domesticated animals, belong to a different Crusonia plant from the one we are considering. Nothing within my framework, as presented here, will resolve those debates.
You may recall that some frameworks, such as contractarianism, imply that animal welfare issues stand outside of the mainstream discourse of ethics because we have not and cannot form agreements, even hypothetical ones, with (non-domesticable) animals. There is no bargaining with a starling, even though starlings appear to be much smarter than we once thought. My reasons for excluding animals from the argument are different, and have little to do with consent or hypothetical consent.
One implication of this argument is that we will not have an easy way of circumventing aggregation problems when it comes to animal welfare issues because there is—domesticable animals aside—no common Crusonia plant. This also explains why some of the more convincing treatments of animal welfare are not those of the utilitarians but rather of the Christian commentators on animal welfare, such as Matthew Scully and his excellent book Dominion: The Power of Man, the Suffering of Animals, and the Call to Mercy. For Christian thinkers, but not utilitarians, it is natural to see animal welfare as falling under a separate dominion, but still deserving of our mercy, which comes relatively close to the position I am outlining in this book.
So Parfit’s conundrum is likely insoluble in some central ways, just as we do not have a comprehensive moral theory for weighing the interests of humans and bats, or humans and alien beings from another planet. Moral judgments occur within some kind of cone of value, and we must look for cones which allow aggregation problems to be overcome. Such cones are by no means available for every problem we might face, lifeboat situations included.
Perhaps these remarks will disappoint those who expect a moral theory to resolve Parfit’s dilemmas. But we can take comfort in having classified them into a broader and better-known—though still insoluble—set of dilemmas. Still, within the moral cones we do have, and using those Crusonia plants we do understand, I say full steam ahead.
1. See Parfit (1984, 1986).
2. Cowen (1996) surveys some of the different options, such as capping the importance of numbers or capping the importance of total utility. See also Cowen (2005).
3. See Parfit (1984, 387).
References
Acemoglu, Daron, Simon Johnson, and James A. Robinson. 2001. “The Colonial Origins of Comparative Development: An Empirical Investigation.” American Economic Review 91, no. 5: 1369–1401.