by Tyler Cowen
You would probably agree that it’s a good idea to teach teenage drivers not to plow through the yellow light. After all, about forty thousand people die in auto accidents each year in the United States alone. But today, when a driver stops at a yellow light rather than accelerating, he likely affects the length of other people’s commutes and thus changes the timing of millions of future conceptions. Subsequent genetic identities will change as well. Come the next generation, these different identities lead to different marriage patterns and thus an entirely new set of individuals in the future. So how can we really tell if our yellow-light rule is a good one? Aren’t we operating in the dark? If you think about these conundrums for long enough, you’ll start to wonder how we can ever judge good consequences at all.
Once you start worrying about the epistemic problem, you may fear the onset of an extreme moral nervousness. Virtually every action would appear to have enormous consequences for our future. You might fear that maybe, just maybe, you had set in motion the painful deaths of millions the last time you sped through a yellow light; but cheer up, you might have saved millions of others as well. All of those lives rested upon your decision. At any moment, most of us might be doing something that will lead to truly wonderful results, truly terrible results, or, most likely, a mix of both. It seems paralyzing. If you were to internalize that way of thinking, all of life would feel like walking around on eggshells, except that the eggshells are geopolitical changes that might cause millions or even billions of future human lives to be saved or lost.
You may be thinking that this argument is just plain, flat-out stupid. But be patient. I’m not here to defend nihilism or suggest that ethics should focus on the paradoxes of time travel and the timing of conception of the future Hitler. I’d like to defend a version of common sense morality—but first, I do want to look a little more closely at why the epistemic critique does not imply that we should feel hopeless about our efforts to make the world a better place. Once we have that understanding, we’ll see that some versions of common sense morality work better than others, and we will move closer to those more sensible forms of common sense morality. That will have some concrete implications. We’ll also explore new arguments for some of the positions I’ve already staked out. In the meantime, I’m simply suggesting that we should take seriously the problems with the dogmatic assertion that we can ever absolutely know we are doing the world good.
These arguments also intersect with the emphasis of this book on a deep concern for the distant future. If the correct social rate of discount were sufficiently high, uncertainty about the distant future wouldn’t matter so much because, given the logic of discounting, most future consequences would cease to matter within a few decades. At low rates of discount, however, the spinning out of the future consequences of current acts is an exercise that can go on and on and on. We cannot dismiss the importance of the future simply because it is distant from us in time, and therefore we need to worry about epistemic problems all the more.
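To make the discounting point concrete, here is a minimal worked example; the rate and horizon are illustrative numbers of my own choosing, not figures from the text. Under standard exponential discounting, a benefit of value V arriving t years from now has a present value of V divided by (1 + r) raised to the t.

```latex
% Present value under exponential discounting (illustrative numbers):
\[
  PV \;=\; \frac{V}{(1+r)^{t}}
\]
% At a high discount rate, r = 5\%, a benefit 100 years out shrinks to
\[
  \frac{V}{(1.05)^{100}} \;\approx\; \frac{V}{131.5} \;\approx\; 0.008\,V,
\]
% less than one percent of its face value, while at a low rate, r = 0.1\%,
\[
  \frac{V}{(1.001)^{100}} \;\approx\; \frac{V}{1.105} \;\approx\; 0.905\,V,
\]
% it keeps roughly ninety percent of its weight. Low discount rates are
% what keep the distant future, and hence the epistemic problem, in play.
```

Nothing hangs on these particular numbers; any sufficiently low rate of discount leaves distant consequences with substantial weight, which is why the epistemic worry cannot simply be discounted away.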
So, to proceed, I’m going to step back and consider whether epistemic problems upset the entire consequentialist framework. At the same time, I’m going to revisit some questions about Crusonia plants from earlier chapters. Might the epistemic problem and the importance of Crusonia plants have some underlying connection? I’ll be returning to that question once I work through some examples of the radical uncertainty of the future.
Finding cases where consequences clearly should matter
I will focus on the example of a terrorist who brings deadly instruments of biological warfare to the United States. The pathogens are so deadly that, if released, they would kill one million people. Of course we should try to stop the terrorist, or at least reduce his impact or probability of success, because this will save lives and also help protect the United States more generally. This should not be a controversial stance, regardless of what the correct detailed ethical and meta-ethical views might be.
Yet trying to stop the terrorist does not commit us to a very clear vision of how, exactly, an effective defense would work out in the longer run. To return to the logic explained above, stopping the terrorist will influence the broader physical world, and thus reshuffle the genetic identities behind many subsequent conceptions. That could bring about the birth of a future Genghis Khan or some worse tyrant yet, armed with more powerful weapons than Khan ever had. Even trying to stop the terrorist, with no guarantee of success, will reshape the future in similar fashion. Still, there is at least a slight chance (and maybe even a very definite chance) that stopping the attack will favor good consequences, even in the longest of runs. To put it simply, it is difficult to see a major biowarfare attack as favoring the long-term prospects of civilization on net, in expected value terms.
To be sure, there are possible scenarios in which the attack works out for the better. For instance, it might lead to a broader ban of biological weapons and thus avert a greater catastrophe in the future. Maybe so. But no rational human being would breathe a sigh of relief upon hearing that the attack succeeded. We would not, in that moment, think of the world as being on a path to salvation. More likely we would fear subsequent terror attacks of a similar nature, which might produce a new and very real gateway toward global chaos and tyranny, with adverse consequences for decades or maybe even centuries to come.
In other words, there will be certain events and consequences so significant that we will be spurred to action without much epistemic reluctance, even though we might recognize the broader uncertainties of such action in the very long run. Surely in some instances the upfront benefit of an action must be large enough to persuade us to pursue it.2
We can therefore avoid complete paralysis or sheer and absolute agnosticism, at least for some of our choices. No matter how high the uncertainty surrounding long-term consequences, we can take some actions to favor good consequences in the short run. It is only necessary that those short-run good consequences are of sufficiently large and obvious value.
The epistemic critique does not focus on the pursuit of large, upfront benefits. Instead, the articles in this philosophic literature often specify very small, “squirrely” benefits. There’s a reason for this, namely that those cases lend the epistemic critique greater weight. So let’s look at these arguments in more detail.
James Lenman, a philosopher and a commentator in this literature, doubts the importance of consequences as a measure of right and wrong. Lenman’s arguments are interesting, but I think that, properly understood, they strengthen the case for rules-based, big-picture thinking about consequences. Let’s first go through Lenman’s arguments, and then we’ll return to what the whole mess might mean.3
Lenman presents a D-Day example in which we must choose which French beach to invade to defeat the Nazis. This is, of course, an important decision, with significant consequences for the outcome of the war. In the example, we must choose between two candidate beaches for the invasion, yet there is no strong military reason to favor one beach over the other. One of the two choices is likely to end up being the superior decision; we just don’t know which. There is, however, a complicating factor: if we land at the northern beach, a dog will break one of its legs and suffer some pain, possibly as a result of the military action. (How about a slight sprain for the poor dog instead?) If we land at the southern beach, no canine injury occurs.
Although most plausible moral theories attach some weight to the suffering of animals, it seems that the fate of the dog’s leg is not a strong reason to favor the southern beach over the northern beach, and maybe it isn’t a reason at all. The matter of the dog’s leg and the associated pain just seems tiny compared to what is at stake: the outcome of World War II. And so, even ex ante, we should not elevate the matter of the dog’s leg into any kind of deciding position, according to Lenman, because the dog’s leg will prove negligible in the final analysis. That whole argument makes some sense to me. But then Lenman concludes that we should not be so keen to judge actions in terms of their consequences at all. As you can see, this argument is one version of the epistemic critique.
The hard-line response, of course, dismisses Lenman’s intuition rather than responding to it. We can imagine the extreme consequentialist crying out, “Save the dog from a broken leg, grab the gain we can see, and damn the uncertainty! The potential variance of outcomes from the invasion decision is high in any case!”
But that’s not my answer. I’m willing to accept that there is something to Lenman’s basic point. My reply is this: “Stop the terrorist with his biowarfare; about the dog’s leg I couldn’t say. Maybe Lenman is right and this D-Day case is up for debate.” We are then left with the view that consequentialism is strongest when we pursue values that are high in absolute importance. You can debate where to draw the line between the biowarfare attack and the dog’s leg, but once a distinction is made between cases which differ in terms of the size of the upfront costs, we have something to work with.
The use of a dog’s broken leg as the relevant cost is designed to be murky. There is real merit in animal welfare arguments, but in a lot of comparisons we just don’t know how much power to give them. We don’t, for instance, know how to weight the welfare of dogs against the welfare of humans, so it is relatively easy for the epistemic critique to boost such uncertainty and cause us to doubt whether consequentialism is ever applicable. Focusing on the dog’s leg, a relatively small and also potentially ambiguous value, gives the epistemic critique the appearance of more power than it merits.
For the sake of contrast, let’s consider another invasion scenario. In this case, choosing the northern beach will result in the deaths of five hundred innocent children and choosing the southern beach won’t result in any harm to civilians at all. In this case the choice is easy, because five hundred innocent lives have a consequentialist power and clarity that the dog’s leg does not, even though our choice will still set off a chain of uncertain longer-run effects. There is no good reason not to choose saving five hundred innocent lives upfront, given that the initial benefit is sufficiently large.
Now let’s look at some intermediate cases and see what happens. We’ll find further support for the notion that a modified form of consequentialism that focuses on large benefits and costs does fine when faced with the epistemic critique.
The epistemic critique may indeed be drawing on a different moral principle altogether, a principle that pops up frequently in pluralistic approaches. Let us consider what I call the Principle of Roughness:
The Principle of Roughness: Outcomes can differ in complex ways. We might make a reasoned judgment that they are roughly equal in value and we should be roughly indifferent to them. After making a small improvement to one of these outcomes, we still might not be sure which is better.
We often resort to some version of the Principle of Roughness in matters of beauty and aesthetics. Try to figure out whether Rembrandt or Velazquez was the better painter. You might judge the two as being of roughly the same quality and import, or at least decide that neither should be placed above the other. If we then discover one new sketch by Rembrandt, we don’t suddenly have to conclude that he was in fact the better artist. We still can hold the two to be roughly equal in quality and importance.4
The Principle of Roughness may apply to many judgments of goodness. Imagine, for example, that you are comparing a new vaccination program to a program that would improve the quality of antibiotics for one group of children. These policies may be broadly equivalent in value, at least if their potential impacts are sufficiently similar in magnitude. This judgment of rough equality would again survive the realization that one of the two policies was slightly better than previously thought or would cost slightly less than anticipated.
The Principle of Roughness, when it applies, implies that we should not discriminate on the basis of relatively small benefits and losses. The future changes at stake—the rest of human history being up for grabs—seem so large that relatively small changes in upfront benefits and costs, such as the dog’s broken leg, do not move the initial comparison out of the category of the unclear and the blurry.
In the comparison of Rembrandt vs. Velazquez, small changes, such as finding another unpublished sketch, are overwhelmed by the high absolute totals of creativity. As for the D-Day comparison, the small change—the dog’s leg—is swamped by uncertainty about consequences. In other words, the epistemic critique extends one version of the Principle of Roughness to comparisons involving uncertainty. Still, consequentialism is left standing, at least provided we are pursuing large upfront benefits, such as saving five hundred innocent lives.
Or look at it this way: anything we try to do is floating in a sea of long-run radical uncertainty, so to speak. Only big, important upfront goals will, in reflective equilibrium, stand above the ever-present froth and allow the comparison to be more than a very rough one. Putting too many small goals at stake simply means that our moral intuitions will end up confused, which is in fact the correct and intuitive conclusion. If there is any victim of the epistemic critique, it is the focus on small benefits and costs, but not consequentialism more generally. If we bundle appropriately and “think big” and pursue Crusonia plants, our moral intuitions will rise above the froth of long-run variance.
Purveyors of the epistemic critique might suggest that consequences should not matter very much, at least not compared to deontology or virtue ethics, given how hard they are to predict. But the better conclusion is that the froth of uncertainty should induce us to elevate the import of large benefits relative to small benefits, so as to overcome the Principle of Roughness. In other words, yet another aspect of moral theory is directing our attention toward the pursuit of Crusonia plants.
What are the practical implications of these arguments?
The arguments above have (at least) two practical implications for what we should believe, how firmly we should believe it, and how we should act. I will consider agnosticism and individual rights in turn.
How to be a good agnostic
We should be skeptical of ideologues who claim to know all of the relevant paths to making ours a better world. How can we be sure that a favored ideology will in fact bring about good consequences? Given the radical uncertainty of the more distant future, we can’t know how to achieve preferred goals with any kind of certainty over longer time horizons. Our attachment to particular means should therefore be highly tentative, highly uncertain, and radically contingent.
Our specific policy views, though we may rationally believe them to be the best available, will stand only a slight chance of being correct. They ought to stand the highest chance of being correct of all available views, but this chance will not be very high in absolute terms. Compare the choice of one’s politics to betting on the team most favored to win the World Series at the beginning of the season. That team does indeed have the best chance of winning, but most of the time it does not end up being the champion. Most of the time our sports predictions are wrong, even if we are good forecasters on average. So it is with politics and policy.
Our attitudes toward others should therefore be accordingly tolerant. Imagine that your chance of being right is three percent, and your corresponding chance of being wrong is ninety-seven percent. Each opposing view, however, has only a two percent chance of being right, which of course is a bit less than your own chance of being right. Yet there are many such opposing views, so even if yours is the best, you’re probably still wrong. Now imagine that your wrongness will lead to a slower rate of economic growth, a poorer future, and perhaps even the premature end of civilization (not enough science to fend off that asteroid!). That means your political views, though they are the best ones out there, will have grave negative consequences with probability .97 (one minus three percent, the latter being the chance that you are right on the details of the means-end relationships). In this setting, how confident should you really be about the details of your political beliefs? How firm should your dogmatism be about means-end relationships? Probably not very; better to adopt a tolerant demeanor and really mean it.
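As a quick check on that arithmetic, here is the calculation spelled out, using the chapter’s own illustrative percentages; the supposition of “dozens” of rival views is my assumption, added only to make the numbers cohere.

```latex
% Tolerance arithmetic with the chapter's illustrative figures:
\[
  P(\text{your view is right}) = 0.03, \qquad
  P(\text{your view is wrong}) = 1 - 0.03 = 0.97.
\]
% Each rival view does worse, say P = 0.02 apiece, but with dozens of
% rivals the chance that some view other than yours is correct is the
% full 0.97. Backing the single most likely view still leaves you wrong
% about ninety-seven times out of a hundred.
```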
As a general rule, we should not pat ourselves on the back and feel that we are on the correct side of an issue. We should choose the course that is most likely to be correct, keeping in mind that at the end of the day we are still more likely to be wrong than right. Our particular views, in politics and elsewhere, should be no more certain than our assessments of which team will win the World Series. With this attitude political posturing loses much of its fun, and indeed it ought to be viewed as disreputable or perhaps even as a sign of our own overconfident and delusional nature.
Why the case for rights is compelling, and which rights are the important ones
The epistemic critique also helps us understand why we should respect individual rights rather than overturning them in favor of better consequences. It also helps us outline the limits of those individual rights.
Let us consider, for instance, the right of an innocent baby not to be murdered. Let’s say you believe in such a right, as I do, but you are then presented with a counterexample in which killing that innocent baby will, in the short run, raise national income by $5 billion. Normally, economists would value a life at much less than $5 billion; they’d typically value it in the neighborhood of $5 million, a thousandfold difference. Yet in this instance it is wrong to set up the comparison as “baby’s life vs. $5 billion” and then have to choose. The correct comparison is “baby’s life vs. a froth of massive uncertainty with a gain of $5 billion tossed in as one element of that froth.” When phrased that way, it is easier to side with preventing the murder of the baby. There is even a good chance (albeit a less than fifty percent chance) that stopping the murder of the baby will be good for GDP, too.