The World Philosophy Made


by Scott Soames


  For the slogan to have its intended force, the consequence relation must be conceptual, not merely logical—e.g., necessary consequence or a priori consequence. A proposition Q (expressed by a sentence) is a necessary consequence of a proposition P if and only if it is impossible for P to be true without Q being true—if and only if, for any state w that it is possible for the world to be in, if P would be true were the world in state w, then Q would also be true were the world in w. Proposition Q is an a priori consequence of proposition P if and only if it is possible to determine, by deductive reasoning alone, that Q is true if P is true, without appealing to empirical evidence to justify one's conclusion. Presumably, those who say that one cannot derive ought from is mean that no claim about what one ought to do is both a necessary and an a priori consequence of any factual claims.
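
  In symbols (a schematic rendering in my notation, not the author's, with \(\mathcal{W}\) standing for the set of states the world could be in):

\[
P \Rightarrow_{\mathrm{nec}} Q \;\iff\; \text{for all } w \in \mathcal{W}:\ \big(P \text{ is true at } w\big) \rightarrow \big(Q \text{ is true at } w\big)
\]

A priori consequence adds an epistemic condition on top of this: there is a justifying route from P to Q by reasoning alone, with no appeal to empirical evidence.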

  This way of understanding the alleged impossibility of deriving ought from is is more interesting. To see why, we need to say more about the truth conditions of a statement that A ought to do X. To make things easier, let’s begin with cases in which the ought statement expresses, not a judgment about what is morally required, but a prudential judgment about what is best for A, understood as what contributes most to A’s welfare (from a range of relevant acts). What is welfare? It is natural to think of it as consisting of the advancement of one’s most basic interests and the development of one’s most important capacities, which contribute most to one’s flourishing and well-functioning as a human being. These, in turn, depend on human nature. According to the view of human beings as intensely social animals to be advocated here, the basic constituents of our welfare may be said to be: health, safety, companionship, membership in a community, freedom of action, development of our physical and intellectual capacities, satisfaction of our native curiosity, enjoyment of sensual pleasures, opportunities for excitement and the pursuit of difficult goals in concert with others, the ability to contribute to the welfare of others we care about and to benefit from those who care about us, and the knowledge that we are contributing to a larger human enterprise that will outlast us.

  Welfare, so understood, comes in degrees. Normal human beings usually care about their own welfare and wish to advance it, while differing on how much significance they attach to the various components that go into it. They also care about other things, for which they are sometimes willing, quite properly, to sacrifice their welfare. In addition, they are often either ignorant of, or mistaken about, what their welfare consists in and what will advance it. Though they typically want to be better off—to increase their welfare—they not infrequently want or desire things that are inconsistent with that goal.

  What, then, is the connection between one’s welfare and one’s reasons for action? First, the fact that performing a given action X would—objectively—increase one’s welfare provides one with a reason to perform X—the greater the increase, the stronger the reason—whether or not one recognizes that X would do so. Second, recognizing the fact that performing X would increase one’s welfare nearly always provides one with some desire or motivation to perform X, even though (a) the intensity of one’s desire need not be proportionate to the strength of the reason, and (b) even when it is, one may have stronger reasons or more intense desires to do something else. What, in light of this, are the truth conditions of prudential ought claims? Putting aside both moral reasons and reasons stemming entirely from a concern for the welfare of others, we may take a prudential use of A ought to do X to be true if and only if A has more reason to do X than to do anything else (from a range of relevant alternative acts); in short, if and only if A’s doing X would most advance A’s welfare.
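
  Schematically (the symbols \(\mathcal{R}\) and \(W_A\) are illustrative, not the author's): letting \(\mathcal{R}\) be the range of relevant alternative acts and \(W_A(X)\) the degree to which doing X would advance A's welfare,

\[
\textit{A ought, prudentially, to do } X \;\iff\; W_A(X) > W_A(Y) \ \text{ for every } Y \in \mathcal{R} \text{ other than } X.
\]

On this account, the strength of A's prudential reason for an act simply tracks the act's objective contribution to A's welfare.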

  In assessing what it is for A to have more reason to do X than to do Y, there are two confusions to avoid. First, when we ask, at a given time, which actions A has most (prudential) reason to perform, we are not asking which of A’s desires or interests are currently most intensely or urgently felt; we are asking what, on balance, would maximize A’s welfare, considered as a state that has its ups and downs over time. It is perfectly possible for agents who are otherwise rational, but who have not developed the necessary self-discipline, to allow the intensity of immediate desire to lead to actions they know to be contrary to their larger interests. Second, we must not, harkening back to chapter 8, identify maximizing A’s welfare with maximizing the expected utility (from A’s point of view) of A’s choice of one action from a range of alternative actions (which is computed from A’s utilities for outcomes together with A’s subjective probabilities that performing certain actions would produce those outcomes). If I know that A is ignorant of relevant facts, or that some of A’s beliefs are false, I may know that some of A’s subjective probabilities are unrealistic, and, for that reason, I may be better able to evaluate the benefits to A of a given course of action than A is. If so, my remark “You ought (or ought not) do X” may be true, even if it doesn’t match A’s own ranking of expected utilities. My remark will be true if and only if A’s performing X will maximize A’s total welfare when compared with other relevant acts.
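
  For clarity, here is the standard decision-theoretic definition the parenthetical alludes to (stated in my notation; \(O\), \(p_A\), and \(u_A\) are illustrative symbols):

\[
EU_A(X) \;=\; \sum_{o \in O} p_A(o \mid X)\, u_A(o),
\]

where \(O\) is the set of possible outcomes, \(p_A(o \mid X)\) is A's subjective probability that doing X would produce outcome o, and \(u_A(o)\) is the utility A assigns to o. The point in the text is that the prudential ought tracks A's objective welfare \(W_A(X)\), not \(EU_A(X)\); when A's subjective probabilities are unrealistic, the two can rank the same acts differently.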

  Finally, we ask, “What true factual premises are needed to derive prudential conclusions about what A ought to do that are necessary and a priori consequences of those premises?” The answer should be clear. We need truths about what A’s welfare consists in, plus truths about what outcomes would be produced were A to perform various actions. In many situations, there may be considerable ignorance, uncertainty, and even error about these matters. Because of this, we often won’t know which factual truths would allow us to derive truths about what, prudentially, A ought to do. But sometimes we will, and even when we don’t, our ignorance is no reason to doubt that there are such factual truths. Thus there is no compelling reason to doubt that, often, some prudential claims about what A ought, or ought not, do will be both necessary and a priori consequences of factual truths about A and A’s situation.
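
  The claimed derivation can be sketched as follows (a schematic summary in my notation, not a proof). Where \(F_{\text{welfare}}\) collects the truths about what A's welfare consists in, and \(F_{\text{outcomes}}\) the truths about what each relevant act would produce,

\[
F_{\text{welfare}} \wedge F_{\text{outcomes}} \;\vdash\; \textit{A ought, prudentially, to do } X,
\]

with \(\vdash\) marking a priori derivability; the conclusion is also claimed to be a necessary consequence of the premises. Both premises are purely factual; neither contains an ought.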

  Nevertheless, this result is limited, even if we continue to put distinctively moral reasons aside. To see this, consider a case in which A contemplates an action X that would benefit someone B whom A cares about a great deal, even though performing X would diminish A’s welfare. This situation will arise when A’s knowledge of (or belief in) B’s benefit increases A’s welfare less than performing X costs A’s welfare, a cost that could be avoided by doing something else. Because of this, A ought not, prudentially, do X, because A’s purely prudential reasons for doing something else outweigh his prudential reasons for doing X.

  However, it’s not obvious that this means that A ought not do X, all things considered—even if we continue to bracket moral reasons. If A cares more about B’s welfare than A’s own, A may think “I ought to do X” while being fully aware of what doing X would involve for both of them. A need not be thinking that benefiting B is morally required; in some cases it may not be. A may simply recognize that since A wants to benefit B more than A wants anything else, doing X will bring about the result that A most desires. Surely, we can’t say, in all such cases, that it would be wrong (morality aside) for A to do X, or that A ought not, all things considered, do X. This suggests that the ought statements we have been considering may be equivalent to different maximizing statements. The prudential statement is equivalent to the claim that doing X would be more beneficial to A’s welfare than performing any relevant alternative act, while the “all things considered” statement (in the circumstance we are imagining) is equivalent to the claim that doing X will satisfy A’s deepest desire. Presumably, these oughts can be derived from factual statements about A, the actions under consideration, and the targets of those actions.
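
  Put schematically (again in my notation), the two equivalences in the imagined circumstance are:

\[
\begin{aligned}
\textit{A ought, prudentially, to do } X \;&\iff\; W_A(X) > W_A(Y) \text{ for every relevant alternative } Y;\\
\textit{A ought, all things considered, to do } X \;&\iff\; \text{doing } X \text{ satisfies } A\text{'s deepest desire (here, benefiting } B\text{).}
\end{aligned}
\]

Each right-hand side states a fact about A, the available acts, and their targets, which is why both oughts appear derivable from what is.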

  DERIVING MORAL OUGHT FROM FACTUAL PREMISES

  Why should statements about what one morally ought to do be different? The challenge is to find facts about normal human agents and their relations with others that are capable (i) of supporting the truth of statements to the effect that they morally ought to do certain things and (ii) of providing them with what they can, in principle, recognize to be reasons—facts with the potential of moving them—to perform the required actions. In looking for such facts, we look for other people an agent cares about, plus relationships and activities that the agent values in which he or she is, in one way or another, involved with others. The others whom one cares about (to widely varying degrees) may be family, friends, loved ones, associates, coworkers, members of the same profession, fellow citizens, and even all of humanity, including the unborn. They may include anyone on whose welfare one places some positive value (large or small) whom one imagines might in some way be affected by one’s actions. The relationships and activities with others that one values encompass any reciprocal or coordinated action from which the participants derive value that wouldn’t be available if they couldn’t, in general, count on others to play their expected parts. These include personal relationships, promises and commitments, participation in business and professional practices, common market-based economic activities, truthful linguistic communication, and many more.

  Moral reasons for one’s actions are facts about the impact of those actions on the welfare and legitimate activity-based or relationship-based expectations of others. The fact that an action one is capable of performing would have a positive effect on the welfare of those one cares about is a broadly moral reason for performing it—the stronger the effect, the stronger the reason. In a different sort of case, the fact that an action conforms to the legitimate activity-based expectations of those with whom one voluntarily interacts in an activity providing benefits for all is also a moral reason for performing the act. To understand this sort of reason, imagine yourself participating in a voluntary group activity that benefits all if each plays his or her part, but which may fail to be beneficial if one or more participants opt out. Realizing this, and wishing not to incur the anger and negative consequences that would result from the discovery that one is shirking, one has a self-interested reason not to opt out. This, in turn, may, and very often does, provide the basis of a moral reason for conforming to the legitimate expectations of others. This occurs either when one cares, to some degree, for the other participants, or when one doesn’t want to be the kind of person who would let others down—e.g., the kind of person one would oneself condemn if one were to view one’s action from the perspective of another participant. The strength of this second sort of moral reason is proportionate to the importance of one’s role in the activity, the benefits it produces for the participants on the particular occasion in question, and the centrality of that general type of activity in the social life of which one is a part. Summing up, we may say that, in general, the acts one morally ought to perform are those one has the strongest moral reasons to perform, provided that they don’t require one to make sacrifices out of proportion to the benefits for others they achieve.

  Which moral reasons are stronger than which others, how they combine to produce an act’s overall moral stringency, and how and when that stringency is discounted, in determining what one morally ought to do, by the sacrifices to one’s welfare entailed by performing it, are complex matters studied by normative ethical theorists. I don’t know how to reduce these to any precise formula, and I am not sure anyone does. But the foundational point remains. All the determinants of these moral calculations—one’s own welfare, the effects of the action on one’s welfare and that of others, and the relation of the action to the relationship- and activity-based expectations of others with whom one is involved—are factual matters. If, in addition, the relative strengths of these factors and their manner of combination are also ordinary factual matters, then it may, in principle, be possible to derive moral oughts from factual premises about what is. Although I haven’t demonstrated this to be so, it’s not obvious that we should take for granted that it isn’t. If we don’t take that for granted, we should be open to the idea that moral facts, like other facts, are capable of being investigated and known, even if that knowledge is sometimes very hard to achieve.

  There is, however, another worry to be confronted. Although the idea of moral objectivity is welcome, one might worry that it comes at the price of an objectionable moral relativity. In grounding moral reasons for action in the interests and values of the agent, one must give up the Kantian idea that moral obligations are binding on all rational agents, who could, in principle, entirely lack fellow feeling with, or compassion for, others. The point is illustrated by the reaction of a class of possible rational agents to three facts that would, in ordinary life, be regarded as relevant to establishing the truth of claims about moral obligation: (i) the fact that lying or breaking a promise subverts the trust that makes one’s lie or promise possible (which, all other things being equal, would be morally objectionable), (ii) the fact that one who avoids sharing the burden of a collective effort from which one benefits asks others to do what one refuses to do oneself, and (iii) the fact that benefiting oneself will, in certain situations, seriously harm innocent others. It is natural to think that facts such as these support the truth of moral claims about what one ought, or ought not, do only if they provide reasons for all agents to act in the morally required way.

  Do they? Imagine a rational being who lacks any concern for others, who coldly calculates benefits for himself alone, and always acts accordingly. Because facts (i)–(iii) are unconnected to his interests, they won’t count as reasons for him. To be sure, a race of relentless interest-maximizers might sometimes coordinate their actions to achieve mutually beneficial ends. They may then behave in a way that appears to be cooperative. But they won’t, thereby, behave morally, because they will opt out whenever they can enjoy the benefits without incurring the costs of participation, and because genuine affection, loyalty, trust, and reciprocity will be absent.

  This scenario suggests that some facts we commonly take to support moral conclusions don’t provide reasons for all conceivable rational agents to act. How, then, do they provide us with binding reasons? How do facts that can, in principle, be known, without one’s taking any special motivational stance toward them, facts with no conceptual connection to the values and interests of the knower, count as genuinely moral? Couldn’t you and I know those facts, while understanding our own interests perfectly, without taking them to provide us with reasons to act? If so, then the idea that we have other-regarding duties that can’t be shirked by adopting different motivating ends is a fairy tale.

  This is a powerful challenge to moral objectivity. Surely, reasons for action do depend on potentially motivating values and interests. If these can, in principle, vary without limit from one rational agent to another, no mere facts can provide all such agents with reasons to perform other-regarding acts. Thus, there is no objective morality that binds all possible rational agents. This conclusion has, plausibly, been taken to be a conceptual truth by many philosophers and social scientists for decades. If it is such a truth, nothing can override it.

  The way out of this intellectual cul-de-sac is to recognize that we do not seek the impossible—an objective morality for all possible rational beings. We seek an objective morality grounded in human nature, governing all normal human beings. This is what the tradition in moral philosophy stemming from Aristotle, Hume, Hutcheson, and Smith, through the logical positivist Moritz Schlick, tried to provide. However, as Schlick emphasized, the task can no longer be left to philosophers alone.1 If objective morality is to be grounded in sociological, psychological, and biological facts of human nature, philosophers, natural scientists, and social scientists must join forces in ways that are only beginning to be explored.

  THE EMPIRICAL SEARCH FOR FACTUAL, MORALLY RELEVANT PREMISES

  One of the most promising steps in this direction was taken by the renowned social scientist James Q. Wilson (1931–2012), in The Moral Sense.2 Its central philosophical thesis is that there is such a thing as empirical knowledge of moral facts, which can be advanced and made more systematic by social scientific research. Its central social-scientific thesis is that we have a moral sense consisting of a complex set of social and biological dispositions relating us to our fellows, which is the product of our innate endowment and our early family experience. Although the moral sense doesn’t by itself yield a comprehensive set of universal moral rules, it can, Wilson argues, provide a factual basis relevant to the moral assessment of agents, their acts, and their policies in widely different circumstances.

  Because his theses are empirical, it follows that, for him, moral truths grounded in human nature are not knowable a priori. Whether or not they are necessary truths (in the philosophical sense) depends on whether or not the parts of our innate endowment on which the moral sense depends are necessary to being human (in the sense that loss of them in any possible future evolution would result in new, nonhuman organisms). No matter how that turns out, his theses are directly relevant to the question of whether it is possible to derive moral claims about what one ought, or ought not, do as a priori and necessary consequences of true factual premises. If Wilson is right, this may be possible—provided our premises include, not only a full description of our inherent human nature, but also a complete specification of circumstances giving rise to the moral questions facing the agent.

 
