The World Philosophy Made


by Scott Soames


  Similarly, if the agent would take either side of a bet on p at odds of, say, 5 to 4, we do our computations on the equivalent bet with odds of 4 to 5 on the truth of ~p. This gives us a 4/9 subjective probability for ~p and a 5/9 subjective probability for p, which means that U(A) = (4/9 × 5) + (5/9 × 4) = 40/9. So [U(A) minus U(C)] / [U(B) minus U(C)] = 4/9. Since this is the probability of ~p, the probability of p is 5/9. In short, given the utilities of A, B, C, we can always construct a bet that measures the agent’s subjective probability of a proposition as [U(A) minus U(C)] / [U(B) minus U(C)].
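
  To make the arithmetic concrete, here is a minimal computational sketch of the example just given. It assumes, as the computation suggests, that the bet pays an outcome B worth 5 to the agent if ~p is true and an outcome C worth 4 if p is true; those two utilities are presumably carried over from the preceding discussion rather than stated here.

    from fractions import Fraction

    # Assumed utilities of the bet's two outcomes: B if ~p is true, C if p is true.
    U_B = Fraction(5)
    U_C = Fraction(4)

    # Odds of 4 to 5 on ~p correspond to credences 4/9 in ~p and 5/9 in p.
    prob_not_p = Fraction(4, 9)
    prob_p = 1 - prob_not_p

    # U(A): the utility of the sure thing the agent would trade for the bet.
    U_A = prob_not_p * U_B + prob_p * U_C
    assert U_A == Fraction(40, 9)

    # Ramsey's ratio recovers the agent's credence in ~p, and hence in p.
    measured = (U_A - U_C) / (U_B - U_C)
    assert measured == Fraction(4, 9)
    print(f"credence in ~p = {measured}, credence in p = {1 - measured}")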

  DEFINING AGENT-RELATIVE UTILITIES

  All of this will follow if we can assign numerical values to an agent’s utilities. But how can we do this? Ramsey’s answer is to elicit them by proposing options to the agent.

  Let us now discard the assumption that goods are additive and immediately measurable, and try to work out a system with as few assumptions as possible. To begin with we shall suppose … that our subject … will act so that what he believes to be the total consequences of his action will be the best possible. If we then had the power of the Almighty … we could, by offering him options, discover how he placed in order of merit all possible courses of the world. In this way all possible worlds would be put in an order of value, but we should [still] have no definite way of representing them by numbers. There would [still] be no meaning in the assertion that the difference in value between [outcomes] α and β was equal to that between γ and δ.… [W]e could test his degree of belief in different propositions by making him offers of the following kind. Would you rather have a world α in any event [i.e., for certain, without any contingency]; or a world β if p is true, and a world γ if p is false? If, then, he were certain that p is true, he would simply compare α and β and choose between them as if no conditions were attached.13

  Now Ramsey imposes two restrictions on the proposition p in his proposed option: α for certain vs. β if p is true, and γ if p is false. First, to avoid confusions that may plague the agent’s computations involving logically complex claims, Ramsey requires p to be a simple (atomic) proposition. Second, he requires p to be “ethically neutral,” by which he means that the agent has no evaluative interest in the world being as p describes; no preference that p be true, or that it be false. For example, consider the proposition that in the first month of 1899 with an even number of days, and in which the number of odd-numbered days on which it rained in Seattle is not equal to the number of even-numbered days on which it rained, the odd-numbered rainy days outnumbered the even-numbered rainy days. I am neutral about this proposition; I place no value on its truth or its falsity.

  Next, Ramsey defines what it is to believe an ethically neutral proposition p to degree ½ (i.e., to assign it a subjective probability of ½). Let α and β be outcomes on which the agent places some value, and moreover prefers one to the other. An agent believes p to degree ½ if and only if the agent has no preference between the options α if p is true, β if p is false and α if p is false, β if p is true, despite preferring α, let’s say, to β. The fact that the agent is indifferent between the two options despite preferring the first, provided that p is true, and the second, provided that p is false, reflects the fact—or better, just is the fact—that the agent’s degree of belief in p is ½.
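
  The argument behind this definition can be checked symbolically. The following sketch (using the sympy library; the variable names are mine) solves the indifference condition for the agent’s credence q in p and confirms that q = ½ is the only solution when the utilities of α and β differ.

    from sympy import symbols, Eq, solve

    # u_a, u_b: the agent's utilities for alpha and beta (assumed unequal).
    # q: the agent's degree of belief in the ethically neutral proposition p.
    u_a, u_b, q = symbols('u_a u_b q')

    # Option 1: alpha if p, beta if not-p.  Option 2: alpha if not-p, beta if p.
    option1 = q * u_a + (1 - q) * u_b
    option2 = q * u_b + (1 - q) * u_a

    # Indifference between the two options forces q = 1/2, whatever u_a and
    # u_b are, provided they differ.
    print(solve(Eq(option1, option2), q))  # -> [1/2]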

  This is Ramsey’s Archimedean point. Because p is ethically neutral, we can define what it is to have a credence of ½ without having first to measure the agent’s utilities. Since we can’t measure A’s utilities entirely independently of A’s credences, we need these special credences to give us a measure of A’s utilities. Ramsey uses the definition of what it is to have a credence (degree of belief) of ½ to define what it means for the difference in value between outcomes α and β to be equal to the difference between outcomes γ and δ. Once we have this, we will be able to quantify agent-relative utilities by assigning them numerical values. When these numerical utilities are in place, we can then use the relationship already illustrated between an agent’s subjective probabilities and the agent’s utilities to assign the agent subjective probabilities other than ½ for propositions. Thus, we will have completed the conceptual task of providing precisely defined conceptions of probability and utility sufficient to ground modern theories of rational decision and action.

  The following diagram tracks our journey. We have already taken the first two steps. We have also seen how, once we have taken the third and fourth, the fifth and sixth will follow from our previous discussion. The task now is to establish (3) and (4).

  [Diagram: Ramsey’s Conceptual Path]

  Step 3 is a variation on what we have already done. For Ramsey, to say that the difference, for an agent A, in value between α and β (the former preferred to the latter) is equal to the difference in value between γ and δ (the former preferred to the latter) is to say that for any ethically neutral proposition p believed to degree ½, A has no preference between options (i) α if p is true, δ if p is false and (ii) β if p is true, γ if p is false. This tells us that the value to A of (i)—which is half (the value of α plus the value of δ)—is the same as the value to A of (ii)—which is half (the value of β plus the value of γ). So, the value of α plus the value of δ equals the value of β plus the value of γ. This can be so only if the loss to A of the difference in values between α and β—reflected in the first parts of options (i) and (ii) (when p is true)—is exactly compensated by the gain to A of the difference between the values of γ and δ—reflected in the second parts of (i) and (ii) (when p is false). Otherwise put, α minus β equals γ minus δ. This, according to Ramsey’s definition, is what it means for the value to A of the difference between α and β to be the same as the value to A of the difference between γ and δ. Although this doesn’t, by itself, determine which numbers we assign to agent-relative utilities, it sharply constrains such assignments. For example, if α is assigned 9 and β is assigned 4, then the difference between the numbers assigned to γ and δ must be 5, matching the difference between α and β.
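
  A quick numerical check of this definition, with the text’s values of 9 for α and 4 for β; the values 8 for γ and 3 for δ are illustrative choices of mine, picked only so that they satisfy the required condition.

    from fractions import Fraction

    half = Fraction(1, 2)  # credence in the ethically neutral proposition p

    # Alpha and beta take the text's example values; gamma and delta are
    # illustrative, chosen so that gamma - delta = alpha - beta = 5.
    U = {'alpha': 9, 'beta': 4, 'gamma': 8, 'delta': 3}

    # Option (i): alpha if p, delta if not-p.
    # Option (ii): beta if p, gamma if not-p.
    value_i = half * U['alpha'] + half * U['delta']
    value_ii = half * U['beta'] + half * U['gamma']

    # Indifference (equal expected values) holds exactly because the value
    # differences match: U(alpha) - U(beta) == U(gamma) - U(delta).
    assert value_i == value_ii == 6
    assert U['alpha'] - U['beta'] == U['gamma'] - U['delta'] == 5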

  Our final step is to further elaborate Ramsey’s assignments of agent-relative utilities, using ethically neutral propositions p believed to degree ½, by showing how to numerically calibrate the scale of the agent’s values between any given outcome (state of the world) α and any less preferred outcome (state of the world) β. We start with the option (i) α if p, β if not p. We let W(½) be a state of the world in which, were the agent to take himself to be in it, he would be indifferent between buying and selling this option after it is described to him. Its value to the agent is defined to be whatever value is assigned to β plus half the difference between the values of α and β. (So W(½) is intermediate in value between α and β.) Next, we find a second ethically neutral proposition p* probabilistically independent of p for the agent, also believed to degree ½. We let W(¾) be a state of the world in which, were the agent to take himself to be in it, he would be indifferent between buying and selling option (ii) α if p*, W(½) if not p*. The value of this option is the value assigned to W(½) plus half the difference between the values assigned to α and W(½)—and this is the utility of W(¾).

  In the same fashion, we construct option (iii) β if p#, W(½) if not p#, evaluated at a world-state W(¼), the value to the agent of which is the value of β plus half the difference between the values of β and W(½). (As before, W(¼) is intermediate in value between W(½) and β.) Having divided the range between α and β into intervals of equal value to A, we could, if we wished, assign the five points we have identified the utilities 1, 2, 3, 4, and 5, or any multiple thereof. Moreover, the process could be repeated as long as we can continue to find ethically neutral, probabilistically independent propositions that are believed to degree ½. In this way, we can assign numerical values to all outcomes the agent prefers to β while also preferring α to them.
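
  Here is a minimal sketch of the calibration just described, with β and α given the illustrative utilities 1 and 5 (my choices; any pair with α preferred would do). Each intermediate world is valued at the lower endpoint plus half the difference, and the five points come out equally spaced, as the text’s assignment of 1 through 5 requires.

    from fractions import Fraction

    def midpoint(u_low, u_high):
        """Utility of a world valued exactly as a 50/50 option between two
        outcomes: the lower value plus half the difference."""
        return u_low + Fraction(1, 2) * (u_high - u_low)

    # Illustrative endpoint utilities; any values with u_alpha > u_beta work.
    u_beta, u_alpha = Fraction(1), Fraction(5)

    u_half = midpoint(u_beta, u_alpha)            # W(1/2)
    u_three_quarters = midpoint(u_half, u_alpha)  # W(3/4)
    u_quarter = midpoint(u_beta, u_half)          # W(1/4)

    points = [u_beta, u_quarter, u_half, u_three_quarters, u_alpha]
    print([int(p) for p in points])  # -> [1, 2, 3, 4, 5]: equally spaced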

  SOCIAL-SCIENTIFIC APPLICATIONS

  This completes the account of the philosophical conception of subjective probabilities and agent-relative utilities growing out of ideas originally expressed by Frank Ramsey and later developed in various ways by philosophers and philosophically minded social scientists.14 These ideas are now central to various different but related philosophical theories of rational decision and action. Versions of the ideas have also been used by social scientists to describe economically, politically, and socially significant behavior, and to critique political and economic institutions.

  To understand these uses, one must remember that the goal of the formal model is not to directly describe, or to recommend, detailed processes by which agents do, or should, make decisions. The goal is to specify factors that determine the effectiveness of actions as means to desired ends, and to indicate how that effectiveness might be measured. The psychologically real processes by which we make decisions depend on our aptitudes, the time we have to deliberate, the nature and availability of relevant evidence, the cost—in time, effort, and foregone opportunities—of searching for new evidence, and a host of other factors. These factors vary from agent to agent and case to case. But whatever process we employ in making a given decision, the question of how successful we are in tailoring our actions to achieve our ends is a measure of how well we choose. The more we learn about the determinants of this evaluation, the greater chance we have of better achieving our goals in the future. Finally, it must not be thought that optimally effective rational choice is optimally efficient selfish choice. Nearly all of us value the welfare of others, placing preeminent value on the welfare of certain selected others, even if their welfare sometimes can be purchased only at our own expense. Rational choice is as valuable in achieving the aims of a saint as it is in achieving the aims of a sinner.

  The theory of rational decision with agent-relative probabilities and utilities allows one to extend classical accounts of rational economic behavior, measured in dollars and cents, to utility-maximizing behavior in broader settings. Thus, it is natural that those leading this extension have been the Nobel Prize–winning economists Kenneth Arrow, James Buchanan, Gary Becker, and George Stigler, along with other leading economists such as Duncan Black, Anthony Downs, William Niskanen, Mancur Olson, and Gordon Tullock. Their applications of the decision-theoretic approach have been remarkably wide-ranging, including, especially in the case of Becker (1930–2014), the economic costs of social discrimination, the social utility of investments in education, the effect of certain kinds of negative incentives on deterring crime, and emerging trends in marriage and the family.15

  The following passage from Becker’s Nobel Prize lecture in 1992 underlines his recognition of how Ramsey’s pluralistic model of agent-relative utilities makes it possible to extend traditional economic thinking far beyond its usual bounds.

  [T]he economic approach I refer to does not assume that individuals are motivated solely by selfishness or gain. It is a method of analysis, not an assumption about particular motivations. Along with others, I have tried to pry economists away from narrow assumptions about self-interest. Behavior is driven by a much richer set of values and preferences. The analysis assumes that individuals maximize welfare as they conceive it, whether they be selfish, altruistic, loyal, spiteful, or masochistic. Their behavior is forward-looking, and it is also consistent over time. In particular, they try as best they can to anticipate the uncertain consequences of their actions.16

  This passage is followed by a summary of Becker’s contributions to (i) the causes and costs of discrimination plus the most promising ways of minimizing it, (ii) the effects on criminal behavior of changes in the probabilities of detection and conviction, and the utilities associated with different types and durations of punishment, (iii) the personal, social, and economic effects of various kinds of education and training, and (iv) the formation, structure, and dissolution of families.

  Apart from Becker, most public-choice economists have focused their decision-theoretic methods on the interface between government, politics, and economics. Typically described as the application of economic reasoning to new domains, this new social science methodology is, at bottom, an ambitious application of the philosophical framework for evaluating means-end decisions created by Ramsey, who was himself a brilliant, though amateur, economist, in addition to being one of the leading philosophers of the early twentieth century.17 This common terminological appropriation, describing a model for understanding all means-ends reasoning as economic reasoning, is illustrated by the following passage from Public Choice: A Primer, by the British economist Eamonn Butler.

  Will the view from the next hill be worth the effort of climbing it? How much time should we spend in finding exactly the right birthday card for a friend? No money is at stake, yet these are still economic decisions in the broad sense of the word. They involve us weighing up how much time or effort we think it worth spending to achieve our aims, and choosing between the different possibilities. Economics is actually about how we choose to spend any available resources (such as our time or effort) in trying to achieve other things that we value more highly—it is not just about financial choices.18

  One can get a sense of public choice theory from the way in which it ties together the problem long known as market failure with the important, but previously under-conceptualized, problem of government failure. First, market failure. The beauty of free competitive markets is that typical transactions are voluntary exchanges of goods and services by parties each of whom exchanges something of value for something he or she takes to be of greater value. If, as is natural to think, each participant is, generally, the best judge of what in the situation is best for him or her, then the transaction will typically result in a net gain in utility for all participants. The assumption that this is the normal case is Adam Smith’s invisible hand—the claim that agents maximizing their own utilities through voluntary transactions under conditions of fair competition end up maximizing the utility of society as a whole, despite not aiming at that.

  Smith’s idea is powerful, but the extent to which voluntary interactions that increase the utility of each party typically aggregate so as to increase total social utility is a matter of conjecture. What is not a matter of conjecture is that there are important cases in which they do not. These cases occur when voluntary transactions between parties impose costs on (decrease the utility of) those not party to the transaction—for example, when they impose health and safety risks on others. When these costs, called externalities, exceed the gains generated by the original transaction, aggregate utility falls. The name for this is market failure. When recurring failures of this sort can be pinpointed, they often call for correction by government.
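
  A toy example, with numbers invented purely for illustration, shows how an externality can turn a mutually beneficial exchange into a net social loss.

    # All figures are hypothetical, chosen only to illustrate the arithmetic.
    buyer_value = 10   # what the good is worth to the buyer
    seller_cost = 6    # what producing it costs the seller
    price = 8          # agreed price

    buyer_gain = buyer_value - price    # 2: buyer pays less than the good's worth
    seller_gain = price - seller_cost   # 2: seller receives more than the cost
    private_gain = buyer_gain + seller_gain  # 4: both parties are better off

    externality = 5    # cost imposed on third parties, e.g., a health risk

    aggregate_change = private_gain - externality
    print(aggregate_change)  # -1: the externality exceeds the gains, so
                             # aggregate utility falls: a market failure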

  While the idea of market failure is familiar, its cousin, government failure, was little recognized prior to public choice theory. Public choice theorists have made it familiar by observing that, like actors in the private sector (individuals, businesses, corporations, and unions), actors in the public sector (elected officials and their staffs, department and agency heads and their staffs, and members of regulatory bodies) are decision makers with utilities and subjective probability functions of their own. When one examines their actions as functions of their utilities and subjective probabilities, one finds situations in which these utility maximizers respond to incentives that sometimes reduce rather than increase social utility, and may even defeat the stated purposes for which the laws and regulations they enforce were enacted. The name for these cases is government failure. Identifying such failures, and attempting to alleviate the problems they create, has been the main focus of work in public choice theory.

  One such failure, identified as political rent seeking in Tullock (1967), involves government restrictions—including unnecessary licensing fees (for taxis, beauticians, manicurists, and the like), extensive registration and reporting requirements, tariffs, quotas, and special subsidies. Since these nearly always favor established enterprises, they tend to make it more difficult for new firms to enter existing markets, thereby restricting competition, raising prices, and imposing costs on the general public. Since the gains to favored enterprises can be great, it often makes sense for these enterprises to invest large sums in lobbying, in campaign contributions, and in other forms of behind-the-scenes politicking, thereby diverting resources that could otherwise be used productively. Since the costs to the public of the government’s actions, in the form of higher prices and reduced access to goods and services, are often unavoidable for consumers, while their causes are invisible to average voters, political and governmental actors often have much to gain and little to lose from policies detrimental to the public welfare.

  Similar results generated by a similar logic of interacting incentives have been found in studies of governmental regulation of business and industry by other public choice theorists, including the economist George Stigler (1911–1991). Suspecting that the same combination of concentrated benefits for the few with highly diffused, and largely invisible, costs for the many would generate perverse incentives for regulators, Stigler gathered empirical evidence that much regulation ends up benefiting established players in an industry at the expense of newcomers and the general public. In 1962, he and his coauthor Claire Friedland found that regulation of electricity prices had only a tiny effect on holding down those prices, while in 1971, he argued that instead of reducing the harmful effects of targeted monopolies, government regulation tended to reinforce them by curtailing competition.19 In addition to being among the contributions for which Stigler won the Nobel Prize, these ideas influenced the deregulation of the airline, transportation, and natural gas industries in the United States in the 1970s. Stigler was also cited by the Nobel committee for pioneering the economics of obtaining, organizing, and disseminating information.

  No project has received more attention from public choice theorists than the task of extracting defensible public choices from the preferences and subjective probabilities of individuals. Although some limited results have been achieved, the biggest questions remain unresolved. Ideally, one might hope to extract collective-relative utilities and subjective probabilities from the agent-relative utilities of members of the collective. But no one seems to know exactly how to do that. The chief intellectual obstacle arises from a result of another Nobel Prize winner, Kenneth Arrow, in his Social Choice and Individual Values (1951). There he demonstrated the impossibility of finding a general method for converting any realistic set of individual preferences over at least three options (without numerical utilities)—subject to seemingly necessary conditions for treating individual preference orderings fairly—into an acceptable social preference ordering. After decades of research into modifications of Arrow’s original conditions, no general and widely accepted positive solution has emerged. Whether this might change if the inputs were individual utilities (preferences with numerical values) is doubtful, in part because we don’t know how to objectively compare the agent-relative utilities of different individuals. Thus, what might be called fully general social decision theory doesn’t yet exist.20
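
  Arrow’s theorem itself is a general impossibility result, but the difficulty it generalizes can be seen in the classic Condorcet profile below (a standard textbook example, not Arrow’s proof): pairwise majority voting over three options can yield a cyclic, and hence unusable, social preference.

    # Three voters' rankings over options A, B, C (best to worst): the
    # classic Condorcet profile illustrating cyclic majority preference.
    ballots = [
        ['A', 'B', 'C'],
        ['B', 'C', 'A'],
        ['C', 'A', 'B'],
    ]

    def majority_prefers(x, y):
        """True if a strict majority of voters rank x above y."""
        wins = sum(1 for b in ballots if b.index(x) < b.index(y))
        return wins > len(ballots) / 2

    for x, y in [('A', 'B'), ('B', 'C'), ('C', 'A')]:
        print(f"majority prefers {x} over {y}: {majority_prefers(x, y)}")
    # All three lines print True: socially, A beats B, B beats C, and C
    # beats A, so no coherent social ordering exists for this profile.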

 
