The World Philosophy Made


by Scott Soames


  Next consider possessive noun phrases NP’s N. Interpreting them requires identifying the possession relation R holding between the referent of the possessor NP and the individual designated by the phrase. When N is a relational noun, it provides a default possession relation. The default designation of ‘Tom’s teacher’ is someone who bears the teaching relation to Tom; the default designation of ‘Tom’s student’ is one who bears the converse of that relation to Tom. Similar remarks apply to ‘Tom’s mother’, ‘Tom’s boss’, and ‘Tom’s birthplace’. Crucially, however, the default choice can be overridden. Imagine that two journalists, Tom and Bill, have each been assigned to interview a local student. When this is presupposed, one can use ‘Tom’s student’ to refer to the student interviewed by Tom, and ‘Bill’s student’ to refer to the one interviewed by Bill. In these cases what is asserted isn’t fully determined by the linguistic meanings of the sentences used.

  The lesson extends to uses of possessive noun phrases involving non-relational nouns, like ‘car’ and ‘book’, to which a potential possessor may bear many different relations. ‘Tom’s car’ can be used to designate a car he owns, drives, is riding in, or has bet on in the Indianapolis 500; ‘Pam’s book’ may be used to designate a book she wrote, plans to write, is reading, or has requested from the library. As before, this isn’t ambiguity; it is non-specificity. The meaning of NP’s N requires it to designate something to which N applies that stands in relation R to what NP designates. But the meaning of the sentence doesn’t determine R; the context of use does. Hence, linguistic meanings of sentences containing possessive noun phrases often aren’t what they are used to assert.

  Temporal modification can also be incomplete. Descriptive phrases like “the chair of the department” and “the owner of the Harrison Street house” lack any overt temporal specification. Depending on the context, the former can be understood as the one who chairs the department, the one who chaired it, or the one who will chair it. The same is true of “the owner of the Harrison Street house.” One who says “The owner of the Harrison Street house is temporarily away on business,” shortly after the house has burned down, asserts that the person who owned the Harrison Street house is temporarily away. In other contexts, what one asserts by uttering the same sentence is that the person who presently owns the house is away. The linguistic meanings of the descriptive phrases lack temporal specifications, which must be contextually added before one has a candidate for assertion.

  Tenseless descriptive phrases also occur in crucial legal contexts. Two prominent examples are found in the following fragment of the First Amendment to the U.S. Constitution: “Congress shall make no law … abridging the freedom of speech, or [the freedom] of the press.” This statement promises that the government will never abridge two things—the freedom of speech and the freedom of the press. To understand the promise you must know that you can’t abridge something that isn’t already a reality. To abridge War and Peace is to truncate the original. So, to abridge the freedom of speech and of the press is to limit, restrict, truncate, or otherwise diminish the existing freedom to speak, write, communicate, and publish. The freedom to do these things when? At the time the Constitution was adopted, of course. What can’t be abridged is the kind of freedom that existed to do these things then. Thus, the original asserted content of this fragment of the First Amendment is roughly as follows:

  Congress shall not abridge (restrict, truncate, or diminish) freedoms enjoyed in America at the time (1788) to speak, write, communicate, publish, and disseminate information and opinion.

  The new appreciation of the pervasive interpenetration of contextual and semantic information has opened a second new frontier in the study of linguistic meaning and the communicative use of language that supplements the one prompted by the recognition of a more realistic, cognitive conception of propositions as pieces of information. This second new development has brought with it two important questions: What normative principles govern the rational, efficient, and cooperative exchange of information that characterizes much linguistic communication? and What are the psychological processes involved in the extraction of asserted and conveyed information by normal human language users? Both have been investigated since the mid-1960s, when the philosopher Paul Grice (1913–1988) showed how informal conversational principles guiding the rational and efficient exchange of information add extra content, beyond the linguistic meanings of the sentences uttered, to the messages communicated by ordinary uses of language.18

  Building on this idea, contemporary philosophers of language are looking for more powerful tools to deepen and extend Grice’s insights. How, they are asking, would ideally rational speaker-hearers converge on information asserted or conveyed by utterances of sentences the linguistic meanings of which merely constrain, without fully determining, the contents of communicated messages? Since modern decision and game theory provide mathematically sophisticated models of rational belief and action, these models are directly relevant to answering this question. For this reason, some linguistically and mathematically minded philosophers are now trying to figure out how to transform existing multi-person signaling games to incorporate conventionally meaningful linguistic signals—utterances of sentences of natural languages—in cooperative games in which players maximize benefits by communicating accurate information about the world. If, as seems likely, a productive new line of research emerges from this effort, it will be the latest fundamental contribution to the young sciences of language, mind, and information to which philosophers have already contributed so greatly.

  CHAPTER 8

  THE SCIENCE OF RATIONAL CHOICE

  The rational assessment of actions as means to desired ends performed under conditions of uncertainty; how laws of probability constrain rational action; Ramsey’s philosophical foundations of a general theory of rational decision and action; subjective probability and agent-relative utility; groundbreaking applications in recent social science.

  Along with the world-transforming contributions of philosophy to the mathematical theory of computation leading to the digital age and the birth of the emerging sciences of language, mind, and information, few developments in the philosophy of the last 100 years have had as broad a social impact as the philosophical origins of subjectivist interpretations of probability and their use in modern decision theory. Starting with the Cambridge philosopher F. P. Ramsey in 1926 and continuing through the mid-century work of other important philosophers, the approach has grown exponentially, both in the social sciences and in the new philosophical sub-discipline, formal epistemology.

  The model of rational action and belief resulting from this approach is based on the familiar idea that actions are, typically, the products of belief and desire. We do things to achieve desired ends by performing actions we think are best suited to do so. The propositions we believe represent the state we take the world to be in, while our desires express our preferences among different possible outcomes of the actions we are contemplating. Both are crucial when deliberating what to do. The aim of deliberation is to select the action that makes the best use of the limited information we have in bringing about the results we most desire. A theory of rational decision and action that spells this out is not an attempt to describe the psychological processes agents always, or even typically, undergo in making decisions. It is an idealized model of the rationally optimal thing to do; it aims to identify which action, from an array of possible actions, has the best chance of advancing one’s interests, given one’s information. In short, the theory is normative.

  If it is successful, agents familiar with the theory may sometimes be able to use it as a tool to improve the decisions they would otherwise have made informally. Within limits, the theory can also be used descriptively. If it is true that many individuals and groups are reasonably well attuned to what will advance their interests—no matter what idiosyncratic decision processes they utilize—a normative theory of rational decision can help explain their general behavioral tendencies, over time, in responding to similar reward structures in similar situations. This is how leading social scientists often use decision theory.

  To grasp this, we need to understand how the theory conceives of beliefs and desires as coming in degrees. In the case of desire, the idea is transparent. I want some things more than I want others, and when I want x more than I want y, and I want y more than I want z, I want x more than I want z. Right now, I would rather have two oranges than three bananas, and I would rather have one apple than two oranges; so, I would rather have one apple than three bananas. In the case of belief, we first put aside those propositions of which we are rationally certain—e.g., logical truths or tautologies—as well as their negations—logical falsehoods or contradictions. The remaining propositions—some of which we believe, some of which we neither believe nor disbelieve, and some of which we doubt (more or less strongly)—are all candidates for being true. None may be completely secure and none may be utterly insecure. Our degree of confidence in a proposition p—our credence in p (or, p’s credence for us)—plays an important role in determining the actions we are rationally willing to take based on p. The higher the credence, the more sense it makes to base an action on p. The lower the credence, the less sense it makes to perform an act the effectiveness of which in producing a desirable result depends on the truth of p—unless the value of that result is high enough to compensate for our low credence in p.

  Simple gambles illustrate the point. When rolling a pair of dice, we know in advance that there are 36 possible outcomes—one in which the two face-up sides total 12, two in which they total 11, three in which they total 10, four in which they total 9, five in which they total 8, six in which they total 7, five with a total of 6, and so on. If we know that the dice are fair, we will say the odds of rolling a 7 are 1 to 5, which means that the probability of rolling something other than 7 is five times that of rolling 7, and hence that the probability of 7 is 1/6. This suggests that, other things being equal, we should be willing to take either side of repeated gambles returning $500 on a roll of 7 versus losing $100 on a roll of anything other than 7.
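The dice arithmetic above can be checked by brute enumeration. A minimal sketch in Python (the $500/$100 stakes follow the example in the text; the variable names are illustrative):

```python
from fractions import Fraction
from itertools import product

# Enumerate all 36 equally likely outcomes of rolling two fair dice.
outcomes = [a + b for a, b in product(range(1, 7), repeat=2)]

prob_seven = Fraction(outcomes.count(7), len(outcomes))
print(prob_seven)  # 1/6: six of the 36 configurations total 7

# Expected value of the gamble from the text: win $500 on a roll of 7,
# lose $100 on anything else. A fair bet has expected value zero.
expected = prob_seven * 500 - (1 - prob_seven) * 100
print(expected)  # 0
```

Because the expected value is exactly zero, neither side of the repeated gamble has an edge, which is why one should be willing to take either side, other things being equal.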

  The caveat, other things being equal, is important. When aren’t they equal? When one has moral objections to gambling; when one is risk averse and so would never risk losing a substantial amount; when one regards gambling as thrilling entertainment for which one is willing to pay by accepting slightly lower odds of winning; or when one’s marginal utility for dollars is nonlinear, so that dollars above (or below) a certain amount are worth more, or less, than dollars below (or above) that amount. For example, if you are $500 short of the money to finance an operation needed to save your child’s life, it could be rational for you to wager more than $100 for the chance of winning $500 because 1/500 of the ultimate value you could purchase with those winnings (the life-saving operation) far exceeds the values of the individual dollars wagered attempting to win it. Similarly, if you can’t afford to lose $50, lest you go to prison for defaulting on a loan, it could be rational not to wager that amount, even if offered what would otherwise be favorable odds. In these cases, the dollar amounts of one’s gains or losses are poor measures of the real value of what one wins or loses.
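The life-saving-operation case can be made concrete with a toy utility function. The specific numbers below (a $150 wager, a utility of one million for affording the operation) are illustrative assumptions, not from the text:

```python
from fractions import Fraction

P_WIN = Fraction(1, 6)        # probability of rolling a 7 with fair dice
PRIZE, WAGER = 500, 150       # win $500, or lose the $150 staked

def utility(dollars_gained):
    # Toy nonlinear utility: gaining the full $500 completes payment for a
    # life-saving operation, so reaching +$500 is worth vastly more than
    # the dollars themselves. (The 1_000_000 figure is arbitrary.)
    return 1_000_000 if dollars_gained >= PRIZE else dollars_gained

# Measured in expected dollars, wagering $150 for a 1/6 shot at $500
# looks bad ...
ev_dollars = P_WIN * PRIZE - (1 - P_WIN) * WAGER
print(ev_dollars)   # -125/3, i.e. about -$41.67

# ... but measured in expected utility it is clearly worth taking.
eu = P_WIN * utility(PRIZE) + (1 - P_WIN) * utility(-WAGER)
print(eu > 0)       # True
```

The divergence between the two calculations is exactly the text’s point: dollar amounts are a poor proxy for value when marginal utility is nonlinear.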

  These considerations highlight a limitation on models of rational decision illustrated by simple games of chance. Money alone isn’t always a good measure of the values we place on possible results when deciding what actions to perform. Another limitation, which must be overcome in a general theory, is that the probabilities we assign to propositions needed to assure desired results are often less transparent than they are in simple games of chance. We must overcome these limitations if we are to transform uncontroversial observations about rational betting strategies in such games into general theories of rational decision under uncertainty. As we shall see, this was the locus of important philosophical contributions to modern decision theory.

  First, however, more must be said about why the obvious strategies for simple games of chance are rational. One reason is merely an artifact of the range of possible outcomes defined by the games plus the determination of specific outcomes (e.g., the number showing on the dice) by a procedure that is assumed not to favor one outcome over another (because the dice are assumed to be fair). Within these parameters, we can read off the probabilities of simple propositions from the ratios of outcomes of a given sort (e.g., six ways of rolling a 7) to the totality of possible outcomes (36 configurations). The rationality of taking the probability that the dice will come up 7 to be 1/6 is then a trivial mathematical fact—made true by the rules of the game plus the assumption that the dice are fair.

  But there is also another factor. What constraints does rationality impose on the relations between the probabilities assigned to one set of propositions and those assigned to another set? How must the probabilities of simple propositions be related to the probabilities of complex propositions—negations, conjunctions, disjunctions, universal or existential generalizations, etc.—in order for an overall assignment of probabilities to propositions to be rational? This question is analogous to a question about systems of deductive logic (deriving from Frege). Such systems never specify the truth or falsity of any simple sentences; that is the job of observation, experience, and science. But modern logical systems do tell us which complex sentences must be true (logical truths or tautologies) and which must be false (logical falsehoods or contradictions). They also tell us when collections of sentences are inconsistent, and so can’t all be true, because they violate the laws of logic. In telling us this, systems of deductive logic put constraints on rational beliefs. A similar point can be made about models that evaluate actions based on the desirability of their intended outcomes and the probability that performing them will result in those outcomes. What constraints do they place on assignments of probabilities to propositions guiding action, and why are assignments violating those constraints irrational?

  To pursue this question, we begin by letting the assignment of probabilities to simple (logically independent) propositions be whatever one likes—so long as they are always between zero and one.1 Disallowing any proposition to have a probability exceeding 1, we next state general principles of probability theory deriving from the widely accepted formalization of Kolmogorov.2

  WIDELY ACCEPTED LAWS OF PROBABILITY

  The probability of the negation, ~p, of a proposition p is 1 minus the probability of p.

  The probability of the disjunction p or q is the probability of p plus the probability of q minus the probability that p and q are both true.

  If p and q are incompatible (and so can’t both be true), then the probability of p or q is the probability of p plus the probability of q.

  If neither p nor q entails the other, or its negation, then the probability of the conjunction p and q is the probability of p times the probability of q.

  Let p be a proposition that can be true in either finitely, or infinitely but countably, many incompatible ways. The probability of p is the sum of the probabilities of those ways.3

  The probability of p is the probability of p and q plus the probability of p and ~q.

  If p logically entails q, then the probability of p is less than or equal to the probability of q.

  The probabilities of logically equivalent propositions are identical.
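Each of these laws can be verified mechanically on a finite sample space. A sketch using the two-dice space from earlier, in which a proposition is modeled as the set of outcomes at which it is true (this set-theoretic modeling is an assumption of the sketch, not part of the text):

```python
from fractions import Fraction
from itertools import product

OMEGA = set(product(range(1, 7), repeat=2))   # 36 equally likely rolls

def prob(event):
    # Probability of a proposition, modeled as a set of outcomes.
    return Fraction(len(event), len(OMEGA))

seven = {w for w in OMEGA if sum(w) == 7}
odd   = {w for w in OMEGA if sum(w) % 2 == 1}

# Negation law: prob(~p) = 1 - prob(p)
assert prob(OMEGA - seven) == 1 - prob(seven)

# General disjunction law: prob(p or q) = prob(p) + prob(q) - prob(p and q)
assert prob(seven | odd) == prob(seven) + prob(odd) - prob(seven & odd)

# Partition law: prob(p) = prob(p and q) + prob(p and ~q)
assert prob(seven) == prob(seven & odd) + prob(seven & (OMEGA - odd))

# Entailment law: here "rolled a 7" entails "rolled odd" (every 7 is odd),
# so the probability of the former cannot exceed that of the latter.
assert seven <= odd and prob(seven) <= prob(odd)
```

Modeling entailment as set inclusion makes the last law transparent: a proposition true at fewer outcomes can never be more probable than one true at all those outcomes and more.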

  Another fundamental notion of the probability calculus is the conditional probability of p given q—e.g., the probability that the dice come up 7, given that they come up odd. This probability, represented prob p|q, is not the probability of a single proposition; it is a special measure of the relationship between a pair of propositions. It is the probability of p and q divided by the probability of q (provided that the probability of q isn’t 0). In other words, it is the proportion of cases in which both p and q are true, from among all cases in which q is true. For example, the probability that the dice come up 7 given that they come up odd is the probability that the dice come up both 7 and odd (which is just the probability that they come up 7) divided by the probability that they come up odd. Since the probability of the former is 1/6 and the probability of the latter is 1/2, the probability that they come up 7 given that they come up odd is 1/3. This makes sense because 6 of the 18 combinations in which the dice come up odd are combinations adding to 7.
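The 1/3 figure can be recovered by counting, following the definition of prob p|q as the proportion of q-cases that are also p-cases. A minimal sketch:

```python
from fractions import Fraction
from itertools import product

rolls = list(product(range(1, 7), repeat=2))      # all 36 outcomes
odd   = [r for r in rolls if sum(r) % 2 == 1]     # 18 odd totals
seven_and_odd = [r for r in odd if sum(r) == 7]   # 6 of them total 7

# prob(p|q) = prob(p and q) / prob(q); with equally probable outcomes
# this reduces to a ratio of counts.
cond = Fraction(len(seven_and_odd), len(odd))
print(cond)  # 1/3
```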

  Conditional probabilities are intimately related to fair prices of conditional bets. Consider a bet made by purchasing a ticket that pays $6 if the dice come up 7, conditional on the dice coming up odd. Since the probability that the dice come up 7 is 1/6, $1 is the fair price of an unconditional ticket that pays $6 if they do (giving you a gain of $5). What about a ticket that pays $6 if 7 comes up, conditional on an odd number coming up? If you buy it and roll 7, you win $6 (from which you deduct the cost of the ticket). You lose the price of the ticket if you roll an odd number other than 7, but your purchase price is refunded if you roll an even number. Since the bet eliminates all the rolls in which the dice come up even, which make up half the total combinations, the conditional gamble is twice as valuable as the unconditional gamble. Hence the fair price for it is $2, and the conditional probability of rolling 7 given that the dice come up odd is 1/3 (meaning that a $4 net gain one time will compensate for a pair of $2 losses). This illustrates the rule for conditional probability
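The $2 fair price can be derived by writing out the ticket’s expected net payoff and solving for the price that makes it zero. A sketch of that calculation (the setup is the bet described in the text; exact fractions avoid rounding):

```python
from fractions import Fraction

p_seven = Fraction(1, 6)               # roll a 7: ticket pays $6
p_odd_not7 = Fraction(1, 2) - p_seven  # odd but not 7: the stake is lost
# Even rolls refund the ticket price, so they drop out of the expectation.

# Expected net payoff at a given ticket price:
#   p_seven*(6 - price) + p_odd_not7*(-price)
# Setting this to zero and solving for price gives the fair price.
fair_price = 6 * p_seven / (p_seven + p_odd_not7)
print(fair_price)  # 2

# At that price the winning case nets $4 once for every two $2 losses.
assert p_seven * (6 - fair_price) + p_odd_not7 * (-fair_price) == 0
```

Note that the fair price per dollar of payout, 2/6 = 1/3, is exactly the conditional probability of rolling 7 given odd, which is the connection between conditional bets and conditional probabilities that the passage is drawing.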

 
