L.A. Math: Romance, Crime, and Mathematics in the City of Angels
Now let’s generalize this a little. Suppose that you send a random free-throw shooter to the line N times. You assume that her probability of making each free throw is p and that each shot is independent of the others. The probability that she will make exactly k free throws (and miss exactly N − k) is given by the formula C(N,k) × p^k × (1 − p)^(N−k).
This formula, which is known as the binomial distribution formula, can be used in a much wider context: the binomial distribution is the theoretical probability distribution that arises in what are known as Bernoulli trials.
Bernoulli Trials
Suppose you conduct an experiment that has only two outcomes, one of which you call success, which occurs with probability p, and (naturally enough), the other is called failure, which occurs with probability 1 − p. Suppose further that each trial of this experiment is independent of the other trials; this situation is known as a Bernoulli Trials experiment. The probability of getting exactly k successes in N trials is given by formula [11.4].
P(X = k) = C(N,k) × p^k × (1 − p)^(N−k)
Earlier you obtained the probability of Theresa making exactly eighty of a hundred free throws as C(100,80) × 0.8^80 × 0.2^20 = 0.0993. There is a tremendous amount of calculation involved in obtaining the number 0.0993. (WARNING! Even with a pocket calculator, this calculation is time-consuming. Additionally, if the numbers are not computed in the correct order, it will take you beyond the limit of your calculator; 100! is often beyond the limit of a typical pocket calculator.)
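On a computer the warning is moot; exact integer binomial coefficients sidestep the overflow problem entirely. Here is a quick sketch in Python (my code, not the book's):

```python
from math import comb

# P(X = k) for a Bernoulli-trials experiment: C(N, k) * p^k * (1 - p)^(N - k)
def binomial_pmf(N: int, k: int, p: float) -> float:
    return comb(N, k) * p**k * (1 - p)**(N - k)

# Theresa's chance of making exactly 80 of 100 free throws at p = 0.8
prob = binomial_pmf(100, 80, 0.8)
print(round(prob, 4))  # about 0.0993, matching the value in the text
```

Because `comb` works in exact integer arithmetic, there is no "correct order" to worry about, and 100! never has to be formed on its own.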
Example 4: Assume that Theresa is an 80% foul shooter. What is the probability that she will make at least eighty out of a hundred free throws?
Solution: Using the same reasoning as in example 1, it is easy to see that this probability is the sum of the probabilities that she will make exactly eighty out of a hundred, and the probability that she will make exactly 81 out of a hundred, …, and the probability that she will make exactly a hundred out of a hundred (the NBA record for consecutive free throws, as of 2012, is 97). This number is
C(100,80) × 0.8^80 × 0.2^20 + … + C(100,100) × 0.8^100 × 0.2^0
Computing the value of this number is such a strain that we will temporarily abandon the project until we find a shortcut. ■
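The abandoned sum is, these days, a one-line computation. A sketch (mine, not the book's) of the exact calculation:

```python
from math import comb

def binomial_pmf(N, k, p):
    return comb(N, k) * p**k * (1 - p)**(N - k)

# Sum the exact probabilities for k = 80, 81, ..., 100
at_least_80 = sum(binomial_pmf(100, k, 0.8) for k in range(80, 101))
print(at_least_80)  # a little over one half
```

Of course, in 1944 (or with a pocket calculator) no such one-liner existed, which is why the shortcut below mattered.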
Fortunately, a shortcut looms just over the horizon.
The Normal Approximation to a Binomial Distribution
Mathematicians are no different from anyone else; they like to do as little grunt work as possible. It was discovered more than a century ago that the normal curve provided a good approximation to the binomial distribution as long as Np and N(1 − p) are both greater than 5. The normal curve that best matches the binomial distribution has mean μ = Np and standard deviation σ = √(Np(1 − p)).
Recall that this is exactly what Pete did in the story. Freddy reported back that Theresa had made sixty-six out of a hundred free throws. Freddy’s attitude was, “Oh well, Theresa’s just having an off day.” Pete, however, approximated the binomial distribution with a normal distribution whose mean μ = 100 × 0.8 = 80 and whose standard deviation σ = √(100 × 0.8 × 0.2) = √16 = 4. This was valid because Np = 100 × 0.8 = 80 > 5, and N(1 − p) = 100 × 0.2 = 20, which is greater than 5. Since 66 was 14 below the mean of 80, and 14/4 = 3.5 standard deviations, Pete realized that the probability of Theresa’s having that bad an off day was less than one chance in a thousand, and so something other than an “off day” had to be going on.
To finish off example 4, because 80 is the mean of the distribution, the probability that Theresa will make at least eighty free throws is computed by looking at the normal distribution and seeing what percentage of the distribution lies above 79.5 (in approximating free throws made, which is a discrete variable, by a continuous variable, one thinks of “making precisely eighty free throws” as making more than 79.5 free throws but less than 80.5 free throws). The probability that she will make at least eighty free throws, using this method, is actually about 55% because 79.5 = μ − 0.125 σ.
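Both normal-curve calculations above can be checked with the standard normal CDF, which Python's `math.erf` supplies. A sketch (my code, not the book's):

```python
from math import erf, sqrt

# Standard normal CDF via the error function
def phi(z: float) -> float:
    return 0.5 * (1 + erf(z / sqrt(2)))

mu, sigma = 100 * 0.8, sqrt(100 * 0.8 * 0.2)   # mu = 80, sigma = 4

# Pete's calculation: 66 made is (66 - 80)/4 = -3.5 standard deviations
print(phi((66 - mu) / sigma))        # well under one chance in a thousand

# Example 4 with the continuity correction: P(X >= 80) ~ P(normal > 79.5)
print(1 - phi((79.5 - mu) / sigma))  # about 0.55, as in the text
```

The 3.5-standard-deviation tail comes out near 0.0002, which is why Pete could be so confident something other than an off day was going on.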
APPENDIX 12
GAME THEORY IN “IT’S ALL IN THE GAME”
In the story, Freddy found himself faced with a choice of two possible actions, or strategies: He could either call Lisa, or he could decide not to call her. How his choice worked out did not depend solely on what strategy he chose, but also on how Lisa felt. Lisa either wanted Freddy to call or she didn’t, but it is probably incorrect to say that she is an opponent bent upon making life as miserable for Freddy as possible. When the opposing actions are not the result of a conscious choice, they are usually referred to as states, rather than strategies. The payoff Freddy received (which is measured in what might be called relative happiness points, with a high of 10 and a low of 0) depended on the strategy he chose and the state Lisa happened to be in. Since each of the two principals has two possible choices, this is called a 2 × 2 game. If Freddy expanded his list of strategies to include a surprise visit to New York, the result would be termed a 3 × 2 game. It is customary to use the first of the two numbers to denote the number of strategies available to the player from whose standpoint we measure the payoffs.
Game theory was started in the 1920s by the mathematician Emile Borel, although the first significant work was a book published in 1944 called Theory of Games and Economic Behavior, by John von Neumann and Oskar Morgenstern. Because many conflict situations arising in war can be conveniently formulated in the framework of game theory, it was intensively investigated during World War II. However, game theory has found a large number of applications to areas probably not considered by its founders. This is characteristic of mathematics—applications pop up in surprising places.
2 × 2 GAMES
(2 × 2 games continued from p. 112)
Saddle Points
Let’s take another look at the diagram that Pete made for Dolores.
If Dolores auctions the card at Classic, the worst that can happen is that she receives twenty points (recall that each point is equal to $1,000). This number is called a row minimum (it is the minimum of all the numbers in the first row). Similarly, if she auctions the card at Vintage, the worst that can happen is that she receives thirty points. The “best of the worsts,” the larger of the two row minimums, is thirty points. This number is called the maxmin, which is an abbreviation for the maximum of the row minimums. The maxmin of thirty points occurs when she auctions the card at Vintage.
Now look at the problem of whether the card will be purchased for the reserve price or for something higher. If it is auctioned at the reserve price, the worst that could happen is if the card was auctioned at Vintage, and it would cost the buyer thirty points. This number is called a column maximum (it is the maximum of all the numbers in the first column). If the card is purchased for a high price, the column maximum is a hundred points. From the standpoint of potential buyers, who do not know whether the card will go for a reserve price or a high price, the “best of the worsts,” the smaller of the two column maximums, is thirty points. This number is called the minmax, which is an abbreviation for the minimum of the column maximums. The minmax of thirty points occurs when the card goes for the reserve price.
Originally, game theory was devised under the assumption that two intelligent rivals each could choose a particular strategy. The contestants were simply described as Red and Blue, and the strategies as Red 1 and Red 2, Blue 1 and Blue 2. Each side could view the diagram and decide which strategy to choose, much as the two contestants in rock–paper–scissors are free to make their choices. Dolores is in such a position; she is free to sell her card at either auction house.
In this case, what Dolores faces is not a flesh-and-blood rival but a situation. She has no way of knowing whether the card will sell for a high price or a reserve one. However, if she looks at the game as if the world were choosing the best way to thwart her desires, this is the way she would analyze it. The world knows that Dolores’ maxmin calls for her to auction the card at Vintage, and she knows that its minmax occurs if the card goes for the reserve price. Even if she tells the world she will auction the card at Vintage, she cannot be prevented from making at least $30,000. Even if the world lets her know that the card is going for the reserve price, she cannot make more than $30,000. This number, $30,000, is the value of the game.
You may be a little worried that the real world does not conspire in such a fashion as to do its best to ensure that the card will sell at the reserve price. That’s certainly true. Nonetheless, there’s no way she can tell—the card may not interest any buyer. Markets for collectibles are frequently misjudged. Dolores has to choose her strategy based on the assumption that she has a rational opponent (that was one of the original assumptions of game theory)—and she will accrue added profit if her opponent deviates from the best strategy. No one ever does well in a game by depending on one’s opponent to make a mistake. You can hope your opponent makes a mistake, but it’s better to plan on the assumption that he or she won’t.
Suppose that each side plays its best strategy: Dolores auctioning the card at Vintage and the buyers purchasing it for the reserve price. If either side deviates while the other side plays its best strategy, the side that deviates loses. If Dolores sells at Classic for the reserve price, she loses by doing so. If the card commands a high price at Vintage, Dolores gains (and the world of buyers loses) as a result. This is characteristic of games in which the maxmin is equal to the minmax; the side that deviates from its best strategy loses.
When the maxmin and the minmax have the same value, the game is said to have a saddle point (the rider of a horse sits at a place that is simultaneously a maximum and a minimum—the lowest point from the back of the horse to the front, and the highest point from the left side of the horse to the right). In this case, each player elects to follow the single strategy (called a pure strategy) that guarantees that the maxmin and minmax are both achieved, and the value of the game is always the maxmin (or the minmax, depending on which of the two opponents you are).
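Checking for a saddle point is mechanical enough to sketch in a few lines of Python (my code, not the book's). Three of the payoffs below (20, 30, and the column maximum of 100) match the figures quoted for Dolores's game; the Vintage/high-price entry of 40 is my own guess, since the full diagram is not reproduced here.

```python
# Check a 2x2 payoff matrix (row player wants high, column player low)
# for a saddle point: maxmin of the rows equals minmax of the columns.
def saddle_point(m):
    maxmin = max(min(row) for row in m)
    minmax = min(max(m[0][j], m[1][j]) for j in range(2))
    return maxmin if maxmin == minmax else None

game = [[20, 100],    # Classic:  reserve price, high price
        [30, 40]]     # Vintage:  reserve price, high price (40 is assumed)
print(saddle_point(game))  # 30: maxmin = minmax, so pure strategies suffice
```

When the function returns `None`, the game has no saddle point and the mixed-strategy analysis of the next section is needed.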
Mixed Strategies
Now let’s look at the situation that Freddy encountered. The diagram is repeated below for convenience, but the analysis will be a little clearer if we just look at the classic situation in which there are two intelligent opponents, Red and Blue, each of whom has a choice of two strategies. Remember that high scores are good for Red, low scores are good for Blue.
The first row minimum is 0, the second is 2, so the maxmin is 2. The first column maximum is 10, the second is 7, so the minmax is 7. Let’s suppose that Red elects Strategy 2 in order to choose his maxmin strategy and Blue also elects Strategy 2, choosing his minmax strategy. The payoff is 7 points—but Blue can improve his score by switching from Strategy 2 to Strategy 1 if Red sticks with Strategy 2. In fact, no matter which of the four possible choices are made (Red 1 vs. Blue 2, etc.), one side always benefits from deviating if the other side continues to play the same strategy. This is the type of thing that’s seen in repeated plays of rock–paper–scissors; one side always benefits from switching its strategy if the other side continues to play the same strategy.
This type of situation was analyzed in the story when Freddy was trying to decide whether or not to call Lisa. If there is no saddle point, each side’s best long-term procedure is to adopt a strategy whose expectation is the same against either of its opponent’s options. Freddy played the role of Red in the story, so let’s suppose that Red elects to play Strategy 1 randomly with probability p, and Strategy 2 randomly with probability 1 − p. We can compute Red’s expectation against either of Blue’s strategies.
Against Blue 1, Red wins 10 points with probability p and 2 points with probability of 1 − p, for an expectation of 10p + 2(1 − p) = 8p + 2.
Against Blue 2, Red wins 0 points with probability p and 7 points with probability of 1 − p, for an expectation of 0p + 7(1 − p) = 7 − 7p.
If Red equates these two expectations, nothing Blue can do will prevent Red in the long run from achieving the equated expectation. Solving 8p + 2 = 7 − 7p gives p = 1/3. Against Blue 1, Red therefore has an expectation of 8 × 1/3 + 2 = 4⅔ points, and against Blue 2, Red has an expectation of 7 − 7 × 1/3 = 4⅔ points, just as in the story.
Analyzing 2 × 2 Games
Step 1: Find the minmax and the maxmin. If these two are equal, the game has a saddle point, and each side should play a pure strategy. The row player should play the strategy with the higher minimum, and the column player should play the strategy with the lower maximum. The value of the game is the maxmin (or the minmax, as they are the same).
Step 2: If the game does not have a saddle point, assume that the row player plays one row with probability p and the other with probability 1 − p. Compute the expectation of this strategy against each of the column player’s two strategies. Equate these two expectations to determine the value of p. The value of the game can be determined by computing the expectation against either strategy using the value of p just determined.
The column player should go through a similar analysis.
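The two steps above can be carried out directly in Python. A sketch (my code, not the book's), using exact fractions so that p = 1/3 comes out exactly:

```python
from fractions import Fraction

# Steps 1-2 for a 2x2 game; payoffs are from the row player's standpoint.
def solve_2x2(m):
    maxmin = max(min(row) for row in m)
    minmax = min(max(m[0][j], m[1][j]) for j in range(2))
    if maxmin == minmax:
        return {"saddle": True, "value": maxmin}
    # Row player plays row 1 with probability p; equating expectations
    # against the two columns: a*p + c*(1-p) = b*p + d*(1-p)
    a, b = m[0]
    c, d = m[1]
    p = Fraction(d - c, (a - c) + (d - b))
    return {"saddle": False, "p": p, "value": a * p + c * (1 - p)}

# Freddy's game from the story, with Red as the row player
print(solve_2x2([[10, 0], [2, 7]]))  # p = 1/3, value = 14/3
```

For the story's matrix this reproduces the equation 8p + 2 = 7 − 7p and the expectation of 4⅔ points.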
APPENDIX 13
ELECTIONS IN “DIVISION OF LABOR”
It has long been a dream of social scientists to come up with an ideal system for translating individual preferences into the preferences of society as a whole. Arrow’s Theorem, which was proved by Kenneth Arrow while he was still a graduate student, shows that this cannot be accomplished.
VOTING METHODS AND ARROW’S THEOREM
(Voting methods continued from p. 120)
You saw three different voting schemes in the Bankers Club Election. Assume that every voter submits a ballot that has a preference listing among all the candidates. For example, if a voter submits a ballot that has Ackroyd as the first choice, Williams as the second, and Morris as the third, then if Ackroyd is not elected, the voter prefers Williams to Morris. Ties are not permitted.
For convenience, the result of the Bankers Club election is reprinted below.
Election Scheme 1: Plurality. The candidate who receives the most first-place votes wins.
In the story, this was Forrest Ackroyd’s preferred voting method. This has the advantage that it is generally easy to compute and rarely results in ties (especially if there are a large number of voters). The disadvantage of this scheme is that it is possible for the winner to be loved by a minority of the electorate and vigorously detested by everyone else. It appears that this is how the electorate at the Bankers Club felt about Ackroyd.
Election Scheme 2: Runoff. Eliminate all candidates except those who have the two highest first-place totals and then have a secondary election between them.
This was how Helen Williams felt the election should be decided. One of the difficulties with this method is that it lends itself to insincere voting. An example of insincere voting could be seen in a four-person race decided by the runoff method, in which there is a clear front-runner, a close contest for the second slot, and a splinter candidate. The splinter candidate could wield a lot of clout by telling his or her supporters to throw their votes to one of the two candidates who are in contention for the second position on the post-runoff ballot. This frequently happens in the real world. Sincere voting occurs when each individual lists his or her preferences, letting the chips fall where they may.
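The runoff rule itself is easy to sketch in Python (my code, with made-up ballots, since the Bankers Club ballots are not reproduced here). Notice how the splinter candidate's supporters decide the head-to-head round:

```python
from collections import Counter

# Runoff: keep the two candidates with the most first-place votes,
# then decide head-to-head. Each ballot ranks all candidates, best first.
def runoff(ballots):
    firsts = Counter(b[0] for b in ballots)
    (x, _), (y, _) = firsts.most_common(2)
    # Each ballot goes to whichever finalist it ranks higher
    x_votes = sum(1 for b in ballots if b.index(x) < b.index(y))
    return x if x_votes > len(ballots) - x_votes else y

# Hypothetical electorate: A leads on first places, C is the splinter
ballots = (8 * [("A", "C", "B")] +
           7 * [("B", "C", "A")] +
           6 * [("C", "B", "A")])
print(runoff(ballots))  # A and B reach the runoff; C's voters swing it to B
```

Here A tops the first round 8 to 7, but C's six supporters all prefer B, so B wins the runoff 13 to 8, which is exactly the clout the splinter candidate can trade on.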
Election Scheme 3: Borda count. An arithmetical weighting scheme is devised for first place, second place, third place, …, with more points given for higher placings (first place gets more points than second, etc.). The winner has the highest total (or highest average per voter).
In its favor, Borda counts reflect how each voter feels about each candidate. One disadvantage of Borda counts is that different Borda counting schemes can result in different winners!
Example 1: Suppose that there are three candidates in an election, whom we shall call A, B, and C, and the balloting produces the following results:
Compute the results of the election by the Borda count method (a) if the weighting scheme is 3–2–1, and (b) if the weighting scheme is 5–3–2.
Solution: This is basically simple arithmetic. With a 3–2–1 weighting scheme, the point totals are
So B is the winner. However, with a 5–3–2 weighting scheme, the point totals are
In this case, C is the winner. ■
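Since the ballot table for example 1 is not reproduced here, the sketch below (my code, with a hypothetical profile, not the book's data) exhibits the same reversal: B wins under 3–2–1 and C wins under 5–3–2.

```python
# Borda count: each (ranking, count) pair is `count` identical ballots.
def borda(ballots, weights):
    totals = {}
    for ranking, count in ballots:
        for place, cand in enumerate(ranking):
            totals[cand] = totals.get(cand, 0) + count * weights[place]
    return totals

ballots = [(("C", "B", "A"), 4),   # 4 voters rank C > B > A
           (("B", "A", "C"), 2),   # 2 voters rank B > A > C
           (("A", "B", "C"), 1)]   # 1 voter ranks A > B > C
print(borda(ballots, (3, 2, 1)))   # B: 16, C: 15, A: 11 -> B wins
print(borda(ballots, (5, 3, 2)))   # C: 26, B: 25, A: 19 -> C wins
```

The reversal happens because 5–3–2 widens the gap between first and second place, rewarding C's four first-place finishes more than B's steady seconds.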
The Paradox of Transitivity
It has been known for at least a century that difficulties can arise in certain situations. One obvious property that individual preferences display is that, if the individual prefers alternative A to alternative B, and that same individual also prefers alternative B to alternative C, then that individual must prefer alternative A to alternative C. This is called transitivity. Numbers display a similar type of transitivity: if a > b and b > c, then it follows that a > c.
However, transitivity of individual preferences does not produce transitivity of the preferences of a majority of society! Look at the following example.
Example 2: Suppose that A, B, and C are candidates in an election and that the voters cast their ballots as follows:
Notice that A is preferred to B by a majority of voters (20 to 10), and similarly B is preferred to C by a majority of voters (21 to 9). If an individual exhibited these preferences (A over B and B over C), we would be justified in concluding that he or she preferred A to C. However, when a majority prefers A to B and B to C, we cannot reach the same conclusion, for in this example, a majority of voters prefers C to A (by 19 to 11).
This example simply illustrates that there is something wrong with our intuitive ideas about how the preferences of the majority can be deduced from the preferences of individuals.
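The tallies in example 2 are consistent with the classic three-ballot cycle, and the majority comparisons are easy to verify in Python (my reconstruction, since the ballot table is not reproduced here): 11 voters rank A > B > C, 10 rank B > C > A, and 9 rank C > A > B.

```python
# A 30-voter profile consistent with the tallies in example 2
ballots = (11 * [("A", "B", "C")] +
           10 * [("B", "C", "A")] +
            9 * [("C", "A", "B")])

def prefers(x, y):
    """Number of voters ranking x above y."""
    return sum(1 for b in ballots if b.index(x) < b.index(y))

print(prefers("A", "B"), prefers("B", "A"))  # 20 10: A beats B
print(prefers("B", "C"), prefers("C", "B"))  # 21  9: B beats C
print(prefers("C", "A"), prefers("A", "C"))  # 19 11: yet C beats A
```

Every individual ballot is transitive, yet the majority relation runs in a circle, which is precisely the paradox.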