A Beautiful Math


by Tom Siegfried


  It turns out that in some games, you may indeed find one pure strategy that will maximize your winnings (or minimize your losses) no matter what the other player does. Obviously, then, you would play that strategy, and if the game is repeated, you would play the same strategy every time. But sometimes, depending on the rules of the game, your wisest choice will depend on what your opponent does, and you might not know what that choice will be. That's where game theory gets interesting.

  Let's look at an easy example first. Say that Bob owes Alice $10. Bob proposes a game whereby if he wins, his debt will be reduced. (In the real world, Alice will tell Bob to take a hike and fork over the $10.) But for purposes of illustrating game theory, she might agree.

  Bob suggests these rules: He and Alice will meet at the library. If he gets there first, he pays Alice $4; if she gets there first, he pays her $6. If they both arrive at the same time, he pays $5. (As I said, Alice would probably tell him to shove it.)

  Now, let's say they live together, or at least live next door to each other. They both have two possible strategies for getting to the library—walking or taking the bus. (They are too poor to own a car, which is why Bob is haggling over the $10.) And they both know that the bus will always beat walking. So this game is trivial—both will take the bus, both will arrive at the same time, and Bob will pay Alice $5.23

  And here's how game theory shows what strategy to choose: a "payoff matrix." The numbers show how much the player on the left (Alice) wins:

                     Bob walks    Bob rides bus
  Alice walks           $5             $4
  Alice rides bus       $6             $5

  In a zero-sum game, the numbers in a payoff matrix designate how much the person on the left (in this case, Alice) wins (since it's zero sum, the numbers tell how much the player on top, Bob, loses). If the number is negative, that means the player on top wins that much (negative numbers signaling a loss for Alice). In non-zero-sum games, each matrix cell will include two numbers, one for each player (or more if there are more players, which makes it very hard to show the matrix for multiplayer games).

  Obviously, Alice must choose the bus strategy because it always does as well as or better than walking, no matter what Bob does. And Bob will choose the bus also, because it minimizes his losses, no matter what Alice does. Walking can do no better and might be worse.
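The dominance argument in the last two paragraphs can be checked mechanically. A minimal sketch (not from the book), with the payoff matrix rebuilt from the stated rules:

```python
# Payoff matrix for Bob's library game, reconstructed from the stated
# rules: entries are what Bob pays Alice. The bus always arrives first,
# ties pay $5, Bob arriving first pays $4, Alice arriving first pays $6.
A = {('walk', 'walk'): 5, ('walk', 'bus'): 4,
     ('bus',  'walk'): 6, ('bus',  'bus'): 5}

strategies = ('walk', 'bus')

# Alice (the maximizer): "bus" dominates "walk" if it pays her at least
# as much against every choice Bob can make.
alice_bus_dominates = all(A[('bus', b)] >= A[('walk', b)] for b in strategies)

# Bob (the minimizer): "bus" dominates if it costs him no more against
# every choice Alice can make.
bob_bus_dominates = all(A[(a, 'bus')] <= A[(a, 'walk')] for a in strategies)

print(alice_bus_dominates, bob_bus_dominates)  # True True: both take the bus
```

Since each player has a dominant strategy, the game settles at the bus/bus cell and Bob pays $5, exactly as the text concludes.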

  Of course, you didn't need game theory to figure this out. So let's look at another example, from real-world warfare, a favorite of game theory textbooks.

  In World War II, General George Kenney knew that the Japanese would be sending a convoy of supply ships to New Guinea. The Allies naturally wanted to bomb the hell out of the convoy. But the convoy would be taking one of two possible routes—one to the north of New Britain, one to the south.

  Either route would take three days, so in principle the Allies could get in three days' worth of bombing time against the convoy. But the weather could interfere. Forecasters said the northern route would be rainy one of the days, limiting the bombing time to a maximum of two days. The southern route would be clear, providing visibility for three days of bombing. General Kenney had to decide whether to send his reconnaissance planes north or south. If he sent them south and the convoy went north, he would lose a day of bombing time, leaving only one of the two bombing days available on that route. If the recon planes went north, the bombers would still have time to get two bombing days in if the convoy went south.

  So the "payoff" matrix looks like this, with the numbers giving the Allies' "winnings" in days of bombing:

                     Convoy sails north    Convoy sails south
  Recon planes north          2                     2
  Recon planes south          1                     3

  If you just look at this game matrix from the Allies' point of view, you might not see instantly what the obvious strategy is. But from the Japanese side, you can easily see that going north is the only move that makes sense. If the convoy took the southern route, it was guaranteed to get bombed for two days and maybe even three. By going north, it would get a maximum of two days (and maybe only one), as good as or better than any of the possibilities going south. General Kenney could therefore confidently conclude that the Japanese would go north, so the only logical Allied strategy would be to send the reconnaissance planes north as well. (The Japanese did in fact take the northern route and suffered heavy losses from the Allied bombers.)
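General Kenney's reasoning is a worst-case calculation on both sides, and it can be sketched in a few lines (an illustration, not from the book):

```python
# Bombing days available to the Allies, from the text: rows are where
# Kenney sends reconnaissance, columns are the route the convoy takes.
days = {('north', 'north'): 2, ('north', 'south'): 2,
        ('south', 'north'): 1, ('south', 'south'): 3}
routes = ('north', 'south')

# The Japanese minimize their worst-case exposure; the Allies maximize
# their guaranteed bombing time. The two choices meet at a saddle point.
japanese = min(routes, key=lambda c: max(days[(r, c)] for r in routes))
allies   = max(routes, key=lambda r: min(days[(r, c)] for c in routes))

print(allies, japanese, days[(allies, japanese)])  # north north 2
```

Both worst-case calculations point north, so neither side can gain by deviating: the game has a saddle point at two days of bombing.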

  Proper strategies are not, of course, always so obvious. Let's revisit Alice and Bob and see what happened after Alice refused to play Bob's stupid game. Knowing that she was unlikely ever to get her whole $10 back, she proposed another game that would cause Bob to scratch his head about what strategy to play.

  In Alice's version of the game, they go to the library every weekday for a month. If they both ride the bus, Bob pays Alice $3. If they both walk, Bob pays Alice $4. If Bob rides the bus and Alice walks, arriving second, Bob pays $5. If Bob walks and Alice rides the bus, arriving first, Bob pays $6. If you are puzzled, don't worry. This game puzzles Bob, too. Here's the matrix.

                     Bob walks    Bob rides bus
  Alice walks           $4             $5
  Alice rides bus       $6             $3

  Bob realizes there is no simple strategy for playing this game. If he rides the bus, he might get off paying only $3. But Alice, realizing that, will probably walk, meaning Bob would have to pay her $5. So Bob might decide to walk, hoping to pay only $4. But then Alice might figure that out and ride the bus, so Bob would have to pay her $6. Neither Bob nor Alice can be sure of what the other will do, so there is no obvious "best" strategy.

  Remember, however, that Alice required the game to be played repeatedly, say for a total of 20 times. Nothing in the rules says you have to play the same strategy every day. (If you did, that would be a pure strategy—one that never varied.) To the contrary, Alice realizes that she should play a mixed strategy—some days walking, some days riding the bus. She wants to keep Bob guessing. Of course, Bob wants to keep Alice guessing too. So he will take a mixed strategy approach also.

  And that was the essence of von Neumann's ingenious insight. In a two-person zero-sum game, you can always find a best strategy—it's just that in many cases the best strategy is a mixed strategy.

  In this particular example, it's easy to calculate the best mixed strategies for Alice and Bob. Remember, a mixed strategy is a mix of pure strategies, each to be chosen a specific percentage of the time (or in other words, with a specific probability).24 So Bob wants to compute the ratio of the percentages for choosing "walk" versus "bus," using a recipe from an old game theory book that he found in the library.25 Following the book's advice, he compares the payoffs for each choice when Alice walks (the first row of the matrix) to the values when Alice takes the bus (the second row of the matrix), subtracting the payoffs in the second row from those in the first. (The answers are -2 and 2, but the minus sign is irrelevant.) Those two numbers determine the best ratio for Bob's two strategies—2:2, or 50-50. (Note, however, that it is the number in the second column that determines the proportion for the first strategy, and the number in the first column that determines the proportion for the second strategy. It just so happened that in this case the numbers are equal.) For Alice, on the other hand, subtracting the second column from the first column gives -1 and 3 (or 1 and 3, ignoring the minus sign). So she should play the first strategy (walk) three times as often as the second strategy (bus).26

  Consequently, Alice should ride the bus one time in four and walk three-fourths of the time. Bob should ride the bus half the time and walk half the time. Both should decide which strategy to choose by using some suitable random-choice device. Bob could just flip a coin; Alice might use a random number table, or a game spinner with three-fourths of the pie allocated to walking and one-fourth to the bus.27 If either Alice or Bob always walked (or took the bus), the other would be able to play a more profitable strategy.
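For readers who want to try the "oddments" recipe themselves, here is a sketch of the arithmetic (written for this illustration, not taken from any book Bob found), using exact fractions:

```python
from fractions import Fraction as F

# Alice's game as a matrix of what Bob pays her: rows are Alice's
# choices (walk, bus), columns are Bob's choices (walk, bus).
A = [[F(4), F(5)],
     [F(6), F(3)]]

# Oddments: subtract row 2 from row 1 (for Bob) and column 2 from
# column 1 (for Alice). Each player's first strategy gets the *other*
# difference, with minus signs ignored.
row_diff = [A[0][j] - A[1][j] for j in range(2)]   # [-2, 2] -> Bob
col_diff = [A[i][0] - A[i][1] for i in range(2)]   # [-1, 3] -> Alice

bob_odds   = [abs(row_diff[1]), abs(row_diff[0])]  # walk : bus = 2 : 2
alice_odds = [abs(col_diff[1]), abs(col_diff[0])]  # walk : bus = 3 : 1

alice_p = [o / sum(alice_odds) for o in alice_odds]  # walk 3/4, bus 1/4
bob_p   = [o / sum(bob_odds)   for o in bob_odds]    # 50-50

# Sanity check: Alice's mix earns the same against either Bob choice.
value_vs = [sum(alice_p[i] * A[i][j] for i in range(2)) for j in range(2)]
print(alice_p, bob_p, value_vs)  # value is $4.50 per day either way
```

The check at the end is the hallmark of an optimal mix: once Alice plays 3:1 walk-to-bus, her expected take is $4.50 a day no matter what Bob does, and Bob's 50-50 mix likewise pins his expected loss at $4.50.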

  So you have to keep your opponent guessing. And that's why game theory prescribes bluffing when you play poker. If you always raise when dealt a good hand but never when dealt a poor hand, your opponents will be able to figure out what kind of a hand you hold.

  Real poker is too complicated for an easy game theory analysis. But consider a simple two-player version of poker, where Bob and Alice are each dealt a single card, and black always beats red.28 Before the cards are dealt, each player antes $5, so there is $10 in the pot. Alice then plays first, and she may either fold or bet an additional $3. If she folds, both players turn over their cards, and whoever holds a black card wins the pot. (If both have black or both have red, they split the pot.)

  If Alice wagers the additional $3, Bob can then either match the $3 and call (making a total of $16 in the pot) or fold. If he folds, Alice takes the $13 in the pot; if he calls, they turn over their cards to see who wins the $16.

  You'd think, at first, that if Alice had a red card she'd simply pass and hope that Bob also had red. But if she bets, Bob might think she must have black. If he has red, he might fold—and Alice will win with a red card. Bluffing sometimes pays off. On the other hand, Bob knows that Alice might be bluffing (since she is not a Vulcan), and so he may go ahead and call.

  The question is, how often should Alice bluff, and how often should Bob call her (possible) bluff? Maybe von Neumann could have figured that out in his head, but I think most people would need game theory.

  A matrix for this game would show that both players can choose from four strategies. Alice can always pass, always bet, pass with red and bet with black, or bet with red and pass with black. Bob can always fold, always call, fold with red and call with black, or fold with black and call with red. If you calculate the payoffs, you will see that Alice should bet three-fifths of the time no matter what card she has; the other two-fifths of the time she should bet only if she has black. Bob, on the other hand, should call Alice's bet two-fifths of the time no matter what card he has; three-fifths of the time he should fold if he has red and call if he has black.29 (By the way, another thing game theory can show you is that this game is stacked in favor of Alice, if she always goes first. Playing the mixed strategies dictated by the game matrix assures her an average of 30 cents per hand.)
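The stated strategies and the 30-cent edge can be verified by brute force. A sketch, assuming each player's card is independently black or red with equal probability (a simplification of dealing from a real deck, where the colors would be slightly correlated):

```python
from fractions import Fraction as F

# Net dollars to Alice for one deal. "Passing" means a showdown for the
# $10 pot ($5 of which is Bob's); betting adds $3, which Bob may call or fold.
def payoff(alice_bets, bob_calls, a_card, b_card):
    def showdown(stake):  # winner's net gain; matching colors split the pot
        if a_card == b_card:
            return F(0)
        return stake if a_card == 'black' else -stake
    if not alice_bets:
        return showdown(F(5))
    if not bob_calls:
        return F(5)            # Alice collects Bob's $5 ante
    return showdown(F(8))      # $16 pot, $8 of it Bob's

# The two pure strategies each player actually mixes over, with the
# probabilities given in the text.
alice_mix = {'always_bet': F(3, 5), 'bet_black_only': F(2, 5)}
bob_mix   = {'always_call': F(2, 5), 'call_black_only': F(3, 5)}
bets  = {'always_bet': lambda c: True, 'bet_black_only': lambda c: c == 'black'}
calls = {'always_call': lambda c: True, 'call_black_only': lambda c: c == 'black'}

value = F(0)
for sa, pa in alice_mix.items():
    for sb, pb in bob_mix.items():
        for a in ('black', 'red'):       # cards modeled as independent
            for b in ('black', 'red'):   # 50-50 black/red draws
                value += pa * pb * F(1, 4) * payoff(bets[sa](a), calls[sb](b), a, b)

print(value)  # 3/10: Alice averages 30 cents per hand
```

Running the enumeration reproduces the book's figure exactly: with both sides playing their prescribed mixes, Alice's expected profit is $0.30 per hand.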

  The notion of a mixed strategy, using some random method to choose from among the various pure strategies, is the essence of von Neumann's proof of the minimax theorem. By choosing the correct mixed strategy, you can guarantee the best possible outcome you can get—if your opponent plays as well as possible. If your opponent doesn't know game theory, you might do even better.
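Von Neumann's theorem guarantees such a strategy exists for every two-person zero-sum game. One standard way to find it numerically (a later method, not described in the book) is fictitious play, sketched here on Alice's library game:

```python
# Fictitious play (Brown's iterative method, proved convergent for
# zero-sum games by Julia Robinson): each side repeatedly best-responds
# to the opponent's empirical mix of past moves. Matrix is Alice's game
# from the text: rows = Alice (walk, bus), columns = Bob (walk, bus).
A = [[4.0, 5.0],
     [6.0, 3.0]]

row_counts = [1, 0]   # arbitrary opening moves
col_counts = [1, 0]
for _ in range(100_000):
    # Alice best-responds to Bob's history; Bob best-responds to Alice's.
    best_row = max((0, 1), key=lambda i: sum(A[i][j] * col_counts[j] for j in (0, 1)))
    best_col = min((0, 1), key=lambda j: sum(A[i][j] * row_counts[i] for i in (0, 1)))
    row_counts[best_row] += 1
    col_counts[best_col] += 1

alice_mix = [c / sum(row_counts) for c in row_counts]
# The learned mix guarantees (nearly) the game's value of $4.50: its
# worst-case expected payoff over Bob's two columns.
guaranteed = min(sum(alice_mix[i] * A[i][j] for i in (0, 1)) for j in (0, 1))
print(alice_mix, guaranteed)   # close to [0.75, 0.25] and 4.5
```

The empirical frequencies drift toward the optimal 3:1 walk-to-bus mix, illustrating the minimax idea: the correct mixed strategy locks in the best guaranteed outcome against a perfect opponent.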

  BEYOND GAMES

  Game theory was not supposed to be just about playing poker or chess, or even just about economics. It was about making strategic decisions—whether in the economy or in any other realm of real life. Whenever people compete or interact in pursuit of some goal, game theory describes the outcomes to be expected by the use of different strategies. If you know what outcome you want, game theory dictates the proper strategy for achieving it. If you believe that people interacting with other people are all trying to find the best possible strategy for achieving their desires, it makes sense that game theory might potentially be relevant to the modern idea of a Code of Nature, the guide to human behavior.

  In their book, von Neumann and Morgenstern did not speak of a "Code of Nature," but did allude to game theory as a description of "order of society" or "standard of behavior" in a social organization. And they emphasized how a "theory of social phenomena" would require a different sort of math from that commonly used in physics—such as the math of game theory. "The mathematical theory of games of strategy," they wrote, "gains definitely in plausibility by the correspondence which exists between its concepts and those of social organizations."30

  In its original form, though, game theory was rather limited as a tool for coping with real-world strategic problems. You can find examples of two-person zero-sum games in real life, but they are typically either so simple that you don't need game theory to tell you what to do, or so complicated that game theory can't incorporate all the considerations.

  Of course, expecting the book that introduces a new field to solve all of that field's problems would be a little unrealistic. So it's no surprise that in applying game theory to situations more complicated than the two-person zero-sum game, von Neumann and Morgenstern were not entirely successful. But it wasn't long before game theory's power was substantially enhanced, thanks to the beautiful math of John Forbes Nash.

  * * *

  3

  Nash's Equilibrium

  Game theory's foundation

  Nash's theory of noncooperative games should now be recognized as one of the outstanding intellectual advances of the twentieth century … comparable to that of the discovery of the DNA double helix in the biological sciences.

  —economist Roger Myerson

  As letters of recommendation go, it was not very elaborate, just a single sentence: "This man is a genius."

  That was how Carnegie Tech professor R. L. Duffin described John Nash to the faculty at Princeton University, where Nash entered as a 20-year-old graduate student in 1948. Within two years, Duffin's assessment had been verified. Nash's "beautiful mind" had by then launched an intellectual revolution that eventually propelled game theory from the fad du jour to the foundation of the social sciences.

  Shortly before Nash's arrival at Princeton, von Neumann and Morgenstern had opened a whole new continent for mathematical exploration with the groundbreaking book Theory of Games and Economic Behavior. It was the Louisiana Purchase of economics. Nash played the role of Lewis and Clark.

  As it turned out, Nash spent more time in the wilderness than Lewis and Clark did, as mental illness robbed the very man whose math captured rationality's essence of his own rationality. But before his prolonged departure, Nash successfully steered game theory toward the mathematical equivalent of manifest destiny. Though not warmly welcomed at first, Nash's approach to game theory eventually captured a major share of the economic-theory market, leading to his Nobel Prize for economics in 1994. By then game theory had also conquered evolutionary biology and invaded political science, psychology, and sociology. Since Nash's Nobel, game theory has infiltrated anthropology and neuroscience, and even physics. There is no doubt that game theory's wide application throughout the intellectual world was made possible by Nash's math.

  "Nash carried social science into a new world where a unified analytical structure can be found for studying all situations of conflict and cooperation," writes University of Chicago economist Roger Myerson. "The theory of noncooperative games that Nash founded has developed into a practical calculus of incentives that can help us to better understand the problems of conflict and cooperation in virtually any social, political, or economic institution."1

  So it's not too outrageous to suggest that in a very real way, Nash's math provides the foundation for a modern-day Code of Nature. But of course it's not as simple as that. Since its inception, game theory has had a complicated and controversial history. Today it is worshiped by some but still ridiculed by others. Some experimenters claim that their results refute game theory; others say the experiments expand game theory and refine it. In any event, game theory has assumed such a prominent role in so many realms of science that it can no longer intelligently be ignored, as it often was in its early days.

  IGNORED AT BIRTH

  When von Neumann and Morgenstern introduced game theory as the math for economics, it made quite a splash. But most economists remained dry. In the mid-1960s, the economics guru Paul Samuelson praised the von Neumann–Morgenstern book's insight and impact—in other fields. "The book has accomplished everything except what it started out to do—namely, revolutionize economic theory," Samuelson wrote.2

  It's not that economists hadn't heard about it. In the years following its publication, Theory of Games and Economic Behavior was widely reviewed in social science and economics journals. In the American Economic Review, for example, Leonid Hurwicz admired the book's "audacity of vision" and "depth of thought."3 "The potentialities of von Neumann's and Morgenstern's new approach seem tremendous and may, one hopes, lead to revamping, and enriching in realism, a good deal of economic theory," Hurwicz wrote. "But to a large extent they are only potentialities: results are still largely a matter of future developments."4 A more enthusiastic assessment appeared in a mathematics journal, where a reviewer wrote that "posterity may regard this book as one of the major scientific achievements of the first half of the twentieth century."5

  The world at large also soon learned about game theory. In 1946, the von Neumann–Morgenstern book rated a front page story in the New York Times; three years later a major piece appeared in Fortune magazine.

  It was also clearly appreciated from the beginning that game theory promised applications outside economics—that (as von Neumann and Morgenstern had themselves emphasized) it contained elements of the long-sought theory of human behavior generally. "The techniques applied by the authors in tackling economic problems are of sufficient generality to be valid in political science, sociology, or even military strategy," Hurwicz pointed out in his review.6 And Herbert Simon, a Nobel laureate-to-be, made similar observations in the American Journal of Sociology. "The student of the Theory of Games … will come away from the volume with a wealth of ideas for application … of the theory into a fundamental tool of analysis for the social sciences."7

  Yet it was also clear from the outset that the original theory of games was severely limited. Von Neumann had mastered two-person zero-sum games, but introducing multiple players led to problems. Game theory worked just fine if Robinson Crusoe was playing games with Friday, but the math for Gilligan's Island wasn't as rigorous.

  Von Neumann's approach to multiple-player games was to assume that coalitions would form. If Gilligan, the Skipper, and Mary Ann teamed up against the Professor, the Howells, and Ginger, you could go back to the simple two-person game rules. Many players might be involved, but if they formed two teams, the teams could take the place of individual players in the mathematical analysis.

 
