The Predictioneer’s Game: Using the Logic of Brazen Self-Interest to See and Shape the Future

by Bruce Bueno de Mesquita


  Consider, for example, what policies you think our national leaders should follow to protect and enhance our national interest. When we think carefully about how to further the national interest, it becomes evident that sometimes things that seem obviously true are not, and that a little logic can go a long way to clarify our understanding.

  It is commonplace to think that foreign policy should advance the national interest. This idea is so widespread that we accept it as an obvious truth, but is it? We hardly ever pause to ask how we know what is in the national interest. Most of the time, we seem to mean that policies benefiting the great majority of people are policies in the national interest. Secure borders to prevent foreign invasions or illegal immigration are thought to be in the national interest. Economic policies that make citizens more prosperous are thought to be in the national interest. Yet we also know that money spent on defending our national security is money that is not spent on building the economy. There is a trade-off between the two. What, then, is the right balance between national security and economic security that ensures the national interest?

  Imagine that American citizens are divided into three equally sized groups. One group wants to spend more on national defense and to adopt more free-trade programs. Call these people Republicans. Another wants to cut defense spending and shift trade policy away from the status quo in order to better protect American industry against foreign competition. Call them Democrats. A third wants to spend more on national defense and also to greatly increase tariffs to keep our markets from being flooded with cheap foreign-made goods. Call this faction blue-collar independents. With all of these voters in mind, what defense and trade policy can rightfully call itself “the national interest”? The answer, as seen in figure 2.1 (on the next page), is that any policy can legitimately lay claim to being in—or against—the national interest.

  Figure 2.1 places each of our three voting blocs—Republicans, Democrats, and blue-collar independents—at the policy outcomes they prefer when it comes to trade and defense spending. That’s why Republicans are found in the upper right-hand corner as you look at the figure, indicating their support for much freer trade and much higher defense spending. Democrats are on the far left-hand side just below the vertical center. That is consistent with their wanting much less spent on defense and a modest shift in trade policy. Blue-collar independents are found on the bottom right, consistent with their preference for trade protection and higher defense outlays. And, as you can see, there is a point labeled “Status Quo,” which denotes current defense spending and trade policy.

  FIG. 2.1. Defense and Trade Policy in the National Interest

  By putting the two issues together in the figure I am acknowledging that they are often linked in public debate. The debate generally revolves around how best to balance trade and defense given that there are inherent trade-offs between them. Free trade, for instance, can imply selling high-end computer technology, weapons technology, and other technologies that adversaries might use to threaten our national security. High tariffs might provoke trade wars or worse, thereby potentially harming national security and prompting arguments to spend more on national defense.

  I assume that everyone prefers policies closer to their favored position (that’s where the black dots associated with the Republicans, Democrats, and independents are positioned) to policies that are farther away. For example, blue-collar independents would vote to change the status quo on defense and trade if they had the chance to choose a mix on these issues that was closer to the black dot associated with them—that is, closer to what they want.

  To show the range of policy combinations that the blue-collar independents like better than the status quo, I drew a circle (showing only a part of it) whose center is their most desired policy combination and whose perimeter just passes through the status quo policy.6 Anything inside the arc whose center is what blue-collar independents most want is better for them than the prevailing approach to defense spending and trade. The same is true for the points inside the arcs centered on the Republicans and the Democrats that pass through the status quo.

  By drawing these circles around each player’s preferred policy mix we learn something important. We see that these circles overlap. The areas of overlap show us policy combinations that improve on the status quo for a coalition of two of the three players. For instance, the lined oblong area tilting toward the upper left of the figure depicts policies that improve the well-being of Democrats and Republicans (ah, a bipartisan foreign policy opposed by independent blue-collar workers). The gray petal-shaped area advances the interests of Democrats and blue-collar independents (at the expense of Republicans), and the bricked-over area provides a mix of trade and defense spending that benefits the Republicans and blue-collar independents (to the chagrin of Democrats).

  Because we assumed that each of the three voting blocs is equal in size, each overlapping area identifies defense and trade policies that command the support of two-thirds of the electorate. Here’s the rub, then, when it comes to talking about the national interest. One coalition wants more free trade and less defense spending. Another wants less free trade and less defense spending. The third wants less free trade and more defense spending. So, we can assemble a two-thirds majority for more defense spending and also for less. We can find a two-thirds coalition for more free trade or for higher tariffs or (in the politically charged rhetoric of trade debate) for more fair trade. In fact, there are loads of ways to allocate effort between defense spending and trade policy that make whichever coalition forms better off.7
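  To see the geometry at work, here is a minimal sketch in Python. The coordinates are illustrative stand-ins, not values read off figure 2.1: I simply place each bloc’s ideal point on a 0-to-10 scale for defense spending and trade openness and then check, using straight-line distance, which two-bloc coalitions prefer a proposed policy mix to the status quo.

```python
from math import dist  # straight-line (Euclidean) distance, Python 3.8+

# Illustrative ideal points: (defense spending, trade openness) on a 0-10 scale.
# These numbers are assumptions for the sketch, not taken from figure 2.1.
ideal = {
    "Republicans": (9.0, 9.0),               # more defense, much freer trade
    "Democrats": (1.0, 4.5),                 # much less defense, modest trade shift
    "Blue-collar independents": (9.0, 1.0),  # more defense, high tariffs
}
status_quo = (5.0, 5.0)

def prefers(bloc, proposal):
    """A bloc prefers any proposal lying inside the circle centered on its
    ideal point and passing through the status quo."""
    return dist(ideal[bloc], proposal) < dist(ideal[bloc], status_quo)

def winning_coalition(proposal):
    """Return the blocs (two-thirds of the electorate or more) that back the
    proposal over the status quo, or None if no majority exists."""
    supporters = [bloc for bloc in ideal if prefers(bloc, proposal)]
    return supporters if len(supporters) >= 2 else None

# Each of these proposals beats the status quo, but with a different majority:
print(winning_coalition((4.7, 5.7)))  # Republicans + Democrats: freer trade, less defense
print(winning_coalition((4.6, 2.9)))  # Democrats + independents: less trade, less defense
print(winning_coalition((6.5, 4.0)))  # Republicans + independents: less trade, more defense
```

  Moving the proposal around inside the different overlap regions changes which two-thirds majority it attracts, which is exactly the point: the arithmetic never singles out one policy mix as “the” national interest.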

  What, then, is the national interest? We might have to conclude that except under the direst circumstances there is no such thing as “the national interest,” even if the term refers to what a large majority favors. That is surprising, perhaps, but it follows logically from the idea that people will align themselves behind policies that are closer to what they want against policies that are farther from what they advocate. It just happens that any time there are trade-offs between alternative ways to spend money or to exert influence, there are likely to be many different spending or influence combinations that beat the prevailing view. None can be said to be a truer reflection of the national interest than another; that reflection is in the eyes of the beholder, not in some objective assessment of national well-being. So much for the venerable notion that our leaders pursue the national interest, or, for that matter, that business executives single-mindedly foster shareholder value. I suppose, freed as they are to build a coalition that wants whatever it is they also want, that our leaders really are free to pursue their own interests and to call that the national interest or the corporate interest.

  WHAT IS THE OTHER GUY’S BEHAVIOR?

  (DOES HE HAVE GOOD CARDS OR NOT?)

  To understand how interests frame so many of the questions we have at stake, game theory still requires that people behave in a logically consistent way within those interests. That does not mean that people cannot behave in surprising ways, for surely they can. If you’ve ever played the game Mastermind, you’ve confronted the difficulties of logic directly. In Mastermind—a game I’ve used with students to teach them about really probing their beliefs—one player sets up four (or, in harder versions, more) colored pegs selected from among six colors in whatever order he or she chooses. The rest of the players cannot see the pegs. They propose color sequences of pegs and are told that yes, they got three colors right, or no, they didn’t get any right, or yes, they got one color in the right position but none of the others. In this way, information accumulates from round to round. By keeping careful track of the information about what is true and what is false, you gradually eliminate hypotheses and converge on a correct view of what order the colored pegs are in. This is the point behind a game like Mastermind, Battleship, or charades. It is also one point behind the forecasting games I designed and use to predict and engineer events.
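  Here is a minimal sketch of that elimination process, using a simplified scoring rule that reports only how many pegs are the right color in the right position (real Mastermind also reports right-color, wrong-position pegs):

```python
from itertools import product

COLORS = "RGBYWK"   # six peg colors
CODE_LENGTH = 4     # four hidden pegs in the basic game

def exact_matches(guess, code):
    """Simplified feedback: count pegs of the right color in the right slot."""
    return sum(g == c for g, c in zip(guess, code))

# Start out believing every one of the 6**4 = 1,296 possible codes.
candidates = set(product(COLORS, repeat=CODE_LENGTH))

def update(candidates, guess, feedback):
    """Keep only the hypotheses consistent with the honest answer we received."""
    return {code for code in candidates if exact_matches(guess, code) == feedback}

secret = ("R", "G", "B", "Y")   # hidden from the guessers
for guess in [("R", "R", "G", "G"), ("Y", "B", "G", "R"), ("R", "G", "B", "W")]:
    feedback = exact_matches(guess, secret)   # the peg-setter must answer truthfully
    candidates = update(candidates, guess, feedback)
    print(guess, "->", feedback, "exact;", len(candidates), "hypotheses left")
```

  Each round of honest feedback shrinks the set of beliefs that remain consistent with what has been observed, which is the convergence the game is meant to illustrate.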

  The key to any of these games is sorting out the difference between knowledge and beliefs. Different players in any game are likely to start out with different beliefs because they don’t have enough information to know the true lay of the land. It is fine to sustain beliefs that could be consistent with what’s observed, but it’s not sensible to hold on to beliefs after they have been refuted by what is happening around us. Of course, sorting out when beliefs and actions are inconsistent requires working out the incentives people have to lie, mislead, bluff, and cheat.

  In Mastermind this is easy to do because the game has rules that stipulate the order of guessing and that require the person who placed the pegs to respond honestly to proposed color sequences suggested by other players. There is no point to the game if the person placing the pegs lies to everyone else. But even when everyone tells the truth, it is easy to slip into serious lapses in logic that can lead to entirely wrong beliefs. That is something to be careful about.

  Slipping into wrong beliefs is a problem for many of us. It is easy to look at facts selectively and to reach wrong conclusions. That is a major problem, for instance, with the alleged police practice of profiling, or some people’s judgment about the guilt or innocence of others based on thin evidence that is wrongly assessed. There are very good reasons why the police and we ordinary folk ought not to be too hasty in jumping to conclusions.

  Let me give an example to help flesh out how easily we can slip into poor logical thinking. Baseball is beset by a scandal over performance-enhancing drugs. Suppose you know that the odds someone will test positive for steroids are 90 percent if they actually used steroids. Does that mean when someone tests positive we can be very confident that they used steroids? Journalists seem to think so. Congress seems to think so. But it just isn’t so. To formulate a policy we need an answer to the question, How likely is it that someone used steroids if they test positive? It is not enough to know how likely they are to test positive if they use steroids. Unfortunately, we cannot easily know the answer to the question we really care about. We can know whether someone tested positive, but that could be a terrible basis for deciding whether the person cheated. A logically consistent use of probabilities—working out the real risks—can help make that clear.

  Imagine that out of every 100 baseball players (only) 10 cheat by taking steroids (game theory notwithstanding, I am an optimist) and that the tests are accurate enough that 9 out of every 10 cheaters test positive. To evaluate the likelihood of guilt or innocence we still need to know how many honest players test positive—that is, how often the tests give us a false positive answer. Tests are, after all, far from perfect. Just imagine that while 90 out of every 100 players do not cheat, 10 percent of the honest players nevertheless test (falsely) positive. Looking at these numbers it’s easy to think, well, hardly anyone gets a false positive (only 10 percent of the innocent) and almost every guilty party gets a true positive (90 percent of the guilty), so knowing whether a person tested positive must make us very confident of their guilt. Wrong!8

  With the numbers just laid out, 9 out of 10 cheaters test positive and 9 out of 90 innocent ball players also test positive. So, 9 out of 18 of the positive test results include cheaters and 9 out of 18 include absolutely innocent baseball players. In this example, the odds that a player testing positive actually uses steroids are fifty-fifty, just the flip of a coin. That is hardly enough to ruin a person’s career and reputation. Who would want to convict so many innocents just to get the guilty few? It is best to take seriously the dictum “innocent until proven guilty.”
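  For readers who want the arithmetic spelled out, the numbers from the example work out like this:

```python
players = 100
cheaters = 10                       # 10 of every 100 players use steroids
honest = players - cheaters         # the other 90 do not

true_positives = 0.9 * cheaters     # 9 cheaters test positive
false_positives = 0.1 * honest      # 9 honest players also test positive

positives = true_positives + false_positives   # 18 positive tests in all
print(true_positives / positives)   # 0.5 -- a positive test is a coin flip
```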

  The calculation we just did is an example of Bayes’ Theorem.9 It provides a logically sound way to avoid inconsistencies between what we thought was true (a positive test means a player uses steroids) and new information that comes our way (half of all players testing positive do not use steroids). Bayes’ Theorem compels us to ask probing questions about what we observe. Instead of asking, “What are the odds that a baseball player uses performance-enhancing drugs?” we ask, “What are the odds that a baseball player uses performance-enhancing drugs given that we know he tested positive for such drugs and we know the odds of testing positive under different conditions?”
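  Written out as a formula, with the steroid numbers from the example plugged in, the theorem says:

$$
P(\text{cheater}\mid\text{positive})
= \frac{P(\text{positive}\mid\text{cheater})\,P(\text{cheater})}
       {P(\text{positive}\mid\text{cheater})\,P(\text{cheater})
        + P(\text{positive}\mid\text{honest})\,P(\text{honest})}
= \frac{0.9 \times 0.10}{0.9 \times 0.10 + 0.1 \times 0.90}
= 0.5
$$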

  Bayes’ Theorem provides a way to calculate how people digest new information. It assumes that everyone uses such information to check whether what they believe is consistent with their new knowledge. It highlights how our beliefs change—how they are updated, in game-theory jargon—in response to new information that reinforces or contradicts what we thought was true. In that way, the theorem, and the game theorists who rely on it, view beliefs as malleable rather than as unalterable biases lurking in a person’s head.

  This idea of updating beliefs leads us to the next challenge. Suppose a baseball player who had a positive (guilty) test result is called to testify before Congress in the steroid scandal. Now suppose he knows of the odds sketched above. Aware of these statistics, and knowing that any self-respecting congressperson is also aware of them, the baseball player knows that Congress, if citing only a positive test result as their evidence, in fact has little on him, no matter how much outrage they muster. The player, in other words, knows Congress is bluffing. But of course Congress knows this as well, so they have subpoenaed the player’s trainer, who is coming in to testify right after the player. Is this just another bluff by Congress, tightening the screws to elicit a confession with the threat of perjury looming? Whether the player is guilty or not, perhaps he shrugs off the move, in effect calling Congress’s raising of the stakes. Now what? Does Congress actually have anything, or will they be embarrassed for going on a fishing expedition and dragging an apparently innocent man through the mud? Will the player adamantly profess innocence knowing he’s guilty (but maybe he really isn’t), and should we shrug off the declarations of innocence lightly, as it seems so many of us do? Is Congress bluffing? Is the player bluffing? Is everyone bluffing? These are tough problems, and they are right up game theory’s alley!

  In real life there are plenty of incentives for others (and for us) to lie. That is certainly true for athletes, corporate executives, national leaders, poker players, and all the rest of us. Therefore, to predict the future we have to reflect on when people are likely to lie and when they are most likely to tell the truth. In engineering the future, our task is to find the right incentives so that people tell the truth, or so that, when it helps our cause, they believe our lies.

  One way of eliciting honest responses is to make repeated lying really costly. Bluffing at poker, for instance, can be costly exactly because other players sometimes don’t believe big bets, and don’t fold as a result. If their hand is better, the bluff comes straight out of the liar’s pocket. So the central feature of a game like five-card draw is not figuring out the probability of drawing an inside straight or three of a kind, although that’s certainly useful too. It’s about convincing others that your hand is stronger than it really is. Part of the key to accumulating bargaining chips, whether in poker or diplomacy, is engineering the future by exploiting leverage that really does not exist. Along with taking prudent risks, creating leverage is one of the most important features in changing outcomes. Of course, that is just a polite way of saying that it’s good to know when and how to lie.

  Betting, whether with chips, stockholders’ money, perjury charges, or soldiers, can lead others to wrong inferences that benefit the bettor; but gambling always suffers from two limitations. First, it can be expensive to bet more than a hand is worth. Second, everyone has an interest in trying to figure out who is bluffing and who is being honest. Raising the stakes helps flush out the bluffers. The bigger the cumulative bet, the costlier it is to pretend to have the resolve to see a dispute through when the cards really are lousy. How much pain anyone is willing to risk on a bluff, and how similar their wagering is when they are bluffing and when they are really holding good cards, is crucial to the prospects of winning or of being unmasked. That, of course, is why diplomats, lawyers, and poker players need a good poker face, and it is why, for example, you take your broker’s advice more seriously if she invests a lot of her own money in a stock she’s recommending.

  Getting the best results comes down to matching actions to beliefs. Gradually, under the right circumstances, exploiting information leads to consistency between what people see, what they think, and what they do, just as it does in Mastermind. Convergence in thinking facilitates deals, bargains, and the resolution of disputes.

  With that, we’ve just completed the introductory course in game theory. Nicely done! Now we’re ready to go on to the more advanced course. In the next chapter we look in more depth at how the very fact of our being strategic changes everything going on around us. That will set the stage for working out how we can use strategy to change things to be better for ourselves and those we care about and, if we are altruistic enough, maybe even for almost everyone.

  3

  GAME THEORY 102

  GAME THEORY 101 started us off thinking about how different people are from particles. In short, we are strategists. We calculate before we interact. And with 101 under our belts, we know enough to delve more deeply into the subtleties of strategizing.

  Of the many lessons game theory teaches us, one of particular import is that the future—or at least its anticipation—can cause the past, perhaps even more often than the past causes the future. Sound crazy? Ask yourself, do Christmas tree sales cause Christmas? This sort of reverse causality is fundamental to how game theorists work through problems to anticipate outcomes. It is very different from conventional linear thinking. Let me offer an example where failing to recognize how the future shapes the past can lead to really bad consequences.

 
