The Predictioneer’s Game: Using the Logic of Brazen Self-Interest to See and Shape the Future


by Bruce Bueno de Mesquita


  Many believe that arms races cause war.1 With that conviction in mind, policy makers vigorously pursue arms control agreements to improve the prospects of peace. To be sure, controlling arms means that if there is war, fewer people are killed and less property is destroyed. That is certainly a good thing, but that is not why people advocate arms control. They want to make war less likely. But reducing the amount or lethality of arms just does not do that.

  The standard account of how arms races cause war involves what game theorists call a hand wave—that is, at some point the analyst waves his hands in the air instead of providing the logical connection from argument to conclusions. The arms-race hand wave goes like this:

  When a country builds up its arms it makes its adversaries fear that their security is at risk. In response, they build up their own arms to defend themselves. The other side looks at that buildup—seeing their own as purely defensive—and tries to protect itself by developing still more and better weapons. Eventually the arms race produces a massive overcapacity to kill and destroy. Remember how many times over the U.S. and Soviet nuclear arsenals could destroy the world! So, as the level of arms—ten thousand nuclear-tipped missiles, for instance—grows out of proportion to the threat, things spiral out of control (that’s the hand wave—why do things spiral out of control?), and war starts.

  Wait a moment, let’s slow down and think about that. The argument boils down to claiming that when the costs of war get to be really big—arms are out of proportion to the threat—war becomes more likely. That’s really odd. Common sense and basic economics teach us that when the cost of anything goes up, we generally buy less, not more. Why should that be any less true of war?

  True, just about every war has been preceded by a buildup in weapons, but that is not the relevant observation. It is akin to looking at a baseball player’s positive test for steroids as proof that he cheats. What we want to know is how often the acquisition of lots more weapons leads to war, not how often wars are preceded by the purchase of arms. The answer to the question we care about is, not very often.

  By looking at wars and then asking whether there had been an arms race, we confuse cause and effect. We ignore all the instances in which arms may successfully deter fighting exactly because the anticipated destruction is so high. Big wars are very rare precisely because when we expect high costs we look for ways to compromise. That, for instance, is why the 1962 Cuban Missile Crisis ended peacefully. That is why every major crisis between the United States and the Soviet Union throughout the cold war ended without the initiation of a hot war. The fear of nuclear annihilation kept it cold. That is why lots of events that could have ignited world wars ended peacefully and are now all but forgotten.

  So, in war and especially in peace, reverse causality is at work. When policy makers turn to arms control deals, thinking they are promoting peace, they are taking much bigger risks than they seem to realize. Failing to think about reverse causation leads to poor predictions of what is likely to happen, and that can lead to dangerous decisions and even to catastrophic war.

  We will see many more instances of this kind of reasoning in later chapters. We will examine, for example, why most corporate fraud probably is not sparked by executive greed and why treaties to control greenhouse gas emissions may not be the best way to fight global warming. Each example reinforces the idea that correlation is not causation. They also remind us that the logic of reverse causation—called endogeneity in game theory—means that what we actually “observe”—such as arms races followed by war—is often a biased sample.

  The fact that decisions can be altered by the expectation of their consequences has lots of implications. In Game Theory 101 we talked about bluffing. Working out when promises or threats should be taken seriously and when they are (in game-theory-speak) “cheap talk” is fundamental to solving complicated situations in business, in politics, and in our daily encounters. Sorting out when promises or threats are sincere and when they are just talk is the problem of determining whether commitments are credible.

  LET’S PLAY GAMES

  In predicting and engineering the future, part of getting things right is working out what stands in the way of this or that particular outcome. Even after pots of money are won at cards, or hands are shaken and contracts or treaties are signed, we can’t be sure of what will actually get implemented. We always have to ask about commitments. Deals and promises, however sincerely made, can unravel for lots of reasons. Economists have come up with a superbly descriptive label for a problem in enforcing contracts. They ask, is the contract “renegotiation-proof”?3 This question is at the heart of litigiousness in the United States.

  I once worked on a lawsuit involving two power companies. One produced excess electricity and sold it to a different electric company in another state. As it happened, the price for electricity shot way up after the contract was signed. The contract called for delivery at an agreed-upon lower price. The power seller stopped delivering the promised electricity to the buyer, demanding more money for it. Naturally, the buyer objected, pointing out that the contract did not provide for changing the price just because market conditions changed. That was a risk that the buyer and seller agreed to take when they signed their contract. Still, the seller refused to deliver electricity. The seller was sued and defended itself vigorously so that legal costs racked up on both sides. All the while that bitter accusations flew back and forth, the seller kept offering to make a new deal with the plaintiff. The deal involved renegotiating their contract to make adjustments for extreme changes in market prices. The plaintiff resisted, always pointing—rightly—to the contract. But the plaintiff also really needed the electricity and couldn’t get it anywhere else for a better price than the seller, my client, was willing to take—and my client knew that. Eventually, the cost of not providing the necessary electricity to their own clients became so great that the plaintiff caved in and took the deal they were offered.

  Here was nasty, avaricious human nature hard at work in just the way game theorists think about it. Yes, there was a contract, and its terms were clear enough, but the cost of fighting to enforce the contract became too great. However much the plaintiff declared its intent to fight the case in court, the defendant knew it was bluffing. The plaintiff’s need for electricity and the cost of battling the case out in court were greater than the cost of accepting a new deal. And so it was clear that the terms of the contract were not renegotiation-proof. The original deal was set aside and a new one was struck. The original deal really was not a firm commitment to sell (or probably, for that matter, to buy) electricity at a specified price over a specified time period when the market price moved markedly from the price stipulated in the agreement. Justice gave way, as it so often does in our judicial system, to the relative ability of plaintiffs and defendants to endure pain.

  Commitment problems come in other varieties. The classic game theory illustration of a commitment problem is seen in the game called the prisoner’s dilemma, which is played out on almost every cop show on TV every night of the week. The story is that two criminals (I’ll call them Chris and Pat) are arrested. Each is held in a separate cell, with no communication between them. The police and the DA do not have enough evidence to convict them of the serious crime they allegedly committed. But they do have enough evidence to convict them of a lesser offense. If Chris and Pat cooperate with each other by remaining silent, they’ll be charged and convicted of the lesser crime. If they both confess, they’ll each receive a stiff sentence. However, if one confesses and the other does not, then the one who confesses—ratting out the other—will get off with time served, and the other will be put away for life without a chance for parole.

  It is possible, maybe even likely, that Chris and Pat, our two crooks, made a deal beforehand, promising to remain silent if they are caught. The problem is that their promise to each other is not credible because it’s always in their interest—if the game is not going to be repeated an indefinite number of times—to renege, talking a blue streak to make a deal with the prosecutor. Here’s how it works:

  THE PRISONER’S DILEMMA

  Pat’s Choices →                  Don’t confess                    Confess
  Chris’s Choices ↓                (stay faithful to Chris)         (rat out Chris)

  Don’t confess                    Chris and Pat                    Chris gets life;
  (stay faithful to Pat)           get 5 years                      Pat gets time served

  Confess                          Chris gets time served;          Chris and Pat
  (rat out Pat)                    Pat gets life                    get 15 years

  After Chris and Pat are arrested, neither knows whether the other will confess or really will stay silent as promised. What Chris knows is that if Pat is true to his word and doesn’t talk, Chris can get off with time served by betraying Pat. If instead Chris stays faithful to her promise and keeps silent too, she can expect to get five years. Remember, game theory reasoning takes a dim view of human nature. Each of the crooks looks out for numero uno. Chris cares about Chris; Pat looks out only for Pat. So if Pat is a good, loyal buddy—that is, a sucker—Chris can take advantage of the chance she’s been given to enter a plea. Chris would walk and Pat would go to prison for life.

  Of course, Pat works out this logic too, so maybe instead of staying silent, Pat decides to talk. Even then, Chris is better off confessing than she would be by keeping her mouth shut. If Pat confesses and Chris stays silent, Pat gets off easy—that’s neither here nor there as far as Chris is concerned—and Chris goes away for a long time, which is everything to her. If Chris talks too, her sentence is lighter than if she stayed silent while Pat confessed. Sure, Chris (and Pat) gets fifteen years, but Chris is young, and fifteen years, with a chance for parole, certainly beats life in prison with no chance for parole. In fact, whatever Chris thinks Pat will do, Chris’s best bet is to confess.

  This produces the dilemma. If both crooks kept quiet they would each get a fairly light sentence and be better off than if both confessed (five years each versus fifteen). The problem is that neither one benefits from taking a chance, knowing that it’s always in the other guy’s interest to talk. As a consequence, Chris’s and Pat’s promises to each other notwithstanding, they can’t really commit to remaining silent when the police interrogate them separately.
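  To see the arithmetic laid out, here is a minimal Python sketch—mine, not the book’s—that works through Chris’s choice from the table above. The numbers for “life” and “time served” are stand-ins I chose (99 years and 0 years); only their ordering matters.

```python
# A minimal sketch (not from the book) of Chris's reasoning in the table above.
# Payoffs are years in prison, so lower is better. "Life" is modeled as 99 years
# and "time served" as 0 -- stand-in numbers chosen only to preserve the ordering.

LIFE, TIME_SERVED = 99, 0

# chris_years[(chris_move, pat_move)] -> years Chris serves
chris_years = {
    ("silent", "silent"): 5,
    ("silent", "confess"): LIFE,
    ("confess", "silent"): TIME_SERVED,
    ("confess", "confess"): 15,
}

# Whatever Pat does, compare Chris's two options.
for pat_move in ("silent", "confess"):
    quiet = chris_years[("silent", pat_move)]
    talk = chris_years[("confess", pat_move)]
    best = "confess" if talk < quiet else "silent"
    print(f"If Pat plays {pat_move!r}: silence costs Chris {quiet} years, "
          f"confessing costs {talk} -> best reply is {best!r}")

# Both lines report "confess": talking is Chris's dominant strategy,
# and by symmetry the same holds for Pat.
```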

  IT’S ALL ABOUT THE DOG THAT DIDN’T BARK

  The prisoner’s dilemma illustrates an application of John Nash’s greatest contribution to game theory. He developed a way to solve games. All subsequent, widely used solutions to games are offshoots of what he did. Nash defined a game’s equilibrium as the planned choice of actions—the strategy—of each player, requiring that the plan of action is designed so that no player has any incentive to take an action not included in the strategy. For instance, people won’t cooperate or coordinate with each other unless it is in their individual interest. No one in the game-theory world willingly takes a personal hit just to help someone else out. That means we all need to think about what others would do if we changed our plan of action. We need to sort out the “what ifs” that confront us.
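  Nash’s requirement can be checked mechanically: a pair of strategies is an equilibrium if neither player can lower her own sentence by switching moves while the other stands pat. Here is a short sketch, again using my stand-in numbers for “life” and “time served,” that scans every strategy pair in the prisoner’s dilemma and reports which ones survive that test.

```python
# A sketch of Nash's test applied to the prisoner's dilemma: a pair of moves is an
# equilibrium only if neither player can shorten her own sentence by deviating
# unilaterally. Sentences mirror the table; "life" = 99 and "time served" = 0
# are stand-in numbers, not figures from the text.

import itertools

MOVES = ("silent", "confess")
LIFE, TIME_SERVED = 99, 0

def years(my_move, other_move):
    """Years a player serves, given her own move and the other player's."""
    table = {
        ("silent", "silent"): 5,
        ("silent", "confess"): LIFE,
        ("confess", "silent"): TIME_SERVED,
        ("confess", "confess"): 15,
    }
    return table[(my_move, other_move)]

def is_equilibrium(chris, pat):
    # Neither Chris nor Pat can do better by switching alone (fewer years is better).
    chris_ok = all(years(chris, pat) <= years(alt, pat) for alt in MOVES)
    pat_ok = all(years(pat, chris) <= years(alt, chris) for alt in MOVES)
    return chris_ok and pat_ok

for chris, pat in itertools.product(MOVES, repeat=2):
    if is_equilibrium(chris, pat):
        print(f"Equilibrium: Chris {chris}, Pat {pat}")
# Prints only: Equilibrium: Chris confess, Pat confess
```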

  Historians spend most of their time thinking about what happened in the world. They want to explain events by looking at the chain of things that they can observe in the historical record. Game theorists think about what did not happen and see the anticipated consequences of what didn’t happen as an important part of the cause of what did happen. The central characteristic of any game’s solution is that each and every player expects to be worse off by choosing differently from the way they did. They’ve pondered the counterfactual—what would my world look like if I did this or I did that?—and did whatever they believed would lead to the best result for them personally.

  Remember the very beginning of this book, when we pondered why Leopold was such a good king in Belgium and such a monster in the Congo? This is part of the answer. The real Leopold would have loved to do whatever he wanted in Belgium, but he couldn’t. It was not in his interest to act like an absolute monarch when he wasn’t one. Doing some counterfactual reasoning, he surely could see that if he tried to act like an absolute ruler in Belgium, the people probably would put someone else on the throne or get rid of the monarchy altogether, and that would be worse for him than being a constitutional monarch. Seeing that prospect, he did good works at home, kept his job, and freed himself to pursue his deepest interests elsewhere. Not facing such limitations in the Congo, there he did whatever he wanted.

  This counterfactual thinking becomes especially clear if we look at a problem or game as a sequence of moves. In the prisoner’s dilemma table I showed what happens when the two players choose without knowing what the other will do. Another way to see how games are played is to draw a tree that shows the order in which players make their moves. Who gets to move first matters a lot in many situations, but it does not matter in the prisoner’s dilemma because each player’s best choice of action is the same—confess—whatever the other crook does. Let’s have a look at a prospective corporate acquisition I worked on (with the details masked to maintain confidentiality). In this game, anticipating what the other player will do is crucial to getting a good outcome.

  The buyer, a Paris-based bank, wanted to acquire a German bank. The buyer was prepared to pay a big premium for the German firm but was insistent on moving all of the German executives to the corporate headquarters in Paris. As we analyzed the prospect of the acquisition, it became apparent that the price paid was not the decisive element for the Heidelberg-based bank. Sure, everyone wanted the best price they could get, but the Germans loved living in Heidelberg and were not willing to move to Paris just for money. Paris was not for them. Had the French bankers pushed ahead with the offer they had in mind, the deal would have been rejected, as can be seen in the game tree below. But because their attention was drawn to the importance the Germans attached to where they lived, the offer was changed from big money to a more modest amount—fine enough for the French—but with assurances that the German executives could remain in Heidelberg for at least five years, which wasn’t ideal for the French, but necessary for their ends to be realized.

  FIG. 3.1. Pay Less to Buy a Bank

  The very thick, dark lines in the figure show what the plans of action were for the French buyer and the German seller. There is a plan of action for every contingency in this game. One aspect of the plan of action on the part of the executives in Heidelberg was to say nein to a big-money offer that required them to move to Paris. This never happened, exactly because the French bankers asked the right “what if” question. They asked, What happens if we make a big offer that is tied to a move to Paris, and what happens if we make a more modest money offer that allows the German bank’s management to stay in Heidelberg? Big money in Paris, as we see with the thick, dark lines, gets nein, and less money in Heidelberg encourages the seller to say jawohl. Rather than not make the deal at all, the French chose the second-best outcome from their point of view. They made the deal that allowed the German management to stay put for five years. The French wisely put themselves in their German counterparts’ shoes and acted accordingly.
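  For readers who like to see the tree worked mechanically, here is a small backward-induction sketch. The payoff numbers are placeholders I made up—they are not figures from the case—and only encode the ordering the story describes: the Germans prefer staying in Heidelberg to moving to Paris at any price on the table, and the French prefer a modest accepted deal to no deal at all.

```python
# A backward-induction sketch of the acquisition game in Fig. 3.1. Payoffs are
# (French, German) and higher is better; the numbers are illustrative placeholders
# that only encode the preferences described in the text.

game_tree = {
    "big offer, executives move to Paris": {
        "accept": (3, 1),   # most lucrative for the French, but the Germans hate Paris
        "reject": (0, 2),   # no deal
    },
    "modest offer, executives stay in Heidelberg": {
        "accept": (2, 3),   # second-best for the French, best for the Germans
        "reject": (0, 2),   # no deal
    },
}

def solve(tree):
    """Work the tree backward: find the seller's best reply to each offer,
    then pick the offer whose anticipated reply is best for the buyer."""
    best = None  # (offer, reply, payoffs)
    for offer, replies in tree.items():
        reply = max(replies, key=lambda r: replies[r][1])  # German best response
        payoffs = replies[reply]
        if best is None or payoffs[0] > best[2][0]:
            best = (offer, reply, payoffs)
    return best

offer, reply, payoffs = solve(game_tree)
print(offer, "->", reply, payoffs)
# -> modest offer, executives stay in Heidelberg -> accept (2, 3)
```

  The big-money branch dies at the seller’s node—nein—so the buyer never plays it; that unplayed branch is exactly the counterfactual the historical record would not show.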

  By thinking about the strategic interplay between themselves and the German executives, the French figured out how to make a deal they wanted. They concentrated on the all-important question, “What will the Germans do if we insist they move to Paris?” No one actually moved to Paris. Historians don’t usually ask questions about things that did not happen, so they would probably overlook the consequences of an offer that insisted the German management relocate to France. They might even wonder why the Germans sold so cheaply. In the end, the Germans stayed in Heidelberg.

  Why should we care about their moving to Paris when in fact they didn’t? The reason they stayed in Heidelberg while agreeing to the merger is precisely because of what would have happened had the French insisted on moving them to France: no deal would have been struck, and so there would have been no acquisition for anyone to study.

  The two games I have illustrated in the preceding pages are very simple. They involve only two players, and each game has only one possible rational pair of strategies leading to an equilibrium result. Even a simple two-player game, however, can involve more than one set of sensible plans of action that lead to different possible ends of the game. We’ll solve an example of such a game in the last chapter. Of course, with more players and more choices of actions, many complicated games involve the possibility of many different strategies and many different outcomes. Part of my task as a consultant is to work out how to get players to select strategies that are more beneficial for my client than some other way of playing the game. That’s where trying to shape information, beliefs, and even the game itself becomes crucial, and in the next section I’d like to show you just what I mean.

  WANT TO BE A CEO?

  As we all know, great jobs are getting harder to come by, and reaching the top is as competitive as ever. Merit may be necessary, but, as many of us can attest, it’s unlikely to be sufficient. There are, after all, many more well-qualified people than there are high-level jobs to fill.

  That being said, even if you’ve managed to mask or overcome your personal limitations and have been blessed with great timing and good luck such that you now find yourself in the rarefied air of the boardroom, there’s something worth knowing that might have escaped you, something that might still prevent you from grabbing that cherished top spot: the selection process.

 
