
The Predictioneer’s Game: Using the Logic of Brazen Self-Interest to See and Shape the Future


by Bruce Bueno de Mesquita


  They chose to have me analyze what became the 1994 baseball strike. I made detailed predictions about such matters as whether there would be a strike (the model said yes), whether there would be a World Series that year (the model said no), and whether President Clinton’s eventual threatened intervention, which was predicted by the model, would end the strike (again, the prediction was no). I did an in-class interview with the two or three students who were “experts” on baseball, and then ran my computer model in front of all the students. I provided an analysis of the results on the spot. That way the students knew that nothing more went into my predictions than the data collected in class and the logic of my model. The predictions, as it happened, turned out to be correct.

  Shortly before I left Athens, Professor Gaddis suggested that I write a paper applying the model to the end of the cold war. In particular, he proposed that I investigate whether the model would have correctly predicted the U.S. victory in the cold war based only on information that decision makers could have known shortly after World War II ended. That is, he asked for a sort of out-of-sample prediction of the type I used to validate the fraud model. And so my analytic experiences with Dan Rostenkowski and with the baseball strike came together to provide a motivation and a framework for assessing the end of the cold war. My work on this project, using only information available in 1948, would help me incorporate and test my new design for external shocks within a model, which, of course, I felt compelled to develop on account of my health care debacle.

  I used Gaddis’s proposal and my awful experience with health care to think through how to predict the consequences of inherently unpredictable events. I put together a data set that my model could use to investigate alternative paths to the end—or continuation—of the cold war. The data on stakeholder positions were based on a measure of the degree to which each country in the world as of 1948 shared security interests with the United States or the Soviet Union. The procedure I used to evaluate shared interests was based on a method I developed in publications in the mid-1970s.2 The procedure looks at how similar each pair of countries’ military alliance portfolios are to each other from year to year. Those who tended to ally with the same states in the same way were taken to share security concerns, and those who allied in significantly different ways (as the United States and the Soviet Union did) were taken to have different, perhaps opposed, security policies and interests.
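
  A reader who wants to see the mechanics might sketch the portfolio comparison like this (a minimal illustration, assuming the standard Correlates of War alliance coding and Kendall's tau-b as the similarity statistic; the portfolios here are made up, and the text above does not commit to these exact details):

```python
# A minimal sketch of comparing alliance portfolios, assuming the standard
# Correlates of War coding (0 = no alliance, 1 = entente, 2 = neutrality
# pact, 3 = defense pact) and Kendall's tau-b as the similarity statistic.
from scipy.stats import kendalltau

# Hypothetical 1948 portfolios: each country's alliance level with the same
# ordered list of six other states.
usa  = [3, 3, 3, 0, 0, 1]
ussr = [0, 0, 0, 3, 3, 2]
uk   = [3, 3, 3, 0, 0, 1]

tau_us_ussr, _ = kendalltau(usa, ussr)   # -1.00: opposed security interests
tau_us_uk, _   = kendalltau(usa, uk)     # +1.00: shared security interests
print(f"US vs. USSR: {tau_us_ussr:+.2f}   US vs. UK: {tau_us_uk:+.2f}")
```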

  The correlation of alliance patterns as of 1948 was combined with information on the relative clout or influence of each state in 1948. To assess clout I used a standard body of data developed by a project then housed at the University of Michigan called the Correlates of War Project. Those data, like my measure of security interests, can be downloaded by anyone who cares to do so. They are housed at a website called EUGene, designed by two political science professors who were interested in replicating some of my research on war.3

  Each state in the data set—I focused on countries rather than individual decision makers to keep the data simple and easy to reproduce by others—was assigned a maximum salience score to reflect the urgency of security questions right after the Second World War. Combining these data to estimate expected gains and losses from shifting security policies, the model was run on these data for all country-pair combinations one hundred times. Each such run consisted of fifty “bargaining periods.” The “bargaining periods” were treated as years, and thus the model was being used to predict what would happen in the cold war roughly from 1948 until the end of the millennium.
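
  The text doesn't lay out the model's internal arithmetic, but one building block common to models of this family is an aggregate of stakeholder positions weighted by clout and salience. A toy sketch under that assumption, with invented numbers, not the book's actual expected-utility calculations:

```python
# A toy aggregate of stakeholder positions, weighted by clout and salience.
# Positions run from -1 (pro-Soviet) to +1 (pro-American); all numbers are
# invented. This is one common ingredient of models in this family, not the
# model's full expected-gains-and-losses machinery.
def weighted_position(states):
    total = sum(s["clout"] * s["salience"] for s in states)
    return sum(s["position"] * s["clout"] * s["salience"] for s in states) / total

states_1948 = [
    {"name": "USA",  "position": +1.0, "clout": 0.28, "salience": 1.0},
    {"name": "USSR", "position": -1.0, "clout": 0.18, "salience": 1.0},
    {"name": "UK",   "position": +0.8, "clout": 0.07, "salience": 1.0},
]
print(f"weighted mean position: {weighted_position(states_1948):+.2f}")
```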

  Each country’s salience score was assigned a one-in-four chance of randomly changing each year. That seemed high enough to me to capture the pace at which a government’s attention might move markedly in one direction or another and not so high as to introduce more volatility than was likely within countries or across countries over relatively short intervals. Naturally, this could have been done with a higher or lower probability, so there is nothing more than a personal judgment behind the choice of a one-in-four chance of a “shock.”
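
  Taken together, the last two paragraphs describe a simple Monte Carlo skeleton: one hundred runs of fifty annual bargaining periods, with each country's salience facing a one-in-four chance of a shock each year. A sketch of that skeleton (the redraw rule for a shocked salience is my assumption, since the text doesn't specify one, and the position-updating step is only a placeholder):

```python
import random

N_RUNS, N_PERIODS, SHOCK_PROB = 100, 50, 0.25   # runs, "years" 1948-1998, shock odds

def run_once(countries, rng):
    salience = {c: 1.0 for c in countries}       # maximum salience in 1948
    for year in range(N_PERIODS):
        for c in countries:
            if rng.random() < SHOCK_PROB:        # one-in-four chance per year
                salience[c] = rng.random()       # assumed redraw rule, not from the text
        # ... here the real model updates positions and alliance correlations ...
    return salience                              # placeholder for the run's outcome

rng = random.Random(1948)
results = [run_once(["USA", "USSR", "UK", "France"], rng) for _ in range(N_RUNS)]
```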

  Any changes in salience reflected hypothetical shifts in the degree to which security concerns dominated policy formation or the degree to which other issues, such as domestic matters, surfaced to shape decision making for this or that country. Thus, the salience data were “shocked” to capture the range and magnitude of possible political “earthquakes” that could have arisen after 1948. This was the innovation to my model that resulted from the combination of my visit to Ohio and my failed predictions regarding health care. Since then, I have incorporated ways to randomly alter not only salience but also the indicators for potential clout and for positions, and even for whether a stakeholder stays in the game or drops out, in a new model I am developing.

  Neither the alliance-portfolio data used to measure the degree of shared foreign interests nor the influence data were updated to take real events after 1948 into account. The alliance-portfolio measure only changed in response to the model’s logic and its dynamics, given randomly shocked salience. Changes in the alliance correlations for all of the countries were the indicator of whether the Soviets or the Americans would prevail or whether they would remain locked in an ongoing struggle for supremacy in the world.

  So here was an analysis designed to predict the unpredictable—that is, the ebb and flow of attentiveness to security policy as the premier issue in the politics of each state in my study. With enough repetitions of randomly distributed shocks (at the time, I did just a hundred, because computation took a very long time; today I would probably do a thousand or more), we should have been able to see the range of possible developments on the security front. That, in turn, should have made it possible to predict the relative likelihood of three possible evolutions of the cold war: (a) it would end with a clear victory by the United States within the fifty-year period I simulated; (b) it would end with a clear victory by the Soviet Union in that same time period; or (c) it would continue, with neither the Soviet Union nor the United States in a position to declare victory.
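
  Once each run is classified into one of those three outcomes, the percentages reported below are just a tally across the hundred runs. A sketch with a made-up victory criterion and made-up final correlations, since the text doesn't state the rule actually used:

```python
from collections import Counter
import random

rng = random.Random(1948)
# Stand-in for the hundred runs: each run ends with every country's alliance
# correlation with the United States (invented values; the real runs produced
# these from the model's own dynamics).
runs = [{c: rng.uniform(-1.0, 1.0) for c in ("UK", "France", "Poland", "China", "India")}
        for _ in range(100)]

def classify(final_corrs):
    # Made-up victory criterion: a side wins when most countries end the run
    # closely aligned with it. The text does not state the rule actually used.
    pro_us = sum(c > 0.5 for c in final_corrs.values())
    pro_ussr = sum(c < -0.5 for c in final_corrs.values())
    if pro_us > len(final_corrs) / 2:
        return "US victory"
    if pro_ussr > len(final_corrs) / 2:
        return "Soviet victory"
    return "cold war continues"

tally = Counter(classify(r) for r in runs)
for outcome in ("US victory", "Soviet victory", "cold war continues"):
    print(f"{outcome}: {tally[outcome]} of 100 runs")
```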

  What did I find? The model indicated that in 78 percent of the scenarios in which salience scores were randomly shocked, the United States won the cold war peacefully, sometimes by the early to mid-1950s, more often in periods corresponding to the late 1980s or early 1990s. In 11 percent of the simulations, the Soviets won the cold war, and in the remaining 11 percent, the cold war persisted beyond the time frame covered by my investigation. What I found, in short, was that the configuration of policy interests in 1948 already presaged an American victory over the Soviet Union. It was, as Gaddis put it, an emergent property. This was true even though the starting date, 1948, predated the formation of either NATO or the Warsaw Pact, each of which emerged in almost every simulation as the nations’ positions shifted from round to round according to the model’s logic.4

  The selection of 1948 as the starting date was particularly challenging in that this was a time when there was concern that many countries in Western Europe would become socialist. This was a time, too, when many thought that a victory of communism over capitalism and authoritarianism over democracy was a historical inevitability. On the engineering front it was, of course, too late to change the course of events. Still, the model was quite provocative on this dimension, as it suggested opportunities that were passed up to win the cold war earlier. One of those opportunities, at the time of Stalin’s death (which, of course, was not a piece of information incorporated into the data that went into the model), was, as it turns out, contemplated by real decision makers at the time. They thought there might be a chance to wrest the Soviet Union’s Eastern European allies into the Western European fold. My model agreed. American decision makers did not pursue this possibility, because they feared it would lead to a war with the Soviet Union. My model disagreed, predicting that the Soviets in this period would be too preoccupied with domestic issues and would, undoubtedly with much regret, watch more or less helplessly as their Eastern European empire drifted away. We will, of course, never know who was right. We do know that that is what they did a few decades later, between 1989 and 1991.

  So with the help of Dan Rostenkowski and John Gaddis’s students I was able to show how strongly the odds favored an American cold war victory. The account of the cold war, like the earlier examination of fraud, reminds us that prediction can look backward almost as fruitfully as it can look forward. Not everyone was as generous as John Gaddis in acknowledging that game-theory modeling might help sort out important issues, and not everyone should be (not that it isn’t nice when people are that generous). There should be and always will be critics.

  There are plenty of good reasons for rejecting modeling efforts, or at least being skeptical of them, and plenty of bad reasons too. Along with technical failures within my models, or any models for that matter, there is the obvious limitation that they are simply models, which are, of course, not reality. They are a simplified glance at reality. They can only be evaluated by a careful examination of what general propositions follow from their logic and an evaluation of how well reality corresponds with those propositions. Unfortunately, sometimes people look at lots of equations and think, “Real people cannot possibly make these complicated calculations, so obviously real people do not think this way.” I hear this argument just about every semester in one or another course that I teach. I always respond by saying that the opposite is true. Real people may not be able to do the cumbersome math that goes into a model, but that doesn’t mean they aren’t making much more complicated calculations in their heads even if they don’t know how to represent their analytic thought processes mathematically.

  Try showing a tennis pro the equations that represent hitting a ball with topspin to the far corner of the opponent’s side of the court, making sure that the ball lands just barely inside the line and that it travels, say, at 90 miles an hour. Surely the tennis pro will look at the equations in utter bewilderment. Yet professional tennis players act as if they make these very calculations whenever they try to make the shot I just described. If the pro is a ranked player, then most of the time the shot is made successfully even though the decisions about arm speed, foot position, angle of the racket’s head, and so forth must be made in a fraction of a second and must be made while also working out the velocity, angle, and spin of the ball coming his or her way from across the court.

  Since models are simplified representations of reality, they always have room for improvement. There is always a trade-off between adding complexity and keeping things manageable. Adding complexity is only warranted when the improvement in accuracy and reliability is greater than the cost of adding assumptions. This is, of course, the well-known principle of parsimony. I’ve made small and big improvements in my game-theory modeling over the years. My original forecasting model was static. It reported what would happen in one exchange of information on an issue. As such, it was a good forecaster but not much good at engineering. While I was tweaking that static model to improve its estimation of people’s willingness to take risks and to estimate their probability of prevailing or losing in head-to-head contests, I was also thinking about how to make the process dynamic. Real people, after all, are dynamic. They change their minds, they switch positions on questions, they make deals, and, of course, they bluff and renege on promises.

  About ten years after creating the static version I finally worked out a dynamic model I was happy with. That is the model I’m mostly discussing in this book. Over the past few years I’ve been working on a completely new approach based on a more nuanced game than the one I described back in the third chapter. Preliminary tests of this new model indicate that it not only yields more accurate predictions, but also captures play dynamics more faithfully. As an added bonus, it also allows me to evaluate trade-offs across issues or across different dimensions on a single issue simultaneously. It also gives me the opportunity to assess how each player’s salience and influence changes from bargaining round to bargaining round, something the older model cannot do. I will apply this new model to some ongoing foreign policy crises and to global warming in the last two chapters. That will be my first foray into opening the opportunity to be embarrassed by my new approach.

  The process of discovery is never-ending. That’s both the challenge and the excitement behind doing this kind of research: finding better and better ways to help people solve real problems through logic and evidence. Not everyone, though, shares my enthusiasm for this sort of effort at discovery.

  Some critics object to predicting human behavior. They worry that government or corporations will misuse this knowledge. They’re concerned about the ethics of reducing people to equations. To me this is an odd set of objections, especially since it comes mostly from people who are unhappy with the quality of government policy choices and with corporate actions to begin with. Some of my academic colleagues particularly object to providing guidance to the intelligence community, the “evil” CIA, on national security matters. They seem to think that the government shouldn’t have the best tools at its disposal to make the best choices possible. I don’t share that view. If we want better decisions from our government, we ought to be willing to help it improve its decision making.

  Yes, there is always a risk that any tool will be misused. But science is about understanding how the world works. Different people have different personal views about what will make the world a better place, and it’s the job of officials and citizens to regulate unethical uses of information. Further, it is the responsibility of each of us as individuals to withhold our expertise when we think its use will make the world, or our little part of it, a worse place.

  I turn down clients when I don’t want to help them achieve their goals. Many years ago, for example, I was approached by someone claiming to represent the Libyan government. The person who contacted me wanted me to figure out how to facilitate overthrowing the Egyptian government then led by Anwar Sadat. The contact proposed flying me to Geneva, Switzerland, to avoid the possibility of the United States government or some other government being able to subpoena the results of my then very primitive modeling effort. I was offered a million dollars for my trouble. There is no way for me to know whether this approach was authentic or a hoax, although it certainly seemed real. I declined and immediately contacted people in the U.S. government to alert them to my experience.

  Several years later I was contacted by yet another person with an unsavory proposal. This person represented himself as an agent for Mobutu Sese Seko of Zaire. Mobutu’s hold on power had become tenuous. His economy was doing poorly, his soldiers were becoming agitated, and his loyal followers were becoming shaky because he was known to have a terminal illness. They were presumably worried about who would protect them and take care of them financially when he was gone. The contact person wanted to know if I could work out how to salvage Mobutu’s control over Zaire and offered a success fee of 10 percent of Mobutu’s offshore financial holdings. I know this sounds hard to believe, but it happened, and it was before unscrupulous Nigerians had worked out their famous Internet bank scams.

  Mobutu at the time was reputed to be worth somewhere between $6 billion and $20 billion. If this had been for real and if I could have engineered his continuation in office until he died peacefully or chose to step down, and if I had been willing to do so, I could have been paid an unbelievable fortune. But even if the fortune had been believable, the answer would still have been the same. As in the alleged Libya offer, I said no without a moment’s hesitation. I was confident that Mobutu’s difficulty was an analytic problem with a solution, but no amount of money could have justified my intervention. My main concern was that I would be on the radar screen of people I really preferred not to know about me. And once again I contacted people in the U.S. government to alert them to the situation.

  Of course, my personal judgment about who to do business with might differ from someone else’s judgment. I couldn’t see any justification for helping anyone topple Sadat. Here was a man who had put his life at risk—and would tragically lose it—in a sincere and successful effort to advance the cause of peace. Mobutu’s case could be seen as (ever so slightly) more complicated. There was a slender ethical case to be made on Mobutu’s behalf. While it did not appeal to me, one could easily have argued that whoever came after Mobutu might be even worse. Back then, and even immediately after his overthrow, it wasn’t clear that the Congo was moving in a better direction. Still, for me the answer was unambiguous. For others—who knows how they would have evaluated the pros and cons of applying insights from science to help or hinder a dictator like Mobutu?

  Some of you may think I should not use game theory to help big corporations get good settlements in litigation, especially when their opponents in civil matters may not be able to afford (or choose not to afford) comparable help. Others may think I don’t do enough to help plaintiffs (although my firm is happy to do so; we just aren’t asked very often), or what have you. Still others may subscribe to the lawyer’s dictum: Everyone is entitled to the best defense they can muster. We all have our individual standards about how to use or withhold our knowledge and skills, and that is as it should be.

 
