Expert Political Judgment

by Philip E. Tetlock


  10 For wider-ranging analyses of what went wrong for these models: L. Bartels and J. Zaller, “Presidential Vote Models: A Recount,” Political Science and Politics 34 (2001): 9–20; C. Wlezien, “On Forecasting the Presidential Vote,” Political Science and Politics 34 (2001): 25–32; J. Campbell, “The Referendum That Didn’t Happen: The Forecasts of the 2000 Presidential Election,” Political Science and Politics 34 (2001): 33–38; R. Erikson, “The 2000 Presidential Election in Historical Perspective,” Political Science Quarterly 116(1) (2001): 29–52.

  11 R. Jervis, “The Future of International Politics: Will It Resemble the Past?” International Security 16 (1992): 39–73; G. Almond and S. Genco, “Clouds, Clocks, and the Study of Politics,” World Politics 29 (1977): 489–522.

  12 D. Moynihan, Pandaemonium (New York: Oxford University Press, 1993).

  13 “No. 2 Official of the IMF to Step Down at Year’s End,” New York Times, May 9, 2001, C5.

  14 See D. Kenny, Correlation and Causality (New York: Wiley-Interscience, 1979).

  15 On the perils of “naive falsificationism” and the frequent justifiability of refusing to abandon hypotheses that have run aground on awkward evidence, see I. Lakatos, ed., Criticism and the Growth of Knowledge (Cambridge: Cambridge University Press, 1972), 9–101; F. Suppe, The Structure of Scientific Theories (Urbana: University of Illinois Press, 1974).

  16 See W. J. McGuire, “A Contextualist Theory of Knowledge: Its Implications for Innovation and Reform in Psychological Research,” Advances in Experimental Social Psychology 16 (1983): 3–87.

  17 S. Hawkins and R. Hastie, “Hindsight: Biased Judgment of Past Events after the Outcomes Are Known,” Psychological Bulletin 107 (1990): 311–27.

  18 J. Campbell and A. Tesser, “Motivational Interpretations of Hindsight Bias: An Individual Difference Analysis,” Journal of Personality 51 (1983): 605–20; some also argue that hindsight distortion will be most pronounced when better possible worlds fail to occur because people are motivated to avoid disappointment by portraying such worlds as impossible. See O. E. Tykocinski, D. Pick, and D. Kedmi, “Retroactive Pessimism: A Different Kind of Hindsight Bias,” European Journal of Social Psychology 32(4) (2002): 577–88. We did not find support for this argument.

  19 In chapter 6, defenders of hedgehogs try to trivialize the hindsight bias by arguing that, cognitive resources being finite, it is adaptive to wipe the mental slate clean after we have learned what happened.

  20 Although the two defenses are weakly correlated, some observers displayed both effects. This is possible if beliefs serve shifting mixtures of functions over time. Close-call counterfactuals may initially serve as shock absorbers that cushion disconfirmation bumps as we travel through history: “Oops, my most likely scenario of a hard-liner coup to save the USSR did not materialize, but it almost did.” Gradually, though, these close-call arguments become integral parts of our mental models of the world with which we must come to terms: “Mulling it over, I guess the USSR was doomed and the coup that tried to stave off the inevitable was fated to fail.” Such revised mental models build on and subtly modify the cause-effect reasoning that led to the original off-base forecast. This restructuring is often so seamless that observers feel as though they “knew all along” both why the outcome had to occur roughly as it did and why the future they once thought likely was fated not to occur. What happened can feel inevitable even though it was unexpected and the unexpected initially had to be explained away as an aberration.

  21 Suedfeld and Tetlock, “Individual Differences.”

  22 Many academics have endorsed these prescriptions (for a review, see P. E. Tetlock, “Social Psychology and World Politics”). And so have many intelligence analysts—foremost among them, Sherman Kent, who was famous for admonishing his colleagues to be skeptical of favorite sources and alert to the power of prejudices to bias assessments of evidence (S. Kent, Collected Essays, U.S. Government: Center for the Study of Intelligence, 1970, http://www.cia.gov/csi/books/shermankent/toc.html).

  23 E. R. May, Lessons of the Past: The Use and Misuse of History in American Foreign Policy (New York: Oxford University Press, 1973).

  CHAPTER 5

  Contemplating Counterfactuals

  FOXES ARE MORE WILLING THAN HEDGEHOGS TO ENTERTAIN SELF-SUBVERSIVE SCENARIOS

  The historian must … constantly put himself at a point in the past at which the known factors will seem to permit different outcomes. If he speaks of Salamis, then it must be as if the Persians might still win; if he speaks of the coup d’état of the Brumaire, then it must remain to be seen if Bonaparte will be ignominiously repulsed.

  —JOHAN HUIZINGA

  The long run always wins in the end. Annihilating innumerable events—all those which cannot be accommodated in the main ongoing current and which therefore are ruthlessly swept to one side—it indubitably limits both the freedom of the individual and even the role of chance.

  —FERNAND BRAUDEL

  Men use the past to prop up their prejudices.

  —A.J.P. TAYLOR

  THERE IS SOMETHING disturbing about the notion that history might turn out to be, as radical skeptics have indefatigably insisted, one damned thing after another. And there is something reassuring about the notion that people can, if they look hard enough, discover patterns in the procession of historical events and these patterns can become part of humanity’s shared endowment of knowledge. We need not repeat the same dreadful mistakes ad nauseam and ad infinitum. Here is a bedrock issue on which hedgehogs and foxes can agree: good judgment presupposes some capacity to learn from history.

  The agreement does not last long, however. The strikingly different intellectual temperaments that shaped thinking about the future in chapters 3 and 4 shape thinking about the past in chapter 5. Hedgehogs are still drawn to ambitious conceptual schemes that satisfy their craving for explanatory closure. And foxes are still wary of grand generalizations: they draw lessons from history that are riddled with probabilistic loopholes and laced with contingencies and paradoxes.

  Chapter 5 works from the premise that underlying all lessons that experts extract from history are implicit counterfactual assumptions about how events would have unfolded if key factors had taken different forms. If we want to understand why experts extract one rather than another lesson from history, we need to understand the preconceptions they bring to the analysis of what was possible or impossible at particular times and places. Chapter 5 also provides an array of evidence that suggests how powerful these preconceptions can be. We can do a startlingly good job of predicting how experts judge specific historical possibilities from broad ideological orientations. And these prediction coefficients are especially large among hedgehogs who are unembarrassed about approaching history in a top-down fashion in which they deduce what was plausible in specific situations from abstract first principles.

  Chapter 5 does not, however, confuse correlation with causality. It also relies on turnabout thought experiments—that manipulate the content of fresh discoveries from historical archives—to gauge experts’ willingness to change their minds. Reassuringly, most are prepared, in principle, to modify their counterfactual beliefs in response to new facts. But the effect sizes for facts are small and those for preconceptions large. Hedgehogs and foxes alike impose more stringent standards of proof on dissonant discoveries (that undercut pet theories) than they do on consonant ones (that reinforce pet theories). Moreover, true to character type, hedgehogs exhibit stronger double standards and rise more unapologetically to the defense of those standards.

  JUDGING THE PLAUSIBILITY OF COUNTERFACTUAL REROUTINGS OF HISTORY

  Learning from the past is hard, in part, because history is a terrible teacher. By the generous standards of the laboratory sciences, Clio is stingy in her feedback: she never gives us the exact comparison cases we need to determine causality (those are cordoned off in the what-iffy realm of counterfactuals), and she often begrudges us even the roughly comparable real-world cases that we need to make educated guesses. The control groups “exist”—if that is the right word—only in the imaginations of observers, who must guess how history would have unfolded if, say, Churchill rather than Chamberlain had been prime minister during the Munich crisis of 1938 (could we have averted World War II?) or if, say, the United States had moved more aggressively against the Soviet Union during the Cuban missile crisis of 1962 (could we have triggered World War III?).1

  But we, the pupils, should not escape all blame. A warehouse of experimental evidence now attests to our cognitive shortcomings: our willingness to jump the inferential gun, to be too quick to draw strong conclusions from ambiguous evidence, and to be too slow to change our minds as disconfirming observations trickle in.2 A balanced apportionment of blame should acknowledge that learning is hard because even seasoned professionals are ill-equipped to cope with the complexity, ambiguity, and dissonance inherent in assessing causation in history. Life throws up a lot of puzzling events that thoughtful observers feel impelled to explain because the policy stakes are so high. However, just because we want an explanation does not mean that one is within reach. To achieve explanatory closure in history, observers must fill in the missing counterfactual comparison scenarios with elaborate stories grounded in their deepest assumptions about how the world works.3

  That is why, cynics have suggested, it is so easy to infer specific counterfactual beliefs from abstract political orientations. The classic example is the recurring debate between hawks and doves over the utility of tougher versus softer influence tactics.4 Hawkish advocates of deterrence are convinced that the cold war would have lasted longer than it did if, instead of a Reagan presidency, we had had a two-term Carter presidency,5 whereas dovish advocates of reassurance are convinced that the cold war would have ended on pretty much the same schedule. Surveying the entire cold war, hawkish defenders of nuclear deterrence argue that nuclear weapons saved us from ourselves, inducing circumspection and sobriety in superpower policies. But dovish critics reply that we were extraordinarily lucky and that, if a whimsical deity reran cold war history one hundred times, permitting only minor random variations in starting conditions, nuclear conflicts would be a common outcome.6 The confidence with which observers of world politics announce such counterfactual opinions is itself remarkable. Whatever their formal logical status, counterfactual beliefs often feel factual to their holders. It is almost as though experts were telling us: “Of course, I know what would have happened. I just got back from a trip in my alternative-universe teleportation device and can assure you that events there dovetailed perfectly with my preconceptions.”

  Our first order of business was therefore to determine to what degree counterfactual reasoning is a theory-driven, top-down affair in which observers deduce from their worldviews what was possible at specific times and places. Is the appropriate mental model the covering-law syllogism in which the major premise is “this generalization about societies, economies, or international relations is true,” the minor premise is “this generalization covers this case,” and the conclusion is “this generalization tells me what would have happened if details of the case had been different”? Or is counterfactual reasoning a messy bottom-up affair in which observers often surprise themselves and discover things in the hurly-burly of history that they never expected to find? Do they often start out thinking an outcome inconceivable but, in the light of new evidence, change their minds?

  Common sense tells us that each hypothesis must capture some of the truth. If we did not rely on our preconceptions to organize the past, we would be hopelessly confused. Everything would feel unprecedented. And if we relied solely on our preconceptions, we would be hopelessly closed-minded. Nothing could induce us to change our minds. Common sense can only take us so far, though. There is no substitute for empirical exploration of how the mix of theory-driven and data-driven reasoning varies as a function of both the cognitive style of the observer and the political content of the counterfactual.

  This chapter tests two key hypotheses. First, hedgehogs should be drawn to more top-down, deductive arguments, foxes to more bottom-up inductive arguments. It should thus be easier to predict hedgehogs’ reactions to historical counterfactuals from their ideological orientation than to predict foxes’ reactions from theirs. Second, counterfactual arguments are logically complex. One can agree with some parts of subjunctive conditionals and disagree with others. Consider: “If Stalin had survived his cerebral hemorrhage in March 1953, but in an impaired state of mind, nuclear war would have broken out soon thereafter.” An observer could concede the mutability of the antecedent (Stalin could have survived longer if his medical condition had been different) but still insist that even a cowed Politburo would have blocked Stalin from acting in ways guaranteed to kill them all. Hence the observer would disagree with the implicit connecting principles that bridge antecedent and consequent. This analysis suggests that counterfactual reasoning is a two-stage affair in which the first stage is sensitive to historical details bearing on the mutability of antecedents (is there wiggle room at this juncture?) and the second stage is dominated by theory-driven assessments of antecedent-consequent linkages and long-term ramifications (what would be the short- and long-term effects of the permissible wiggling?).

  To test these hypotheses, we needed to satisfy an array of measurement preconditions in each historical domain investigated. The Methodological Appendix itemizes these preconditions, including reliable and valid measures of cognitive style, of ideological or theoretical convictions, and of reactions to specific counterfactual scenarios that tap into each possible line of logical defense against dissonant counterfactual scenarios. The next section summarizes the historical laws at stake in each domain, the counterfactual probes selected for provoking irritated rejection from believers in those laws, and the principal findings.

  History of the USSR

  Competing Schemas. Conservative observers viewed the Soviet state, from its Bolshevik beginnings, as intrinsically totalitarian and oppressively monolithic. Stalinism was no aberration: it was the natural outgrowth of Leninism. Liberal observers subscribed to more pluralistic conceptions of the Soviet polity. They dated cleavages between doctrinaire and reformist factions of the party back to the 1920s and they saw nothing foreordained about the paths taken since then.7 These observers suspected that the system had some legitimacy and that dissolution was not the inevitable result of Gorbachev’s policies of glasnost and perestroika.

  Counterfactual Probes. The competing schemas carry starkly different implications for the acceptability of specific close-call scenarios. Once the Soviet Union comes into existence in 1917, conservatives see far less flexibility than do liberals for “rewriting” history by imagining what might have happened had different people been in charge of the party apparatus: counterfactuals such as “If the Communist Party of the Soviet Union had deposed Stalin in the early 1930s, the Soviet Union would have moved toward a kinder, gentler version of socialism fifty years earlier than it did,” or “If Malenkov had prevailed in the post-Stalin succession struggle, the cold war would have ended in the 1950s rather than the 1980s,” or “If Gorbachev had been a shrewder tactician in his pacing of reforms, the Soviet Union would exist today.” Conservatives tend to believe only powerful external forces can make a difference and are thus receptive only to counterfactuals that front-load big causes: “Were it not for the chaos and misery of World War I, there would have been no Bolshevik Revolution” or “Were it not for Reagan’s hard-line policies, the cold war would not have ended as peacefully and quickly as it did.”

  TABLE 5.1

  Correlations between Political Ideology and Counterfactual Beliefs of Area Study Specialists

  Counterfactual                                         Antecedent   Antecedent/Consequent Linkage

  About Soviet Union

  No WWI, no Bolshevik Revolution                           .25           –.57
  Longer life to Lenin, no Stalinism                        .13            .68
  Depose Stalin, kinder, gentler Communism                  .66            .70
  Malenkov prevails, early end to cold war                  .17            .71
  No Gorbachev, CPSU has conservative shift                –.16            .30
  No Reagan, no early end to cold war                      –.30           –.74
  A shrewder Gorbachev, Soviet Union survives               .11            .51

  About South Africa

  No de Klerk, still white-minority rule                    .15           –.42
  No Mandela, still white-minority rule                     .08           –.10
  No Western sanctions, still white-minority rule           .06            .48
  No demographic pressures, still white-minority rule       .11            .15
  No Soviet collapse, fewer white concessions               .18           –.51

  Note: Larger positive correlations, stronger liberal endorsement.

  Findings. Table 5.1 shows that ideology proved a potent predictor of resistance to dissonant close-call counterfactuals, but it was primarily a predictor of resistance grounded in the more abstract, “theoretical” belief system defenses that either challenged connecting principles or invoked second-order counterfactuals. Sovietologists who subscribed to opposing images of the Soviet Union rarely disagreed over the mutability of antecedents: whether Malenkov could have won enough Politburo support in 1953 to prevail or whether Gorbachev could have failed to win enough support to prevail in 1985. To get a good brawl going among Sovietologists, it was usually necessary to put the spotlight on the large-scale historical consequences of these small-scale modifications of antecedent conditions: whether Malenkov would have brought about a more rapid end to the cold war or an alternative to Gorbachev could have done a better job of holding the Soviet Union together.8

  Discussion. Historical observers draw on different criteria in judging different components of counterfactual arguments. The initial “decision” of how to evaluate the “if” premise of what-if scenarios often appears to be under the control of strong narrative expectations grounded in assessments of particular historical players confronting particular challenges. People find it hard to resist being lured into what-if thoughts when this narrative coherence is violated by something surprising: when they learn the result of a close-call vote or learn that unusually tolerant or paranoid leaders have come to power or that commanding figures have fallen from grace or that healthy people have suddenly dropped dead. Our natural response to these violations of our expectations is to “undo” the aberration mentally, to wonder how things would have unfolded but for…. However, once people have been lured into counterfactual cogitation, they need to rely on increasingly abstract, ideology-laden beliefs about cause and effect to figure out the longer-term significance of these developments.9

 
