
A Sense of the Enemy


by Zachary Shore


  At first, this group was housed within the CIA’s division of medical services, but soon it migrated to the Directorate of Intelligence, the part of the agency that deals with analysis (as opposed to operations). The Center for the Analysis of Personality and Political Behavior recruited highly trained psychologists and other experts in the behavioral sciences to scrutinize foreign leaders’ biographies. The teams of psychobiographers were tasked with inspecting a leader’s early childhood and later life experiences, all in the hope of drafting a composite picture of that person’s character. With good reason, presidents and principals (the heads of American national security departments) wanted to know what made foreign leaders tick.

  For twenty-one years, Jerrold Post headed this division. In his book about psychobiography, he explains that his Center focused on the key life events that shaped each leader. Post makes the assumption behind the Center’s work explicit:

  Moreover, one of the purposes of assessing the individual in the context of his or her past history is that the individual’s past responses under similar circumstances are, other things being equal, the best basis for predictions of future behavior.23

  As I argued above, scrutinizing past behavior does not tell you what you truly need to know. It cannot reveal someone’s underlying drivers. At best it can provide reasonable predictions only if future conditions are sufficiently similar to prior conditions. Unfortunately, in international affairs, the most crucial decisions are typically made under dramatically new settings, when old patterns are being upended and standard procedures overthrown. At such times, what statesmen need is heuristics for discerning their opponent’s underlying drivers—the things that the other side wants most. Psychobiographies can be helpful in many realms, such as determining an individual’s negotiating style or understanding his personal quirks, but they are less valuable when statesmen need to anticipate an enemy’s likely actions under fresh circumstances.

  The history of twentieth-century conflicts has been marked by the inability to gain a clear sense of one’s enemies. Grasping the other side’s underlying drivers has been among the most challenging tasks that leaders have faced. The analysts discussed above were by no means fools. They were smart, sober-minded students of international affairs, but they sometimes lacked an essential component of policymaking: a deep appreciation for what drives one’s enemies. Much of their difficulty stemmed from two flawed assumptions: first, that the other side possessed a rigid, aggressive nature; second, that past behavior was the best predictor of future actions. Both assumptions not only proved untenable; they also helped to create a dynamic out of which conflict was more likely to flow.

  If the twentieth century saw frequent cases of the continuity heuristic, the twenty-first has begun with its own form of mental shortcuts: an excessive faith in numbers. Modern advances in computing, combined with increasingly sophisticated algorithms, have produced an irrational exuberance over our ability to forecast enemy actions. While mathematical measures can offer much to simplify the complex realm of decision-making, an overweighting of their value without recognizing their limitations will result in predictions gone horribly awry. The crux of those constraints rests upon our tendency to focus on the wrong data. And although that mental error is not new to the modern era, it has been magnified by modernity’s advances in technology. Our endless longing to use technology to glimpse the future might be traced back to the start of the 1600s, when a small boy wandered into an optics shop, fiddled with the lenses, and saw something that would change the world.

  9

  _____

  Number Worship

  The Quant’s Prediction Problem

  NO ONE KNOWS PRECISELY how Hans Lippershey came upon the invention. One legend holds that some children wandered into his spectacle shop, began playing with the lenses on display, and suddenly started to laugh. Tiny objects far away appeared as though they were right in front of them. The minuscule had become gigantic. Though the truth of that tale is doubtful, the story of the telescope’s invention remains a mystery. We know only that four centuries ago, on October 2, 1608, Hans Lippershey applied for a patent for a device that is still recognizable as a modern refracting telescope.1

  Not long after Lippershey’s patent, the device found its way to Pisa, where it was offered to the duchy for sale. Catching wind of this new invention, Galileo Galilei quickly obtained one of the instruments, dissected its construction, and redesigned it to his liking.2 Galileo intended it, of course, for stargazing, but his loftier intentions were not shared by the Pisans. This new tool had immediate and obvious military applications. Any commander who could see enemy ships at great distance or opposing armies across a battlefield would instantly gain a distinct advantage. That commander would, in effect, be looking forward in time, and, with that literal foresight, he could predict aspects of the enemy’s actions. The telescope offered its owner a previously unimaginable advantage in battle. It brought the invisible to light. It altered the perception of time. It presented a genuine glimpse into the future, beyond what the naked eye could see. We don’t know whether Lippershey, Galileo, or some other crafty inventor made the first sale of a telescope to a military, but when he did, that exchange represented one of the earliest mergers of Enlightenment science with the business of war. From that moment on, modern science has been searching for ways to extend its gaze into the future, and militaries have been eager to pay for it.

  In the seventeenth century, merely gaining an early glimpse of the enemy’s actions was enough to advantage one side over the other. By the twentieth century, strategists needed much more. They needed greater predictive power for anticipating enemy moves. Technology alone could not, and still cannot, fill that gap. Strategists have always needed to develop a sense of the enemy, but the craving for more concrete, reliable predictions has left militaries easily seduced by science. Lately, that longing has led them to focus on the wrong objective: predicting the unpredictable.

  The Numbers That Count

  The rush is on to quantify as much as possible and let the algorithms tell us what the future holds. While this method offers obvious advantages, it is not without serious pitfalls. In many realms of prediction, we often go astray when we focus on the facts and figures that scarcely matter, as Nate Silver has shown in his thoughtful, wide-ranging study, The Signal and the Noise. Silver is America’s election guru. He has rocketed to prominence for his successful forecasts of U.S. primary and general election results. In his book, Silver concentrates on those predictions reliant on large, sometimes massive, data sets—so-called “big data.” Silver himself dwells mainly in the realm of number crunchers. He quantifies every bit of data he can capture, from baseball players’ batting averages to centuries of seismologic records, from poker hands to chessboard arrangements, and from cyclone cycles to election cycles. In short, if you can assign a number to it, Silver can surely crunch it.

  After four years of intensive analysis, Silver concludes that big data predictions are not actually going very well. Whether the field is economics or finance, medical science or political science, most predictions are either entirely wrong or else sufficiently wrong as to be of minimal value. Worse still, the wrongness of so many predictions, Silver says, tends to proliferate throughout academic journals, blogs, and media reports, further misdirecting our attention and thwarting good science. Silver contends that these problems mainly result from our tendency to mistake noise for signals. The human brain is wired to detect patterns amidst an abundance of information. From an evolutionary perspective, the brain developed ways of quickly generalizing about both potential dangers and promising food sources. Yet our brain’s wiring for survival, the argument goes, is less well suited to the information age, when we are inundated with information every day. We cannot see the signal in the noise, or, more accurately put, we often fail to connect the relevant dots in the right way.

  Silver urges us to accept the fallibility of our judgment but also to enhance our judgment by thinking probabilistically. In short, he wants us to think like a “quant.” A quant—someone who seeks to quantify most of the problems in life—adheres to an exceedingly enthusiastic belief in the value of mathematical analysis. I use the term quant with respect, not simply because mathematical agility has never been my own strength and I admire this ability in others but also because I recognize the tremendous value that mathematics brings to our daily lives.

  Naturally, not everything is quantifiable, and assigning probabilities to nonquantifiable behaviors can easily cause disaster. Part of what makes Silver’s book so sensible is that he freely admits the value in combining mathematical with human observations. In his chapter on weather forecasts, he observes that the meteorologists themselves can often eyeball a weather map and detect issues that their own algorithms would be likely to miss. And when discussing baseball players’ future fortunes, Silver shows that the best predictions come when quants and scouts can both provide their insights. Software programs as well as human observations can easily go awry, and errors are most likely to occur when either the computer or the person is focused on the wrong data. If the software is designed to project a minor league pitcher’s future strikeouts but fails to include information on the weakness of the batters that pitcher faced, then the pitcher will be in for a rough ride when he reaches the major leagues. By the same token, scouts who assess a player’s promise by the athlete’s imposing physique might overlook some underlying flaws. Though he does not state it directly, Silver finds that scouts do better when they focus on pattern breaks. “I like to see a hitter, when he flails at a pitch, when he takes a big swing and to the fans it looks ridiculous,” one successful scout told Silver. “I like to look down and see a smile on his face. And then the next time—bam—four hundred feet!” There’s no substitute for resilience, and it can best be seen at those times when things don’t go as planned.3
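
  The pitcher example lends itself to a small sketch. Everything in the code below is invented for illustration (the 30 percent strikeout rate, the 0.8 opponent-quality factor, and the adjustment formula are assumptions, not Silver's model or any real projection system); it shows only how omitting the strength of the opposition flatters a raw projection.

```python
# A toy projection, with made-up numbers, showing how an omitted
# variable (opponent quality) inflates a forecast.

def naive_projection(minor_league_k_rate: float) -> float:
    """Assume the minor-league strikeout rate transfers unchanged."""
    return minor_league_k_rate

def adjusted_projection(minor_league_k_rate: float,
                        opponent_quality: float,
                        league_average_quality: float = 1.0) -> float:
    """Discount the rate by the quality of the batters actually faced.
    opponent_quality < 1.0 means weaker-than-average opposition."""
    return minor_league_k_rate * (opponent_quality / league_average_quality)

k_rate = 0.30          # struck out 30 percent of batters faced in the minors
weak_opponents = 0.80  # those batters were well below average (hypothetical)

print(f"naive:    {naive_projection(k_rate):.2f}")                     # 0.30
print(f"adjusted: {adjusted_projection(k_rate, weak_opponents):.2f}")  # 0.24
```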

  While prudent, thoughtful quantification can serve us well in many areas, it cannot be applied in every area. As a case in point, toward the close of his book, Silver turns to intelligence assessments, drawing specifically on the failure to predict the attacks on Pearl Harbor and 9/11. On the one hand, he argues that intelligence analysts must remain open to all possibilities, particularly by assigning probabilities to all imaginable scenarios, no matter how remote they might seem. On the other hand, he assumes that analyzing individuals is a less profitable endeavor. Silver writes: “At a microscopic level, then, at the level of individual terrorists or individual terror schemes, there are unlikely to be any magic bullet solutions to predicting attacks. Instead, intelligence requires sorting through the spaghetti strands of signals . . .” Of course it is true that we have no magic bullets. Statesmen do, however, possess ways of improving their odds. Rather than mining the trove of big data for patterns in their enemies’ behavior, or sorting through a sticky web of conflicting signals, statesmen can focus instead on the moments of pattern breaks. Again, it is obvious that this will not guarantee successful predictions, but it can help illuminate what the enemy truly seeks.

  As a quant, Silver is understandably less comfortable analyzing how individuals behave. His forte is calculating how groups of individuals are likely to behave over the long run most of the time. Here then is a crucial difference between the type of predictions made by Silver and his fellow quants and those predictions made by statesmen at times of conflict. Quantitative assessments work best with iterative, not singular, events. The financial investor, for example, can come out ahead after years of profits and losses, as long as his overall portfolio of investments is profitable most of the time. Depending on the arena, a good strategy could even be one that makes money just 60 percent of the time, a common benchmark in personal finance. The same is true of the poker player, baseball batter, or chess master. When the game is iterative, played over and over, a winning strategy just has to be marginally, though consistently, better than a coin flip. But leaders, in painful contrast, have to get it right this one time, before lives are lost. In the dangerous realm of international conflict, statesmen must be 100 percent right when it matters most. They cannot afford to repeat again and again the Nazi invasion of Russia or the American escalation in Vietnam. Unlike in competitive poker, the stakes in this setting are simply too high.
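
  A toy simulation makes the contrast between iterative and one-shot prediction explicit. The 60 percent edge, the 1,000-round horizon, and the trial count below are assumptions chosen purely for illustration: a modest but consistent per-round edge compounds into near-certainty over many rounds, while any single round remains close to a gamble.

```python
# A small simulation, with assumed parameters, contrasting iterative
# and one-shot success for a strategy that wins 60 percent of rounds.

import random

def net_result(edge: float, rounds: int) -> int:
    """Wins minus losses when each round succeeds with probability `edge`."""
    return sum(1 if random.random() < edge else -1 for _ in range(rounds))

random.seed(1)
trials = 2_000

# Iterative game: share of 1,000-round "careers" that end ahead.
ahead = sum(net_result(0.60, 1_000) > 0 for _ in range(trials)) / trials
# One-shot game: share of single rounds that succeed.
one_shot = sum(net_result(0.60, 1) > 0 for _ in range(trials)) / trials

print(f"ahead after 1,000 rounds: {ahead:.3f}")     # ~1.000
print(f"right in a single round:  {one_shot:.3f}")  # ~0.600
```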

  The political scientist Bruce Bueno de Mesquita is arguably the king of quants when it comes to predicting foreign affairs. Frequently funded by the Defense Department, Bueno de Mesquita insists that foreign affairs can be predicted with 90 percent accuracy using his own secret formula. Of course, most of his 90 percent accuracy likely comes from predictions that present trends will continue—which typically they do.
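
  A back-of-the-envelope calculation shows how much of that figure base rates alone could supply. The 90 percent continuity rate below is assumed for illustration, not measured: if present trends persist that often, a forecaster who always predicts continuation matches the headline accuracy while anticipating no pattern breaks at all.

```python
# Illustrative arithmetic only: overall accuracy of a forecaster who
# always calls continuations correctly and calls pattern breaks
# correctly at some hit rate.

def accuracy(continuity_rate: float, hit_rate_on_breaks: float) -> float:
    return continuity_rate * 1.0 + (1.0 - continuity_rate) * hit_rate_on_breaks

print(f"{accuracy(0.90, 0.0):.2f}")  # 0.90: no skill on the cases that matter
print(f"{accuracy(0.90, 0.5):.2f}")  # 0.95: real skill barely moves the total
```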

  The crux of Bueno de Mesquita’s model rests largely on the inputs to his algorithm. He says that in order to predict what people are likely to do, we must first approximate what they believe about a situation and what outcomes they desire. He insists that most of the information we need to assess their motives is already available through open sources. Classified data, he contends, are rarely necessary. On at least this score, he is probably correct. Though skillful intelligence can garner some true gems of enemy intentions, most of the time neither the quantity nor the secrecy of information is what matters most to predicting individual behavior. What matters is the relevant information and the capacity to analyze it.

  The crucial problem with Bueno de Mesquita’s approach is its reliance on consistently accurate, quantifiable assessments of individuals. A model will be as weak as its inputs. If the inputs are off, the output must be off—and sometimes dramatically so, as Bueno de Mesquita is quick to note on his own website: “Garbage in, garbage out.” Yet this awareness does not dissuade him from some remarkable assertions. Take, for example, the assessment of Adolf Hitler before he came to power. Bueno de Mesquita spends one section of his book, The Predictioneer’s Game, explaining how, if politicians in 1930s Germany had had access to his mathematical model, the Socialists and Communists would have seen the necessity of cooperating with each other and with the Catholic Center Party as the only means of preventing Hitler’s accession to the chancellorship.4 He assumes that Hitler’s opponents could easily have recognized Hitler’s intentions. He further assumes that the Catholic Center Party could have been persuaded to align against the Nazis, an assumption that looks much more plausible in a post–World War II world. In 1932, the various party leaders were surely not envisioning the future as it actually unfolded. Their actions at the time no doubt seemed the best choice in a bad situation. No mathematical model of the future would likely have convinced them otherwise. Assessments are only as good as the assessors, and quantifying bad assessments will yield useless, if not disastrous, results.
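
  A toy calculation illustrates the garbage-in, garbage-out worry. Nothing below reflects Bueno de Mesquita's actual algorithm or the real stakes of 1932; the probabilities and payoffs are invented solely to show how a small error in one estimated input can flip a simple expected-utility model's recommendation.

```python
# A toy expected-utility comparison with invented inputs, showing how
# a small estimation error flips the model's advice.

def expected_value(prob_success: float, payoff: float, cost: float) -> float:
    return prob_success * payoff - cost

# Option A: join a broad coalition; Option B: go it alone. All numbers assumed.
coalition = expected_value(prob_success=0.55, payoff=10.0, cost=4.0)  # ~1.5
alone = expected_value(prob_success=0.30, payoff=12.0, cost=2.0)      # ~1.6
print("recommends:", "coalition" if coalition > alone else "alone")   # alone

# Nudge one input by a plausible estimation error; the advice flips.
coalition = expected_value(prob_success=0.60, payoff=10.0, cost=4.0)  # ~2.0
print("recommends:", "coalition" if coalition > alone else "alone")   # coalition
```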

  None of this means that all efforts at prediction are pure folly. Bueno de Mesquita’s larger aim is worthy: to devise more rigorous methods of foreseeing behavior. An alternative approach to his quantitative metrics is to develop our sense for how the enemy behaves. Though less scientific, it could be far more profitable, and it is clearly very much in need.

  Quants are skilled at harnessing algorithms to recognize patterns and to spot pattern breaks. But their methods work best when their algorithms can scan big data sets of iterative events, focusing on the numbers that truly count. Anyone who has ever received a call from a credit card company alerting her to unusual activity on her account knows that MasterCard and Visa employ sophisticated algorithms to identify purchasing patterns and sudden deviations. This is a realm in which computers provide enormous added value. But in the realms where human behavior is less amenable to quantification, we must supplement number crunching with an old-fashioned people sense. It is here that meaningful pattern breaks can contain some clues. Perhaps surprisingly, within the heart of America’s defense establishment, one man and his modest staff have spent decades refining their strategic empathy. Their successes, as well as their failures, offer useful tips for those who would predict their enemies’ behavior.
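
  For a feel of how such an alert might work, here is a minimal sketch. Real card networks rely on far more sophisticated models; the three-standard-deviation rule and the sample purchase history below are assumptions for illustration only.

```python
# A minimal pattern-break detector: flag a purchase that deviates from
# the account's history by more than `threshold` standard deviations.
# Illustrative only; real fraud models are far more elaborate.

from statistics import mean, stdev

def is_pattern_break(history: list[float], new_amount: float,
                     threshold: float = 3.0) -> bool:
    mu, sigma = mean(history), stdev(history)
    return abs(new_amount - mu) > threshold * sigma

purchases = [24.0, 31.5, 18.0, 42.0, 27.5, 35.0, 22.0, 29.0]  # sample history
print(is_pattern_break(purchases, 30.0))   # False: fits the pattern
print(is_pattern_break(purchases, 950.0))  # True: sudden deviation
```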

  Yoda in the Pentagon

  In October 1973, Arab states attacked Israel with overwhelming strength in numbers. The Egyptians deployed some 650,000 soldiers—a massive military force in its own right. Syria, Iraq, and other Arab states added another quarter of a million troops. Against these 900,000 enemies Israel could muster no more than 375,000 soldiers, and 240,000 of those were from the reserves. But the war was really a battle of tanks, and on this score the numbers looked even more daunting. Israel’s 2,100 tanks confronted a combined Arab fleet of 4,500.5 On the northern front when the war began, Syria massed 1,400 tanks against 177 Israeli tanks—a crushing ratio of eight to one. Given the extraordinary disparity of force, after Israel recovered from initial losses and decisively won the war, most Western observers interpreted the conflict as proof of Israel’s unbreakable will to survive. Yet when Andrew Marshall and his staff analyzed the numbers, they saw something else entirely.

  Tucked into a nondescript section deep within the Pentagon’s labyrinthine rings, the Office of Net Assessment had just been created the previous year. Studying the war’s less glamorous details and drawing on the substantial research of others, Marshall and his team discovered an Egyptian army with a Soviet-style flaw. The entire military was astonishingly short on maintenance. When one of its tanks became damaged in battle, Egypt had no effective means for repairing it. Israel, in contrast, had well-trained technicians able to make rapid repairs. It turned out that on average Israeli tanks returned to battle three times, but Egyptian tanks were used only until damaged. In other words, the initial number of tanks was not the number that mattered.
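
  The arithmetic implied here is worth spelling out. The three-returns average comes from the passage itself; extending the used-once pattern from Egypt to the whole Arab fleet is a simplifying assumption made only to illustrate the point that repairability, not the starting inventory, set the effective balance.

```python
# Rough arithmetic under the assumptions stated above: effective tank
# engagements rather than raw inventory.

israeli_tanks, israeli_uses = 2_100, 3  # repaired and returned, ~3 uses each
arab_tanks, arab_uses = 4_500, 1        # used until damaged (assumption)

print(israeli_tanks * israeli_uses)  # 6300 effective engagements
print(arab_tanks * arab_uses)        # 4500 effective engagements
```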

 
