The Doomsday Machine

by Daniel Ellsberg

  Eisenhower’s reassurances and apparent calm about the challenge seemed to confirm the notion of him as a retired grandfather, out of touch with reality, focused only on his golf game. That was the image shared by everyone I came to meet at RAND. It was paired with the notion that our own sponsoring organization, the Air Force—which certainly didn’t underrate the prospect of a vast Soviet superiority in ICBMs—didn’t seem able bureaucratically to rise to that threat in an appropriate or effective way. That is, it was resisting or dragging its feet in adopting the recommendations that RAND had been making for several years at this point and which seemed all the more urgent after Sputnik.

  To my new RAND colleagues, the projected Soviet ICBM buildup looked unmistakably like an urgent effort, with a startlingly high chance of success, to acquire the capability to disarm SAC’s power to retaliate. Such a Soviet capability, and even the costly crash effort to achieve it, destroyed the basis for confidence in nuclear deterrence. At least, it did for anyone reading these studies who shared the widely accepted Cold War premise that the Soviets aimed ultimately at world domination. That included everyone I worked closely with at RAND. And in light of both the intelligence estimates that became available to me as I acquired security clearances and the views of my highly intelligent colleagues, it came to include me.

  Within weeks of my arrival in 1958, I found myself immersed in what seemed the most urgent concrete problem of uncertainty and decision-making that humanity had ever faced: averting a nuclear exchange between the Soviet Union and the United States. On the basis of the RAND studies, the challenge looked both more difficult and more urgent than almost anyone outside RAND seemed able to imagine. In the last years of the decade, nearly all the departments and individual analysts at RAND were obsessed with solving the single problem of deterring a Soviet nuclear attack on U.S. retaliatory forces and society, in the next few years and beyond, by assuring that a large U.S. ability to retaliate with nuclear weapons would survive any such attack. The concentration of focus, the sense of a team effort of the highest urgency, was very much like that of the scientists in the Manhattan Project.

  And the center of this obsessive ideation was the economics department, which I joined. In my first week as a summer consultant in 1958, I was assigned to be the rapporteur of a discussion group on responses to the strategic threat, which included Albert Wohlstetter, Harry Rowen, Andy Marshall, Alain Enthoven, and Fred Hoffman, the key strategic analysts in the economics department, as well as Bill Kaufmann from social science, and Herman Kahn from physics.

  From my academic life, I was used to being in the company of very smart people, but it was apparent from the beginning that this was as smart a bunch of men as I had ever encountered. That first impression never changed (though I was to learn, in the years ahead, the severe limitations of sheer intellect). And it was even better than that. In the middle of the first session, I ventured—though I was the youngest, assigned to be taking notes, and obviously a total novice on the issues—to express an opinion. (I don’t remember what it was.) Rather than showing irritation or ignoring my comment, Herman Kahn, brilliant and enormously fat, sitting directly across the table from me, looked at me soberly and said, “You’re absolutely wrong.”

  A warm glow spread throughout my body. This was the way my undergraduate fellows on the editorial board of the Harvard Crimson (mostly Jewish, like Herman and me) had routinely spoken to each other; I hadn’t experienced anything like it for six years. At King’s College, Cambridge, or in the Society of Fellows, arguments didn’t remotely take this gloves-off, take-no-prisoners form. I thought, “I’ve found a home.”

  And I had. I loved RAND, where I ended up spending ten years, in two hitches, the second when I came back from Vietnam in 1967. Much, I imagined, like members of a religious order would, I shared with my colleagues a sense of brotherhood, living and working with others for a transcendent cause.

  In fact, those former Manhattan Project scientists who stayed on in weapons work, as well as their successors at the nuclear weapons labs, are often described by others (not admiringly) as a secular priesthood. In part that’s a matter of their knowledge of secrets of the universe, arcana not to be shared with the laity: the sense of being an insider, the seductions of secrecy, to be counseling men of power. An article on the new “military intellectuals” likened RAND consultants in Washington and the Pentagon, moving invisibly across bureaucratic boundaries opaque to others, to the Jesuits of old Europe, moving between courts, serving as confessors to kings. But above all, precisely in my early missile-gap years at RAND and as a consultant in Washington, there was our sense of mission, the burden of believing we knew more about the dangers ahead, and what might be done about them, than did the generals in the Pentagon or SAC, or Congress or the public, or even the president. It was an enlivening burden.

  Materially, we led a privileged life. I started at RAND, just out of graduate study, at the highest salary my father had ever attained as a chief structural engineer. Working conditions were ideal, the climate was that of Southern California, and our offices were a block from the Santa Monica Beach.

  But my colleagues were driven men. They shared a feeling—soon transmitted to me—that we were in the most literal sense working to save the world. A successful Soviet nuclear attack on the United States would be a catastrophe, and not only for America. It was taken for granted that at some Russian equivalent of RAND in the Soviet Ministry of Defense or Strategic Rocket Forces, a similar team was working just as urgently and obsessively to exploit their lead in offensive forces, if not by a surprise attack then by compelling blackmail against the United States and its NATO allies. We were rescuing the world from our Soviet counterparts as well as from the possibly fatal lethargy and bureaucratic inertia of the Eisenhower administration and our sponsors in the Air Force.

  The work was intense and unrelenting. The RAND building’s lights were kept on all night because researchers came in and out at all hours, on self-chosen schedules. At lunch, over sandwiches on courtyard patios inside RAND, we talked shop—nothing else. During the cocktail interval at the frequent dinners that our wives took turns hosting, two or three men at a time would cluster in a corner to share secret reflections, sotto voce; the women didn’t have clearances. After the meal the wives would go together into the living room—for security reasons—leaving the men to talk secrets at the table.

  There were almost no cleared women professionals at RAND then. The only exceptions I remember were Nancy Nimitz, a Soviet specialist who was the daughter of Fleet Admiral Chester Nimitz; Alice Hsieh, a China analyst; and Albert Wohlstetter’s wife, Roberta, a historian who was then working on a study of how the Japanese had achieved a surprise attack on our Navy at Pearl Harbor and our Air Force in the Philippines in December 1941. Her draft findings, which we all read intensely that summer, greatly influenced our thinking and our anxieties, as a premonition of exactly what we were trying to prevent.

  My first summer there I worked seventy-hour weeks, devouring secret studies and analyses till late every night, to get up to speed on the problems and the possible solutions. I was looking for clues as to how we could frustrate the Soviet versions of RAND and SAC, and do it in time to avert a nuclear Pearl Harbor. Or postpone it. From the Air Force intelligence estimates I was newly privy to, and the dark view of the Soviets, which my colleagues shared with the whole national security community, I couldn’t believe that the world would long escape nuclear holocaust. Alain Enthoven and I were the youngest members of the department. Neither of us joined the extremely generous retirement plan RAND offered. Neither of us believed, in our late twenties, we had a chance of collecting on it.

  I remember one August night in particular, sitting in the office assigned to me, which looked out over the ocean. It was a moonless night, close to midnight. The ocean was dark outside my windows. I was reading an analysis of the optimal conditions, from a Soviet point of view, for a surprise attack. A key point, I read, would be for them to accompany ICBM and bomber attacks on SAC bases deep in our interior with carefully coordinated attacks by cruise missiles from submarines onto bases near our oceans and on command centers (outflanking our radar in the north and providing no warning, with only minutes of flight time).

  Since their submarines had to be on the surface for this, and considering various weather conditions, the ideal time for the attack, I read, would be in August, about midnight on a moonless night. I looked out the window at the blackness of the sea, then I glanced at my watch. I literally felt a shiver and the hair on my neck rose.

  In the circumstances described by these studies and by intelligence estimates (especially those of the Air Force), deterrence seemed imperative—and uncertain. According to these Top Secret estimates, we faced a powerful enemy making costly efforts to exploit the potential of nuclear weapons totally to disarm us and to gain unchallenged global dominance. No non-nuclear U.S. military capability could promise to survive such an attack and respond to it on a scale that would reliably deter an enemy so determined and ruthless. Nothing could do so other than a reliable capability for devastating nuclear retaliation: capability that would assuredly survive a well-designed nuclear first strike.

  As Wohlstetter emphasized in his briefings to the Air Force, our ability to deter a Soviet attack on the United States was not measured by the scale of our offensive forces in place before the war, but by what the Soviets could foresee would be our “second-strike capability” to retaliate to their first strike. How much survivable destructive capacity would it take to deter them? That would depend on the circumstances and the alternatives, Wohlstetter suggested. Any potential alternatives to the Soviets’ own first strike might, at a particular moment, look very ominous to the USSR: perhaps crushing defeat in a regional war, or a possible U.S. first strike in escalation of a conflict in Europe. Like us, the Soviets might be presented with a choice among grave risks. In the conclusion of RAND’s Top Secret “vulnerability study” R-290, of which he was the principal author in 1956, Wohlstetter asserted that our then-programmed strategic force

  cannot ensure a level of destruction as high as that which Russia sustained in World War II—a destruction from which it has more than recovered in a few years. This is hardly the “crystal clear” deterrent we might need in some foreseeable circumstance.

  The implication—never questioned by anyone at RAND while I was there—was that adequate deterrence for the United States demanded a survivable, assured second-strike capability to kill more than the twenty million Soviet citizens who had died in World War II. That meant we were working to assure the survival under attack of a capability for retaliatory genocide, though none of us ever thought of it in those terms for a moment. Truly, in view of my strong feelings against the indiscriminate bombing of cities by both sides in World War II, there was a terrible irony to my working for the Air Force on studies aimed at threatening the Russians with the ultimate in terror bombing if they should attack us. But there was a consistent logic to it. From the analyses by men who became my mentors and closest colleagues, I had come to believe—like Szilard and Rotblat a generation earlier—that this was the best, indeed the only way, of increasing the chance that there would be no large nuclear war in the near future.

  When my former Harvard faculty advisor heard in 1959 that I was going back to RAND as a permanent employee, he told me bitterly that I was “selling out” (as an economist) for a high salary. I told him that after what I had learned the previous summer at RAND, I would gladly work there without pay. It was true. I couldn’t imagine a more important way to serve humanity.

  CHAPTER 2

  Command and Control

  Managing Catastrophe

  For my own contribution at RAND, reflecting my long-term focus on decision theory, I chose to specialize in a subject that seemed up to this point understudied in relation to its importance: the command and control of nuclear retaliatory forces by senior military officers and especially by the president.

  Most of my colleagues were studying the vulnerability, and how to reduce it, of strategic nuclear weapons, bases, and vehicles. I joined a few others who were examining the vulnerability and reliability of the military’s “nervous system”: command posts, information and decision-making processes at different levels, communications, warning systems, and intelligence.

  It was widely accepted that the decision whether and when to initiate launch of U.S. nuclear forces against the Soviet Union under any circumstances should be made by the president, or the highest surviving authority. How he might arrive at that decision and how it would get implemented were concrete questions that demanded highly secret empirical knowledge. Nevertheless, I was especially drawn to study this particular command problem not only because of its obvious importance but also because it exemplified and drew on everything I had analyzed in my graduate study of decision-making under uncertainty. It would be the transcendent, and conceivably the last, decision under uncertainty ever made by a national leader.

  Moreover, in my initial reading of the key RAND study R-290, “Protecting U.S. Power to Strike Back in the 1950’s and 1960’s,” a word leaped out at me that had been the focus of my own thinking at Harvard that year: “ambiguity.” The study, whose principal author was Albert Wohlstetter, alongside Harry Rowen and Fred Hoffman, noted that some of our plans depended on our having “strategic” warning of an imminent enemy attack—an intelligence warning received prior to any enemy weapons having been launched against us.

  But planning on strategic warning is dangerous, and this cannot be overemphasized.… If we are to be realistic and accurate before the event, the most positive answer we can ever expect to the question, “Are the Soviets going to attack us?” is “Perhaps.” And the answers to the other important but vexing questions, “When?” and “Where?” will be even more uncertain.… The real question, however, is not only how early we will have these signals but how unambiguous they will be. We can state, unequivocally, that they will be equivocal.… The ambiguity of strategic warning complicates the problem of decision.

  No other formulation of a decision problem—this one, the most important in human history!—could have caught my attention so forcefully. “Ambiguity” was not a term then used in academic discussions of risk and uncertainty. I was especially struck to see it in a classified study, because I was in the process of introducing it academically as a technical term, referring to subjective uncertainty when experience was lacking, information was sparse, the bearing of evidence was unclear, the testimony of observers or experts was greatly in conflict, or the implications of different types of evidence were contradictory. (I conjectured—as was later borne out in many laboratory experiments—that such uncertainty could not be represented by a single, precise numerical probability distribution, either in subjects’ minds or as reflected in their behavior, even though they did not regard it as “totally uncertain.”)
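  This notion of ambiguity later became famous in decision theory as the “Ellsberg paradox.” As an illustrative sketch only (the one-urn setup and numbers below are the standard textbook example, not taken from this text), the following checks that no single probability assignment over the unknown outcomes can rationalize the typical ambiguity-averse pattern of choices—exactly the sense in which such uncertainty resists representation by one precise distribution:

```python
# Classic one-urn setup: 90 balls, 30 known to be red, the other 60
# split between black and yellow in unknown proportions.
# Typical subjects prefer betting on red over black (bet A over B),
# yet prefer "black or yellow" over "red or yellow" (bet D over C).
# This sketch verifies that no single probability for black makes
# both preferences consistent with expected-utility maximization.

def rationalizes_both(p_black, p_yellow):
    """True if this distribution makes both observed choices EU-consistent."""
    p_red = 1 / 3  # 30 of 90 balls are red, by construction
    prefers_a_over_b = p_red > p_black
    prefers_d_over_c = (p_black + p_yellow) > (p_red + p_yellow)
    return prefers_a_over_b and prefers_d_over_c

# Scan every integer split of the 60 black/yellow balls.
consistent = [
    k for k in range(61)
    if rationalizes_both(k / 90, (60 - k) / 90)
]
print(consistent)  # -> []: no single distribution fits both choices
```

  The scan comes back empty because the first preference requires the probability of black to be below 1/3 while the second requires it to be above 1/3—the contradiction at the heart of the paradox.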

  The uncertainty of strategic warning described here seemed likely to fall into that category. And Wohlstetter went on to point out that the same problem arose even in the context of “tactical” warning: indications from long-distance ground radars or infrared satellites that enemy planes or missiles had left their launch sites, headed for the United States, before any of them had arrived on target.

  The radars of the Arctic Distant Early Warning Line (DEW Line) had more than once, I soon learned, been fooled by a flock of high-flying geese into warning that Soviet bomber planes were coming toward us over the North Pole. In the pre-ICBM era, that still allowed hours in which to discover the error, and meanwhile to get our planes on alert off the ground. But just a year after I joined RAND, the Ballistic Missile Early Warning System (BMEWS), a higher-tech radar and computer system designed to detect incoming ICBMs, reported in its first week of operation that a missile attack was under way. That called for decisions in under fifteen minutes.

  On October 5, 1960, some of the highest industry officials associated with the Air Force’s technical systems, including Tom Watson, head of IBM, were visiting North American Aerospace Defense Command (NORAD), inside Cheyenne Mountain, Colorado. Pete Peterson, later Nixon’s secretary of commerce and at this time executive vice president of Bell & Howell, was sitting in the commander’s chair on the command balcony confronting the huge world map. That bit of role playing was a little treat for honored visitors. In his book Command and Control, Eric Schlosser tells what happened that day, pretty much as I heard it at the time (as highly classified gossip) along with everyone else working on the issue.

  The first BMEWS radar complex, located at Thule Air Base, Greenland, had come online that week, and the numerical threat levels of the new warning system were being explained to the businessmen.

  If the number 1 flashed in red above the world map, unidentified objects were traveling toward the United States. If the number 3 flashed, the threat level was high; SAC headquarters and the Joint Chiefs of Staff had to be notified immediately. The maximum threat level was 5—a computer-generated warning, with a 99.9 percent certainty, that the United States was under attack. As Peterson sat in the commander’s chair, the number above the map began to climb. When it reached 4, NORAD officers ran into the room. When it reached 5, Peterson and the other executives were quickly escorted out and put in a small office. The door was closed, and they were left there believing that a nuclear war had just begun.

  One of the businessmen in the room, Chuck Percy, then president of Bell & Howell and later a three-term senator from Illinois, “recalled a sense of panic at NORAD.” That’s the way I heard the story that month, from Air Force colonels—contrary to Pentagon assurances, when word leaked out, that the warning hadn’t been taken seriously. One thing that led some at NORAD to find the warning somewhat more “ambiguous” than the computer’s 99.9 percent certainty was that Khrushchev was in New York for the United Nations that week. It turned out that the BMEWS radar signals were bouncing off the moon as it rose over Norway. The designers, as I heard it in the Pentagon, had figured that the radar would reach the moon, but they didn’t think the return echo would be so strong as to look like incoming missiles. Everyone makes mistakes.

 
