
The Cold War


by Robert Cowley


  The Soviets got off to a more measured start, developing and fielding V-2 derivatives that culminated in the 725-mile-range R-5, the first Soviet missile to carry a nuclear warhead. It was dubbed SS-3 Shyster by NATO. (The actual model designations were unknown to Western intelligence, and NATO identified Soviet systems by an elaborate alphanumeric system: “SS” stood for surface-to-surface missile; “Shyster” was the code name of the third such system identified, and so on. The names were chosen arbitrarily, and many were whimsical or ironic.) Like the Americans, the Soviets hedged their bets by developing rocket-boosted, ramjet-powered, supersonic cruise missiles, one by the Myasishchev design bureau and the other by Semyon Lavochkin. The more advanced Lavochkin design followed a trajectory strikingly parallel to that of the Navaho, pioneering the use of novel materials, notably titanium, and producing much useful test data but no operational vehicle. As with Navaho and Snark, both programs were canceled when ICBMs made them superfluous.

  Following the R-5's success, Korolev, by now in effective charge of Soviet rocket design, pressed for the development of a genuinely intercontinental missile, the R-7, nicknamed Semyorka (a diminutive of seven) and named SS-6 Sapwood by NATO. The R-7's boosted fission warhead was considerably heavier than the Atlas's thermonuclear weapon, and the thrust requirement correspondingly greater. Moreover, Korolev's team was handicapped by a lack of alloys suitable for the hundred-ton-thrust engine their calculations called for. They responded by clustering rockets in groups of four, using common oxidizer and fuel turbopumps, and surrounding a core cluster with four booster clusters. Each rocket produced only about twenty-three tons of thrust, somewhat less than that of the V-2, but they were lighter and far more efficient, and the total thrust was more than sufficient (by comparison, the Atlas's booster rockets, direct descendants of the Navaho engine, produced sixty tons of thrust each). The first successful R-7 launch was in August 1957, two months later than that of the Atlas.

  A comparison of the two missiles is instructive. Both used liquid oxygen and kerosene for propellant. Both had short careers as operational ICBMs, a consequence of the extended launch sequence that liquid oxygen demanded. Both would enjoy spectacular careers as space boosters, but there the similarities end. Paraphrasing Sovietologist Steven Zaloga, if Atlas's engineering was Gothic in lightness, Semyorka's exhibited Romanesque raw strength. Atlas had just three engines; the central rocket elegantly gimballed for control. Semyorka had twenty and was controlled by comparatively primitive vernier rockets: The thruster nozzles were fixed, and adjustments in pitch, roll, and yaw were provided by small auxiliary rockets, a simple but inherently inefficient solution, since the vernier rockets required independent fuel and control systems. The Russian missile's skin would support a man's weight; that of Atlas was so thin that the internal pressure of the propellants was all that kept it from collapsing.

  More telling is a common design feature that underlines the incredible urgency with which both missiles were developed, an urgency rendered more terrifying by each power's ignorance of its rival's capabilities and intentions. Both missiles were stage-and-a-half vehicles; that is, they rose from the pad with all engines firing, Semyorka discarding the four outer rocket clusters and their fuel tankage, and Atlas jettisoning two of its three engines after the initial boost phase. This was less than optimal for reasons basic to rocket engineering. Maximum thrust is required at liftoff when weight is greatest; then, as fuel is expended and as atmospheric drag falls off with increasing altitude, the thrust requirement diminishes. It therefore makes sense to lift off with a separate lower stage that drops off when its fuel is expended, leaving its weight behind. The upper stage or stages then proceed to apogee powered by smaller, more efficient engines. The advantages of staging are enhanced by the fact that optimum rocket-nozzle length varies inversely with air density. In the dense air of the lower atmosphere, the exhaust gases need expand only until their pressure matches that of the surrounding air, so a relatively short bell-shaped nozzle serves best at liftoff; in the near vacuum of higher altitudes, a longer nozzle that lets the gases expand further extracts more thrust from the same propellant. It is for this reason that high-performance satellite and space boosters, designed to loft the maximum payload by minimizing airframe and fuel weight, use multiple stages, typically three, with each successive stage's engines having progressively longer nozzles. Solid-fueled ICBMs also use multiple stages, but for somewhat different reasons, as we shall see.
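
  The advantage of dropping dead weight can be made concrete with the ideal (Tsiolkovsky) rocket equation, which gives the speed a rocket can reach from its exhaust velocity and the ratio of its initial mass to its burnout mass. The short Python sketch below compares a notional single-stage vehicle with a two-stage vehicle of the same total mass and engine performance; every figure in it is an illustrative round number, not an actual Atlas or R-7 value.

import math

G0 = 9.81  # standard gravity, m/s^2

def delta_v(isp_s, m_initial, m_final):
    """Ideal velocity gained while burning from m_initial down to m_final (kg)."""
    return isp_s * G0 * math.log(m_initial / m_final)

payload = 1_500        # kg: warhead, reentry vehicle, and guidance (illustrative)
propellant = 100_000   # kg: total propellant carried (illustrative)
structure = 10_000     # kg: tanks and engines (illustrative)

# Single stage: the empty tanks and engines ride all the way to burnout.
single = delta_v(300, payload + structure + propellant, payload + structure)

# Two stages with the same totals: the first stage's dead weight is dropped
# as soon as its propellant is spent.
s1_prop, s2_prop = 75_000, 25_000
s1_struct, s2_struct = 7_500, 2_500
m0 = payload + s1_struct + s2_struct + s1_prop + s2_prop
m1 = m0 - s1_prop      # first-stage burnout
m2 = m1 - s1_struct    # first stage discarded
m3 = m2 - s2_prop      # second-stage burnout
two_stage = delta_v(300, m0, m1) + delta_v(300, m2, m3)

print(f"single stage: {single:,.0f} m/s")    # roughly 6,700 m/s
print(f"two stages:   {two_stage:,.0f} m/s") # roughly 9,100 m/s

  With nothing changed but the point at which dead weight is discarded, the two-stage vehicle gains well over two thousand meters per second.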

  Both Semyorka and Atlas were magnificent designs, sufficiently reliable for manned space flight: Yuri Gagarin rode Semyorka into orbit, and Atlas orbited John Glenn (and Enos the chimpanzee before him). But both rockets were first and foremost ICBMs, and their designs were finalized so early that the engineers were not certain liquid oxygen and kerosene would ignite spontaneously in the hard vacuum of space. They therefore accepted substantial penalties of weight and nozzle inefficiency to place hardware on the launch pad capable of lofting a nuclear warhead to intercontinental ranges at the earliest possible date: Atlas in September 1959 and Semyorka that December. Further underlining the haste with which the early ICBMs were developed, the American Titan I—a two-stage missile, still using liquid oxygen and kerosene but stored belowground in a reinforced concrete silo and fueled by high-speed pumps that reduced launch time to fifteen minutes—became operational at about the same time as Semyorka. That all of these competing and highly expensive systems, any one of which could have filled the strategic bill, were rushed to deployment at the same time speaks for itself. Both the Soviet Union and the United States had to be absolutely certain that they had at least one system that actually worked, reliably and on short notice. It is worth noting that the strategic backdrop to this period of frenzied technological development included, to hit the high points, the 1948 Berlin Blockade, the Korean War, and the Hungarian uprising and Suez crisis of 1956. The last of these provided the stage for Nikita Khrushchev's famous and, as we now know, empty “missile rattling” threats.

  Atlas and Semyorka, in any case, were recognized as interim solutions long before they achieved operational status. Their extended launch sequences rendered them vulnerable to preemptive attack, and they could be held ready to fire only briefly. Their silo-based successors depended on propellants that did not require refrigeration and could be stored in the missile's tanks for extended periods. The American Titan II, which entered service in 1964, burned unsymmetrical dimethyl hydrazine (a volatile and environmentally nasty fuel) with nitrogen tetroxide (an oxidizer that was more volatile and nastier still); the Soviet R-16 (NATO designation SS-7 Saddler), which entered service the same year, was powered by hydrazine and red fuming nitric acid, both dangerous and difficult to work with. Both fuel-oxidizer combinations are highly volatile and hellishly toxic; their sole virtue was that they could be stored in the missile's tanks without refrigeration. That they were used at all speaks volumes for the strategic imperative.

  The ideal, of course, was a missile that could be stored indefinitely in a silo, ready to fire, a requirement that no liquid fuel combination could satisfy. The answer lay in solid propellants, rocket fuels that could be poured into a mold to solidify and remain inert until ignited, requiring no pumps, no corrosive liquids, and no refrigeration. The solution was found in asphalt-stabilized, perchlorate-based solid propellants, discovered by engineers at the Guggenheim Aeronautical Laboratory of the California Institute of Technology in 1942 and subjected to accelerated development as the Cold War got under way. By the late 1950s, these propellants were sufficiently stable and had sufficiently long shelf lives for operational use. Their main drawback, like that of black powder, was low specific impulse: They produced impressive initial acceleration but yielded significantly less thrust per pound of fuel per second than liquid propellants. The solution was to use multiple stages. The deployment in 1962 of the first operational solid-fueled ICBM, the three-stage Minuteman I, was a major Cold War benchmark, in part because it heralded the creation of an ICBM force that could be kept indefinitely on a high state of alert, and in part because it underlined a qualitative lead for the U.S. in strategic weaponry: The Soviets would not field their first solid-fueled ICBM, the SS-13 Savage, until 1969. It is worth noting that the Minuteman I and the Minuteman III, which replaced it beginning in 1970, had relatively modest throw weights; that is, the warhead, reentry vehicle, and guidance system were relatively light, positively diminutive when compared with their Soviet equivalents. The compensatory mechanism was the superior accuracy produced by miniaturized guidance systems.
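
  Specific impulse, measured in seconds, is by definition thrust divided by the weight of propellant consumed each second, so the penalty can be stated in a few lines of Python. The values below are representative round numbers for an early composite solid and for liquid oxygen and kerosene, not data for any particular engine.

# Specific impulse (seconds) is thrust (pounds-force) divided by propellant
# weight flow (pounds per second). The values below are representative round
# numbers, not data for any particular engine.
def thrust_lbf(isp_seconds, flow_lb_per_s):
    return isp_seconds * flow_lb_per_s

flow = 1_000  # pounds of propellant consumed per second
print(thrust_lbf(230, flow))  # early composite solid: about 230,000 lbf
print(thrust_lbf(300, flow))  # liquid oxygen and kerosene: about 300,000 lbf

  Burning propellant at the same rate, the solid delivers roughly a quarter less thrust; staging, by shedding dead weight early, claws much of that performance back.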

  Rapid advances in missile capabilities forced equally swift changes in basing modes and in the way crews lived and worked. The first ICBMs were erected on open pads, then filled with liquid oxygen and kerosene, a process that consumed hours and left the missile and crew horribly vulnerable to a counterstrike. Next came pits in which the missile could be stored horizontally below ground level, with underground fuel storage, launch control centers, and living facilities for the crew. Called coffin shelters, these provided limited protection against a preemptive strike; constructed in haste, they were only partially hardened (that is, made of steel-reinforced concrete thick enough to withstand the force of a nuclear blast). Next came underground reinforced concrete silos in which the missile could be stored and fueled vertically, then raised above ground for launch. The definitive basing mode was a hardened underground silo fitted with vents and cooling systems so that the missile could be launched from underground. The silos were sealed with massive blast doors mounted on tracks and opened and closed by electric motors. The underground launch complex, itself hardened, was typically reached by elevators and isolated from the surface and the silo by multiple blast doors. Equipped with electric generators, air filtering and conditioning systems, and living quarters, the underground complexes were home for missile crews during their periodic alert tours. In them, the crews faced a combination of claustrophobic confinement and boredom on the one hand, and the awful possibility of nuclear war on the other, something that can only be imagined by those who have not experienced it.

  So far, I have said little about accuracy, but it was a key element in the strategic equation. ICBM accuracy was commonly (and mistakenly) expressed as circular error probable—by definition the radius of the smallest circle that can be drawn around half the impact points of a series of firings at the same target. The other component of accuracy was bias, the distance of the center of the circle from the target. The mistake lay in the common assumption that the center of the circle defining CEP was, in fact, the target. From the beginning of the missile race, there was a direct relationship between perceived accuracy and warhead size: The larger the CEP, the larger the warhead. Also from the beginning, American ICBMs had significantly smaller CEPs than their Soviet counterparts. The Soviets compensated by installing larger warheads, and—frighteningly—the disparity in warhead size and yield increased as the Cold War dragged on.
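
  The distinction is easy to make concrete. The sketch below, using invented impact coordinates, computes CEP as the median radial miss distance about the mean point of impact and bias as the offset of that mean from the aim point: a pattern can be tight, and its CEP small, while every shot still lands far from the target.

import math
import statistics

# (x, y) miss distances in meters relative to the aim point at (0, 0);
# the coordinates are invented for illustration.
impacts = [(310, 120), (280, 95), (405, 160), (350, 60),
           (265, 140), (330, 180), (295, 75), (370, 130)]

# Mean point of impact: the center of the pattern, not necessarily the target.
mx = statistics.mean(x for x, _ in impacts)
my = statistics.mean(y for _, y in impacts)

# Bias: how far the center of the pattern sits from the aim point.
bias = math.hypot(mx, my)

# CEP: the radius about the mean point of impact enclosing half the shots,
# taken here as the median radial miss distance.
cep = statistics.median(math.hypot(x - mx, y - my) for x, y in impacts)

print(f"bias is about {bias:.0f} m, CEP about {cep:.0f} m")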

  Another pivotal factor was intelligence or, perhaps more to the point, the lack thereof, particularly early on. Both the Soviets and Americans focused on engineering development of the ICBM, but the interplay between missile range and accuracy, warhead size, and intelligence had an impact on strategic decisions that was at least as great as the outcome of the race to be first on the pad. These factors came together in crisis or near-crisis proportions on at least two occasions. The first was the Cuban Missile Crisis of 1962; the ensuing confrontation was rendered more dangerous by the characteristics of the missiles in question: R-12 IRBMs (NATO designation SS-4 Sandal), fueled by kerosene and red fuming nitric acid, with a range of only 1,200 miles, indifferent accuracy, and an extended and vulnerable launch sequence. Lacking the range and accuracy to pose a credible threat to the American ICBM or bomber force, the missiles made sense only as vehicles for a preemptive strike on American cities. Fortunately, heads cooler than Khrushchev's prevailed.

  The second came in the 1980s, most specifically in the war scare of 1983, as U.S. intelligence, tracking Soviet missile tests by electronic monitoring and satellite imagery, noted a steady shrinkage of CEPs from miles to hundreds of meters. Soviet launch trajectories, tracked by American radar and radio-monitored telemetry, became steadier, and the dispersion of impact points of dummy warheads in the Soviet Pacific test range, tracked through satellite photographs, became smaller. In American eyes, the stability of mutual nuclear deterrence at that point depended in large measure on the ability of the silo-based ICBM force to ride out, and therefore deter, a Soviet first strike. As CEPs shrank, the number of warheads needed to target a Minuteman or Titan silo with a high expectation of destroying it steadily diminished, placing a preemptive Soviet first strike within the realm of possibility, or so it seemed. The appearance of mobile, solid-fueled Soviet ICBMs, deployed operationally in 1985–87 but detected earlier, heightened concern, accelerated development of the MX—the last American ICBM—and led to serious discussion of elaborate and expensive basing modes. One option actually endorsed by the Carter administration involved a network of mobile launchers and multiple shelters covering most of Utah and Nevada. Another option seriously considered, at least by newspaper columnists, political analysts, and their sources in the Pentagon, was to concentrate the bulk of the silo-based ICBM force in a single, tightly packed missile field to exploit the fact that multiple nuclear warheads cannot be used simultaneously against targets close to one another, since the first warhead to detonate will trigger subcritical reactions in the others, causing them to fizzle. The doomsday finality of this scheme, termed “dense pack” with unintended appropriateness, speaks for itself and is all the more intimidating for having been based on a false premise: that CEP and accuracy were one and the same.

  But the impact of ICBM technology went far beyond dry strategic considerations to embrace the human dimension of war at its most elemental level. The interaction took place at the critical intersection between nuclear warhead and command and control. For the ICBM deterrent to be credible, the missiles had to be launched on extremely short notice. They also had to be launched by human decision, for no machine could be trusted to unleash Armageddon. But neither could a single human, and from that awareness arose an elaborate series of safeguards and procedures. In American practice, a missile crew could initiate the launch sequence only after receiving a coded launch order from the commander in chief or his designated representative. After the crew authenticated the message with headquarters and a computerized database, the code would be loaded into the launch console. Then, and only then, could the launch crew commander and his (or, later, her) deputy insert two launch keys into firing locks, physically separated so that a single individual could not turn both keys. Launch was accomplished by turning the two keys at the same time, typically within two seconds, a short enough interval so that one person could not reach the second key after turning the first, and long enough to accommodate normal reflexes. If any step in the procedure was missed or botched, the missile would not fire. Soviet practice was probably similar—we can still only speculate—with the second key belonging to a member of the KGB assigned to ensure political control. In American practice, and surely Soviet practice as well, crews were carefully screened for psychological stability and repeatedly drilled to ensure that they would execute a launch order if one ever came. Mercifully, none did.
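
  For readers who prefer the procedure in schematic form, the sketch below models only the final interlock described above: a launch signal results only if the authenticated code has been loaded and both keys are turned within two seconds of each other. The class, the code string, and the timings are invented for illustration and reflect nothing of the actual hardware or procedures.

# A purely illustrative model of the two-key interlock described in the text:
# a launch signal results only if a correctly loaded code is present and both
# keys are turned within two seconds of each other. All names and values are
# invented; nothing here reflects actual procedures or equipment.
KEY_TURN_WINDOW_S = 2.0

class LaunchConsole:
    def __init__(self, authenticated_code):
        self.authenticated_code = authenticated_code  # code validated against the launch order
        self.loaded_code = None

    def load_code(self, code):
        if code != self.authenticated_code:
            return False          # wrong code: the sequence halts here
        self.loaded_code = code
        return True

    def fire(self, key1_turned_at, key2_turned_at):
        """True only if the code is loaded and both keys turn nearly together."""
        if self.loaded_code is None:
            return False
        return abs(key1_turned_at - key2_turned_at) <= KEY_TURN_WINDOW_S

console = LaunchConsole("ALPHA-7")   # invented code string, for illustration only
console.load_code("ALPHA-7")
print(console.fire(0.0, 1.4))        # True: both keys within the two-second window
print(console.fire(0.0, 30.0))       # False: one person cannot reach both locks in time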

  A final point emphasizes the awesome threat posed by ICBMs, as well as their central place in the deterrent balance of terror at the strategic heart of the Cold War: Both the Soviet Union and the United States were so terrified of the threat of an ICBM attack and, paradoxically, so determined to preserve the stabilizing deterrent power of the threat that—at least so far as the public record shows—no missile was ever launched from an operational ICBM silo, nor was an ICBM ever fired with a live nuclear or thermonuclear warhead aboard.

  The War Scare of 1983

  JOHN PRADOS

  For four decades and more, we lived in perpetual fear of war. The nuclear doomsday clock appeared to be forever stuck at one minute to midnight, the witching hour for the world. Several times the long hand moved forward by several alarming seconds. To put the feeling another way, it was like walking around with collective aneurysms in our brains that could burst all at once and kill without warning. We might forget about the danger as we went about our daily routines, but we knew with certainty that it would never go away. The confrontation between East and West had taken on the aura of permanence.

  Those of us who lived through the Cold War—in this age of terrorism, it already feels like ancient history—can number the moments when the clock inched visibly ahead toward the sinister joining of its hands. Most (but not all) of those moments happened during the 1950s and the 1960s, the period of maximum peril. Did we come closest to nuclear catastrophe with the Cuban Missile Crisis of 1962—three or four seconds short of it, let us say? There were other dicey confrontations, too many of them and too close for comfort: the Berlin Blockade, when a new land war in Europe seemed imminent just when another had been concluded; the GDR's Soviet-approved erection of the Berlin Wall in 1961, when, for a few tense hours, American and Soviet tanks faced off muzzle-to-muzzle at Checkpoint Charlie; the Suez Crisis of 1956, when Nikita Khrushchev threatened to nuke Paris and London if the British and French troops occupying the Suez Canal didn't withdraw; the simultaneous Soviet invasion of Hungary, when Russian tanks rolled over the democracy of a week; the Soviet-backed Chinese intervention in Korea during the fall of 1950 that David Holloway in his preeminent study, Stalin and the Bomb, calls “the most dangerous point in postwar international relations.”

 
