by Peter Byrne
The other side of Looking Glass16
In his Memoirs (1990), the Soviet physicist and political dissident Andrei Sakharov reveals how nuclear scientists lived the Cold War in the bowels of the Soviet military-industrial complex. Sakharov was a primary designer of the U.S.S.R.’s first hydrogen bomb, which was exploded in a test on August 12, 1953. He was also awarded the Nobel Peace Prize in 1975 for his work on human rights. In his later years, he often spoke about the “appalling waste of the arms race.”
But after the Second World War had devastated Russia (with its 27 million dead), Sakharov undertook to work on The Bomb for several reasons. Deliberately echoing Enrico Fermi’s rationale for enlisting in the Manhattan Project, Sakharov said he welcomed the chance to do “superb physics.” But “what was most important for me at the time, and also [for] the other members of our group, was the conviction that our work was essential.”17
Later in life, Sakharov came to admire Oppenheimer for arguing that if the U.S. did not make the H-bomb, the U.S.S.R. would not.
If the Americans had not initiated the whole chain of events, the U.S.S.R. would have pursued the development of a thermonuclear bomb only at a much later date, if at all. A similar scenario has been repeated with other weapons systems, including nuclear-powered submarines and MIRVs.
But in the 1940s and 1950s my position was much closer to Teller’s,18 practically a mirror image (one had only to substitute ‘U.S.S.R.’ for ‘U.S.A.,’ ‘peace and national security’ for ‘defense against the communist menace,’ etc.)—so that in defending his actions, I am also defending what I and my colleagues did at the time.19
Sakharov’s attitude could have been expressed by any number of American scientists working in the business of thermonuclear warfare. He said,
We saw ourselves at the center of a great enterprise on which colossal resources were being expended. We shared a general determination that the sacrifices made by our country and people should not be in vain; I certainly felt that way myself. We never questioned the vital importance of our work. And there were no distractions: the rest of the world was far, far away, somewhere beyond the … barbed wire fences. High salaries, government awards, and other privileges and marks of distinction contributed to the psychological atmosphere in which we lived.20
Sakharov and Everett mirrored each other in their appreciation of “cybernetics” and the work of Norbert Wiener, Claude Shannon, and John von Neumann.21
One day, Sakharov asked the much-feared Politburo member, Lavrenti Beria, “Why are our new projects moving so slowly? Why do we always lag behind the U.S.A. and other countries, why are we losing the technology race?” This question was asked of Beria at the same time that American experts were baselessly touting Soviet superiority in nuclear technology. “Beria gave a pragmatic answer: ‘Because we lack R and D and a manufacturing base. Everything relies on a single supplier, Elektrosyla. The Americans have hundreds of companies with large manufacturing facilities.’”22
After the initial Soviet hydrogen bomb test, Sakharov told his celebrating colleagues that he hoped such a weapon would never be exploded over a city. His comment was not well received. A general “squelched” Sakharov’s naïve “pacifist sentiment.” It was then that Sakharov realized the insidious danger of the “idea of mutual deterrence based in military parity which is only one step away from preventive war.”23
The halting problem
Everett and his colleagues were well-paid to embrace the claim that nuclear war can be prevented by preparing to wage it. But Pentagon computers were incapable of reducing the complex world in which the cold warriors meddled to a manageable model of reality—and the claim remained unprovable, yet it remained the motive.
In his cyber-history of the Cold War, The Closed World, Paul Edwards notes that “the historical trajectory of computer development cannot be separated from the elaboration of American grand strategy during the Cold War.”24 Edwards emphasizes the impossibility of fully modeling a system from within that system. In computerese, this recalls the halting problem: no program can decide, for every possible program and input, whether that program will eventually halt; in general, the only way to find out is to run it, possibly forever. In other words, no computable system can contain a complete model of itself that is separable from itself.
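To see why such self-modeling fails, consider a minimal sketch of Turing’s diagonal argument, rendered here in Python. The function names (halts, paradox) are illustrative inventions, not anything from Edwards or from WSEG’s archives:

```python
# Suppose a halting oracle existed. The diagonal program below defeats it.

def halts(program, data) -> bool:
    """Hypothetically returns True exactly when program(data) eventually
    halts. The argument below shows no such function can exist."""
    raise NotImplementedError

def paradox(program):
    """Does the opposite of whatever `halts` predicts about running
    `program` on its own source."""
    if halts(program, program):
        while True:      # predicted to halt, so loop forever
            pass
    # predicted to loop forever, so halt immediately

# If `halts` existed, paradox(paradox) would halt if and only if it does
# not halt -- a contradiction. No program can fully decide the behavior
# of programs, itself included, from within.
```

The same self-reference is what Edwards sees in the closed world of nuclear simulation: the model is part of the very system it is trying to predict.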
Edwards depicts the melding of mind and machine inside the closed, paranoid, self-mirroring realm of operations research, Everett’s world:
A ‘closed world’ is a radically bounded scene of conflict, an inescapably self-referential space where every thought, word, and action is ultimately directed back toward a central struggle…. Inside the closed horizon of nuclear politics, simulations became more real than the reality itself, as the nuclear standoff evolved into an entirely abstract war of position…. Simulations—computer models, war games, statistical analyses, discourses of nuclear strategy—had, in an important sense, more political significance and more cultural impact than the weapons that could not be used….
The object for each nuclear power was a winning scenario—a theatrical or simulated win, a psychological and political effect—rather than to actually fight such a war. Actual outcomes no longer mattered, since the consequences had become too enormous to be comprehended and too dangerous to be tested.
The world of nuclear arms became by its very grossness and scale a closed world, a lens through which every other political struggle must be seen. For those who contemplated its strategy, nuclear war could only be understood as a many-leveled game.25
And, it must be said, as long as it remained a game, all was not lost. The problem with games, however, is that they are part of the real world, and that world is not controllable. No model can tame the chaos of the real world, nor predict the fate of what it models inside a closed system. But, depending on the normative and statistical assumptions built into it, a model can point to possible futures, even desirable futures.
But inside the closed world of Everett’s WSEG, the possible futures were explored by inputting data about particular weapons systems into programs set up to predict “pay-offs” in terms of numbers of people killed and cities buried in limited or general nuclear “exchanges.” Disarmament was not an option—neither input nor pay-off. And the language of operations research was loaded with dualism. “Defensive” systems, such as anti-ballistic missile batteries, were really offensive systems, as researchers were well aware that strengthening defense caused the other side to strengthen offense. Deterrence was predicated upon maintaining hair-trigger offensive capabilities, and, when tested in war simulations, its halting point was incomputable.
22 Fallout
Theoretical physics forces atomic energy on us; the successful production of the fission bomb forces upon us the manufacture of the hydrogen bomb. We do not choose our problems, we do not choose our products; we are pushed, we are forced—by what? By a system which has no purpose and goal transcending it, and which makes man its appendix.
Erich Fromm, 19551
Making a mark
After earning his PhD in nuclear physics at MIT in the mid-1950s, George Edgin Pugh was lured to WSEG by the high salary and exciting work. On his first day, he was assigned to work with Everett. The two physicists hit it off immediately over lunch in an executive dining room at the Pentagon. They remained friends and colleagues for the next 15 years.2
The newcomers were assigned to an ongoing WSEG project analyzing the effects of radioactive fallout from a U.S. attack on the Soviet Union. The project employed two dozen specialists in nuclear physics, physical chemistry, biochemistry, and meteorology. Using mechanical calculators, the scientists laboriously plotted the lethal consequences of fallout patterns based on prevailing weather and total megatonnage unleashed over “Oblast” areas. Kill ratios were functions of the probability that bombers would reach their targets within a certain margin of error. “Overkill” was built into the attack plans, since a percentage of the American bombers would inevitably be disabled by bad weather or electronic counter-measures, or shot down before reaching ground zero.
Pugh recalls that the Air Force had originally estimated that the most likely scenario for a nuclear attack by the United States on the Soviet Union would result in a few hundred thousand fatalities from blast effects. The WSEG fallout team determined that the initial attack would kill at least 4 million people. But when the effect of radioactive fallout was included, Soviet fatalities shot upwards of 100 million!
Everett and Pugh designed a study of fallout kill ratios applicable to any large nuclear campaign, including one against the United States. Everett came up with the notion of maximizing radiation fatalities as a function of the total megatonnage unleashed in the overall attack, but subject to multiple constraints. WSEG kept population data on 60,000 target locations. So, optimizing the allocation of airplanes and atom bombs to different kinds of targets for maximum lethality was an extraordinarily hard calculation.
In those days, large number-crunching OR projects were usually performed by hand, on mechanical calculators by mathematically skilled women, known as “computers.” Everett, who headed WSEG’s mathematics division, was lobbying his bosses to replace the women with high-speed digital machines. He started thinking about how to construct an algorithm capable of optimizing solutions for the large sets of constraints that he was dealing with in the fallout project, but, in the meantime, the women continued to sweat over their calculating machines.
Questions addressed by the fallout study were formulated in chilling operational terms. “For a given population, distributed geographically in some known manner, how should one distribute a fixed number of weapons in order to maximize the expected casualties.”3 Not too surprisingly, the mathematicians determined that casualties from fallout rose in tandem with the energy of the nuclear blast at ground level—sucking radioactive dirt into the mushroom cloud for dispersal by winds in the upper atmosphere is the most effective way to broadly distribute fallout.
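To make the operational question concrete, here is a toy sketch of such an allocation calculation in Python. The diminishing-returns payoff model, the kill probability, and the numbers are all invented for illustration; they are not WSEG’s classified formulas, and the simple greedy method stands in for the far harder constrained optimization Everett was contemplating:

```python
# Toy allocation: spread a fixed number of weapons over targets to
# maximize total expected casualties. All parameters are illustrative.

def expected_casualties(population: float, weapons: int, p: float = 0.5) -> float:
    """Expected fatalities at one target if each weapon independently
    kills its population with probability p (diminishing returns)."""
    return population * (1.0 - (1.0 - p) ** weapons)

def greedy_allocate(populations: list[float], budget: int) -> list[int]:
    """Assign weapons one at a time to the target with the largest
    marginal gain; optimal here because each payoff curve is concave."""
    alloc = [0] * len(populations)
    for _ in range(budget):
        gains = [
            expected_casualties(pop, n + 1) - expected_casualties(pop, n)
            for pop, n in zip(populations, alloc)
        ]
        alloc[gains.index(max(gains))] += 1
    return alloc

# Three notional targets (populations in thousands) and six weapons:
print(greedy_allocate([900.0, 300.0, 100.0], budget=6))  # -> [4, 2, 0]
```

With 60,000 target locations and constraints on aircraft, yields, and weather, the real problem dwarfed this sketch—which is why Everett wanted digital machines and a general optimization algorithm.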
After determining an ideal curve for maximizing fallout fatalities, Everett and Pugh addressed the problem of optimizing the distribution (and cost) of a fixed number of weapon systems to achieve the maximum death curve. In the final section of their report, they showed how fallout fatalities would fluctuate under a variety of targeting doctrines, ranging from the random distribution of bombs inside large, circular areas of countryside to deliberately hitting the most densely populated areas to targeting only military airbases. Lastly, they showed how fatalities from radiation could be limited by civil defense preparedness, primarily a national network of fallout shelters.
Central to the calculation of lethality was setting the fatal dose of gamma rays per person. Not surprisingly, targeting military and industrial hubs in or near densely populated cities was the most cost-effective way to deliver the bad medicine—prevailing winds would waft rural people their share, so it was a waste of “fission yield” to bomb farmers. But for a more expensive, “extremely large-yield” campaign, the kill ratio per weapon was maximized by uniformly dropping bombs over an entire country, thereby avoiding “excessive overkilling” in any one spot.4
The authors noted that their formulas only calculated the death rate for 60 days out from a nuclear holocaust and
may not be indicative of the ultimate casualties. The delayed effects such as the disorganization of society, disruption of communications, extinction of livestock, genetic damage, and the slow development of radiation poisoning from the ingestion of radioactive materials may significantly increase the ultimate toll.5
The sanitized version of “Simple Formulas for Calculating the Distribution and Effects of Fallout in Large Nuclear Weapon Campaigns (With Applications)” concluded that any large-scale use of nuclear weapons would result in a huge proportion of the population being disabled or killed by fallout. It effectively discredited the prevailing assumption among operations researchers that casualties would be “manageable” in a nuclear exchange. So shocking was this finding to the military high command that Pugh was detailed to brief Eisenhower at the White House in July 1957:
After I had finished, Sherman Adams [Eisenhower’s senior advisor] asked the president if he thought he had understood the presentation. The president responded, ‘Yes, it seemed quite clear. In some ways, the effects of radioactivity are like an artillery bombardment. It doesn’t matter much where you aim, the important thing is the total fire power that’s delivered.’ I left with the feeling that we had successfully delivered our message to the president and his staff.6
Despite the glory of briefing the president, Pugh was peeved at Everett. Pugh believed that he had done the most work on the fallout project and deserved top billing as co-author. But Everett insisted that their names be listed in alphabetical order: “It was the first of many events in which Everett’s brazen grab for recognition and power succeeded in distorting subsequent outcomes,” Pugh complained.7
Nobel Peace Prize!
So valuable was their fallout report that Everett and Pugh were authorized to sanitize it, stripping out references to specific targets and other top secrets. It was made public during hearings before the Special Sub-Committee on Radiation of the Joint Congressional Committee on Atomic Energy. And the March 1959 issue of Operations Research published the study with the authors’ optimistic foreword:
It is the hope of the authors … that the results here indicated will illustrate the catastrophic effects of a large nuclear campaign, regardless of specific targeting doctrine. Perhaps the public release of this information will serve to reduce the probability that such conflicts will ever occur.8
Linus Pauling credited Everett and Pugh by name in his Nobel Lecture upon receiving the 1962 Nobel Peace Prize for his work on nuclear disarmament (in 1954, he had been awarded the Nobel Prize in Chemistry). Referring to their study, Pauling estimated,
that 60 days after the day on which the war [between the U.S. and the Soviet-Chinese blocs] was waged, 720 million of the 800 million people in these countries would be dead, 60 million would be alive but severely injured.
Pauling pointed out,
There is no defense against nuclear weapons that could not be overcome by increasing the scale of the attack…. The only sane policy for the world is that of abolishing war.9
The fallout study was one of the first in a growing body of research showing that even a small nuclear war would be lethal beyond all imagining. In 1983, a distinguished panel of scientists determined that the smoke and fires from burning cities caused by exploding the 1,000 bombs SAC planned to drop on the Soviet Union as early as 1953 would have triggered a “nuclear winter,” enshrouding the earth in darkness and eventually extinguishing all life.10
Different worlds
While working with Pugh on the fallout project, Everett explained the premises of his as-yet-unedited thesis. Pugh was impressed with its brilliance, and asked Everett if he believed in the reality of the branching universes. Everett replied,
It is really hard to say what I really believe. In reality, all that we can ever know about any theory is the extent to which it seems to correspond to the real world observations we can make, and to the experiments we can do. Beyond that we never know the extent to which any of our theories capture the real reality of the universe, or the actual content of what really is out there. We hardly have a clue about what may really be out there. So we have no way of guessing how close any of our theories are to what may really be out there. All we can do is postulate our theoretical ideas and then ask how well they correspond to experiments.11
Pugh accused him of dodging the question.
Everett replied that he would give it a “70 percent probability” that the multiple universes are physically real.12 He said that his concept might be ahead of its time. For his part, Pugh thought the theory made sense, but that it was irrelevant because it did not make any real difference to decision making. He suggested that Everett pursue the idea of using his universal wave function to reconcile quantum mechanics and general relativity. Everett agreed that eliminating the idea of quantum jumps (which his theory did) would make that reconciliation easier. But he said that he currently had his hands full with his interpretation of quantum mechanics, and was not anxious to undertake another “theoretical monster.” He was more interested in commercializing his mathematical concepts than tackling another fundamental problem in physics.13
In fact, Everett and Pugh made a written agreement in November 1956 that,
the Patent and copyright agreements with the Institute for Defense Analyses, signed by us this day, pertains only to inventions which are both conceived during our tenure of employment, and connected with the work on which we are employed.14