Annihilation from Within

by Fred Charles Iklé


  In Western democracies there would surely be urgent demands that the development of this system be controlled by the United Nations: to “make it legally binding” that it be used only for benign, peaceful purposes, “for the benefit of mankind.” Anyone predisposed to think this might be a feasible policy—a policy that could truly be implemented and enforced—would do well to review the history of biological weapons or to recall the lessons of the nuclear age.

  Bio-weapons and Politically Correct Illusions

  The most frequently mentioned dark side of the life sciences is their misuse to enhance the lethality of biological weapons. A rich literature is now available on this threat, so I need not dwell on it at great length. The biological weapons that have been used sporadically in the past employed naturally occurring pathogens or toxins—bubonic plague, anthrax, botulism, and many others. Sometimes the effectiveness of these natural agents has been increased—for instance, by converting anthrax spores into an aerosol form. But in the future, a nation or a terrorist organization could employ genetically engineered agents that are far more lethal than natural ones, or that have been made resistant to all currently available vaccines and remedial medications. To unleash an epidemic, a small amount of such an agent would suffice and could be delivered clandestinely—hidden in a bottle, a fountain pen, or a pillbox. Add to this threat the possibility that the terrorist might “reload” his means of attack. As former Secretary of the Navy Richard Danzig correctly points out: “Biological terrorism affords the possibility of repeated attack, undermining confidence and forcing ever-escalating investments of resources to achieve a modicum of defense.”18 These frightening prospects prompt people to call for international agreements to ban such weapons.

  Unfortunately, the history of arms control is disfigured by an ever-expanding list of broken treaties. So we must expect that international agreements will at best provide only partial protection. On a few occasions, to be sure, a treaty might have kept a vicious dictator from using prohibited bio-weapons; but only because he feared retaliation, not because he wanted to be law-abiding. The treaty, in essence, drew a red line that the dictator hesitated to cross. Thus, the Geneva Protocol of 1925 (a one-page treaty banning the first use of poison gas and biological weapons) might have reinforced Adolf Hitler’s hesitation to risk the kind of poison gas warfare he himself had witnessed in World War I.

  In 1969 President Nixon ordered the destruction of U.S. biological weapons, and two years later he submitted to the Senate the new convention prohibiting the production and stockpiling of bio-weapons. Peripheral issues delayed ratification until 1974, when it became my job as Director of the Arms Control and Disarmament Agency to urge the Senate Foreign Relations Committee to consent to this BW Convention. I warned the senators up-front that “verification of compliance with this convention in countries with relatively closed societies is difficult.” After a couple of minutes’ discussion, the senators nonetheless agreed the Convention should be ratified. Yet later on, the Convention’s unverifiability became a nagging issue. In the 1990s it was discovered that the Soviet Union had massively violated the Convention from the first day it signed it. And after Saddam Hussein had lost the Gulf War in 1991, he was also forced to admit to massive violations.

  To assess what arms agreements can do to prevent biowarfare, we need to keep in mind the “dual use” problem. It makes detection well-nigh impossible in authoritarian nations and dictatorships—precisely the countries where violations are most likely to take place. Advances in the life sciences spread throughout the world because, almost without exception, they are intended for peaceful uses. But the boundaries between destructive and beneficial purposes are easily blurred. For instance, a new pharmaceutical vector that helps to transmit a medication to the diseased tissue might be indistinguishable, for practical purposes, from vectors that can be used to magnify the lethality of a biological weapon. The very fact that one of the two uses is beneficial, and hence considered humanitarian, would make it politically difficult to impose stringent controls on the worldwide transfer of such pharmaceutical vectors.

  Unfortunately, the BW Convention offers little protection because biological weapons can be developed under the guise of peaceful use and are easy to deliver clandestinely. At least two signatories of the Convention—the Soviet Union and Iraq—admitted that they violated it. In 1992, Russia’s President Boris Yeltsin revealed that the Soviet Union had been developing biological weapons, an illicit program that apparently started right after Moscow had signed the Convention. In 1995, when the head of Iraq’s military industries defected, Saddam Hussein was forced to admit his massive violations of the Convention. Oblivious to these stubborn facts, the UN arms control conference in Geneva was tasked to write a new treaty, a “Protocol” to the BW Convention that would deter such violations.

  By 2001, when this Protocol had grown 200 pages long, the Bush administration called a halt to the negotiations. The diplomats who had enjoyed their many pleasant sojourns in Geneva understandably reacted with outrage and insisted the negotiations had to be resumed. Less understandable was their rationale for negotiating this Protocol. It would be “legally binding,” they explained, and therefore effective. But if a dictator is willing to violate the BW Convention—presumably also a legally binding treaty—why on earth would he suddenly feel “legally bound” not to violate this Protocol as well? Evidently, as long as an illusion is politically correct it remains impervious to logic and evidence.

  3

  FIVE LESSONS OF THE NUCLEAR AGE

  Those who’ve governed America throughout the nuclear age and we who govern it today have had to recognize that a nuclear war cannot be won and must never be fought.

  —RONALD REAGAN (1982)

  THE DRAMA OF THE NUCLEAR AGE teaches painful lessons. The continuing spread of nuclear technology is turning into a disaster of unimaginable proportions. It is moving beyond the control of any national policy or international agreements. It is the quintessential expression of mankind’s cultural split—the inability of institutions to rein in runaway science. How did we get pulled into this awful maelstrom? Specifically, how has the United States, originally the possessor of a nuclear monopoly, ended up facing a crisis of extreme vulnerability, a world where ruthless dictators, terrorist organizations, even doomsday cults and anarchists can some day possess a few nuclear bombs?

  Eleven American presidents—from Harry S. Truman to George W. Bush—tried to prevent this from happening. At the beginning, the United States assumed the principal responsibility for the nuclear question, appropriately so since it emerged from the Second World War as the strongest power and the only nation that had built and used atomic bombs. Since then, Americans have devoted an immense effort to the nuclear problem—an intellectual, political, and military endeavor that has no parallel in all of military history. As a longtime participant in this effort, both inside and outside the Pentagon, I feel free to state that much of it took the form of abstract and cold-blooded theorizing of an eerily academic nature. Nonetheless, when all is said and done, a stellar accomplishment spans the entire period from 1945 to date. Nuclear war, and indeed any destructive use of nuclear bombs, has been averted.

  Lesson One: Benevolence Is Not Enough

  Drawing the most useful lessons from the nuclear age will require immersion in the rich and complex history of the last sixty years. I shall select only the most instructive episodes, but to convey the essence I need to start at the beginning.

  During the Second World War, public opinion had become inured to devastating bombing attacks on cities—until the nuclear destruction of Hiroshima and Nagasaki. That event thrust a new emotive impulse upon strategic thinkers everywhere. Just one single bomb, oh Lord, could now destroy a major city! The wrenching revelation that one of nature’s most powerful forces had been unlocked slashed like a flaming sword into people’s consciousness, prompting statesmen and military leaders to search out a new approach to war and peace. For months to come, a flow of information deepened the emotive impact of the atomic bomb: first the gruesome photographs, then harrowing tales from survivors, later a series of studies by scientists working for U.S. and Japanese authorities. What endowed these clinical reports with political salience were the tales of human victims—of instantly incinerated neighborhoods, of skin burned off the living flesh, of strange and fatal illnesses. The enormity of this unfolding story gripped the moral imagination of people throughout the world.

  I want to stress here the link between witnessing the human impact of the atomic bomb and the will to act boldly in forestalling nuclear warfare. The emotional experience of a dramatic, real-life event is a far more potent motivator for choosing an audacious policy, or a benevolent policy, than are theoretical forecasts. During the first year after the A-bomb attacks on Japan, such emotional hindsight emboldened leading statesmen, hard-nosed politicians, and military strategists to seek an unprecedented transformation of the international order. It was as if the sudden emotional comprehension had inspired them to seek salvation through a generous offer for total reform.

  Consider Dean Acheson, whose views were expressed in a memorandum for President Truman six weeks after Hiroshima. Acheson, bear in mind, was no woolly-eyed disarmer; a couple of years later he was to lead the effort to create the Atlantic Alliance as a bulwark against Soviet expansion. Yet in September 1945, this tough-minded, illusionless policymaker wrote for his equally tough-minded political master that nuclear weaponry was “a discovery more revolutionary than the invention of the wheel,” and that “if the invention is developed and used destructively there will be no victor and there may be no civilization remaining.” He recommended approaching the Soviet Union to explore international controls for a global ban on nuclear weapons. British Prime Minister Clement Attlee was even more anxious to stop further development of atomic bombs. In a handwritten memorandum in the fall of 1945, he noted that “the only hope for the world is that we should … strive without reservation to bring about an international relationship in which war is entirely ruled out.”1 He was not alone in that belief.

  By November, U.S.-British discussions had led to a remarkable decision: International controls of nuclear weapons must be a responsibility of the United Nations—a still untested organization. To develop a U.S. position, Truman established a committee chaired by Dean Acheson, and its conclusions (which became known as the Acheson-Lilienthal report) called for an international authority that would confine the use of atomic energy entirely to peaceful purposes. This benevolent idea gained the support of leading nuclear scientists, including (it is worth noting) Edward Teller, the famed physicist who became one of the principal proponents and a key inventor of thermonuclear weapons. Teller called the report “the first ray of hope that the problem of international control can, actually, be solved.”2

  This was truly a wondrous episode in the history of nations. At a time when only the world’s most powerful nation could have produced these weapons, it sought instead a radical, yet generous solution—to prevent all countries, itself included, from working on nuclear weaponry. To monitor this universal self-denial, the international authority advocated by Acheson-Lilienthal would have been given “exclusive jurisdiction to conduct all intrinsically dangerous operations [regarding nuclear materials].”3 How could this amazing episode have taken place? A major reason was the advice given by the atom bomb’s creators. America’s nuclear scientists justifiably enjoyed enormous prestige after their extraordinary accomplishment became known in August 1945. Although they had their differences, the scientists agreed on three forecasts, all absolutely essential for the President of the United States to bear in mind: first, that the information necessary to design nuclear weapons would not long remain an American secret; second, that the Soviet Union and several other countries would build their own nuclear bombs in the not-too-distant future unless constrained by a new international regime; and third, that it would become possible for advanced industrial nations to build nuclear weapons vastly more destructive than the first atomic bombs. Within a decade, all three of these forecasts had been proven correct.

  Scientific forecasts are rarely sufficient to bring about a fundamental innovation in the political sphere, no matter how clamant the predicted problem. It was clearly the searing experience—history’s first destruction of a city by a single bomb—that made possible America’s gamble on nuclear policy in the autumn of 1945. Without the emotionally reinforced forecast, Washington would almost certainly have regarded its monopoly on nuclear weapons as an asset to be tucked away.

  Within six months after Hiroshima, the well-intentioned project to restrict nuclear technology to peaceful uses had reached a dead end. As policymakers in the West had feared, Soviet opposition blocked all progress.4 The Soviet Union conducted its first test of an atomic bomb in 1949, and in 1950 North Korea’s attack on South Korea led to a huge expansion of the U.S. and the Soviet nuclear arsenals. The fading of “emotional hindsight” during the second half of the twentieth century goes a long way to explain the horrendous accumulation of nuclear weapons and the perversities of nuclear strategy.

  Lesson Two: “Deterrence” Was Oversold

  Mercifully, the ever more menacing volcano remained dormant. After August 1945, the only nuclear weapons detonated were for testing, and after 1962 even tests became rather furtive, most of them hidden deep underground. Without new pictures of the mushroom cloud, the world grew accustomed to nuclear nonuse. Many strategic thinkers attribute the nonuse during the Cold War to mutual deterrence between East and West. But if one reexamines the evolution of nuclear doctrine and deployments, as well as the growing size of the nuclear arsenals and the responses to major crises, it becomes clear that the explanation is more complicated.

  An enormous literature on deterrence has been written—top-secret government reports, public testimony for Congress, thousands of books and articles. As noted by Robert Jervis (himself a creative contributor to this field): “many of the best ideas [on nuclear strategy] are old and … not all of the new ideas are good.” That many of the best ideas are old is confirmed by an article written four months after Hiroshima. At that early date, Jacob Viner explained essentially all the benefits and problems of deterrence, and anticipated with astonishing foresight and beautiful clarity the transformation of international affairs that nuclear proliferation would cause. Since then, nuclear strategists have become more and more fixated on “deterrence” as if it were a concrete, empirically observable entity. The philosopher Alfred North Whitehead called such an excessive reliance on an abstraction “the fallacy of misplaced concreteness.”5

  “Deterrence” theorizing postulates how a “potential” aggressor calculates whether to attack—a hypothesis hard to test empirically. By contrast, the nonuse of nuclear weapons is not a hypothesis; it has been evident since 1945. Indeed, it has prevailed even in wars that nuclear-armed powers have lost against non-nuclear nations. By the end of the Cold War, many strategists came to see nonuse and deterrence as two sides of the same coin: nonuse became the proof of successful “deterrence” and “deterrence” the strategy guaranteeing nonuse. Alas, 9/11 casts anguishing doubts on this proposition.

  Since war never broke out in Central Europe during the forty years when the largest military confrontation in history divided the continent, it is tempting to assume nuclear deterrence preserved the peace. This view remains unproven; Soviet archival documents released so far do not provide an answer. It is, however, unquestionable that America’s nuclear superiority did not keep the peace on the Korean peninsula. Released Soviet documents confirm that Stalin authorized Kim Il Sung (the North Korean dictator) to attack South Korea in June 1950, although the United States had some 350 atomic bombs at that time and the Soviet Union only about five. Stalin even remained undeterred after the United States had come to the defense of South Korea: he supported North Korea with Soviet fighter pilots. Had nuclear superiority really been effective in deterring aggression, the United States would not have suffered 33,000 fatalities and 92,000 wounded in defending South Korea against North Korea and China, neither of which had nuclear weapons at the time. Nor was this the only occasion on which nuclear-armed nations have accepted stalemate or defeat in wars with enemies that did not have a single nuclear bomb. The United States did so in Vietnam, the Soviet Union in Afghanistan, and in 1979 a nuclear-armed China withdrew from its cross-border aggression into Vietnam. Then there is the case of the 1962 Cuban missile crisis. It would be a fallacy of misplaced concreteness to believe that nuclear deterrence safely prevented war on that occasion. That crisis was caused by the Soviet Union’s provocative deployment of nuclear missiles in Cuba and nearly triggered a full-scale, yet unintended nuclear war.6 Let us note, Cuba 1962 was not a crisis that could have been prevented by the “deterrent” effect of nuclear weapons—it was caused by nuclear weapons.

  Lesson Three: We Were Lucky—So Far

  Deterrence can ward off only deliberate attacks; it cannot prevent an accident from happening or dissuade a madman from detonating a nuclear bomb. The world seems to have been close to an accidental nuclear war during the Cuban missile crisis, as well as on several less visible occasions. What averted the accidental Armageddon? Was it the prudence of all those managing the nuclear arsenals, was it the intervention of extraordinary good luck, or—seen in a transcendental way—the intervention of Providence?

  I shall open the problem of accidental nuclear war by describing my own professional encounters with it. During the 1950s, I worked at the RAND Corporation, a think tank in Santa Monica, California. That decade was RAND’s most creative period—a time when its scientists, sustained by generous contracts with the Air Force, could pursue a range of innovative ideas. Many of us at RAND worked on nuclear strategy, elucidating the need for survivable retaliatory forces, credible response options, and other critical issues. Much of our thinking was recorded only in top-secret documents. Then, one morning in March 1955, the newspapers brought us Winston Churchill’s grand speech on deterrence (his last major speech in the House of Commons). I recall vividly our astonishment at RAND that day. Here the British Prime Minister had expressed all the key ideas of our secret work—and with such eloquence! Who had authorized him to go public?

 
