
Mad Science: The Nuclear Power Experiment


by Joseph Mangano


  No Public Monitoring of Early Bomb Tests. The AEC did not operate a system to monitor fallout from bomb tests; not until 1957, after a dozen years of testing, did the US Public Health Service start one. This lack of information enabled the AEC to deflect any concern over fallout. Perhaps the most blatant incident took place in spring 1953 in southwestern Utah, where several thousand sheep were found dead by farmers soon after a large bomb test in nearby Nevada. Public Health Service measurements of high radiation levels in sheep thyroid glands, along with notations that many sheep had burns on their wool, were suppressed by the AEC in secret meetings, and irate farmers were told that the large number of deaths was due to malnutrition. The case was taken to court, where it continued into the 1980s.

  Silence and Denial Among Health Officials on Fallout. For a decade after the first atomic bombs, there was virtual silence from health officials, at both the local and national levels, on the potential health risks of fallout. Not until the mid-1950s, with the bomb test program gathering momentum and public fears mounting, did some officials publicly state their concerns. These initial statements made their way into the 1956 election campaign, where the concerns were cited by Presidential candidate Adlai Stevenson and others. But many scientists wouldn't budge from the party line that fallout was harmless. In 1963, during Senate testimony on the treaty to ban aboveground tests, Dr. Edward Teller declared that current levels of fallout represented no danger. Not until 1999 did US government officials acknowledge health risks from bomb tests.

  Marshall Islands Residents. The US tested 106 nuclear weapons above the ground or in the waters of the South Pacific, the majority of them in the Marshall Islands, a remote set of atolls far from any populated area. These tests included the H-bomb explosions, deemed too powerful for use in Nevada. The largest was Bravo on March 1, 1954, with a yield of 15,000,000 tons of TNT, or 1,000 Hiroshima bombs. US officials notified residents of the nearby island of Rongelap before the blast, but assured them that no evacuation was needed. Fallout from the blast, which looked like falling snow, covered the ground to a depth of one inch. Not until two days later, with a number of natives reporting various illnesses, was the island evacuated. These and other residents were eventually allowed to return to their homes even though much dangerous radiation remained. Not until decades later did the federal government set up a fund to compensate South Pacific victims suffering from health problems due to the atomic tests.

  Nuclear Weapons Workers. Denials of health risks for workers exposed to radiation on the job preceded the bomb-building program. The previous chapter discussed the case of the luminous watch dial painters who became ill after ingesting radium by licking the tips of their brushes, only to meet resistance and denial from the company. The workers at the various plants that contributed to atomic bomb production suffered the same fate, only at the hands of the federal government and its contractors. The case of the Hanford workers resulted in decades of contentious fighting between sick workers and independent researchers on one side, and government officials and their contractors on the other. The standard practice of government during the arms race was to assure workers that exposure levels were harmless, and to keep records of exposures and health status under lock and key. Only the end of the Cold War brought a belated admission by the Energy Department that harm had been caused and that sick or deceased workers were entitled to compensation.

  Atomic Soldiers. During the period of aboveground nuclear weapons tests, about 250,000 military personnel were stationed close to the blasts and exposed to fallout. In some cases, the soldiers were stationed in test areas conducting routine activities, such as maintaining security and measuring yields. Some were on ships in the Pacific that were subsequently laced with fallout, with no health precautions taken. In other instances, soldiers were called to Nevada from other posts, not told of the mission until they arrived, to simulate how much time and how many soldiers it would take to secure an area destroyed by a nuclear weapon. Soldiers were stationed just several thousand feet from the blasts, without any protection for their eyes and lungs, and ordered toward the epicenter just minutes after the explosion. Some tests produced unexpectedly high radiation yields in unanticipated directions, and planned maneuvers had to be scrapped while soldiers were hastily evacuated. Once again, veterans suffering from various radiation-related diseases were met with a consistent set of denials.

  Spying Programs. Spying operations, by both the US and the USSR, were a natural and necessary aspect of the arms race. In 1956, the American spy program began using U-2 planes capable of flying beyond Soviet radar surveillance. The program, kept secret from all but the CIA and the military, picked up considerable evidence confirming that the Soviets were far behind the Americans in nuclear capabilities. But political and military leaders persisted in spreading fears that Soviet stockpiles had reached or exceeded those of the Americans. Ultimately, the U-2 program proved dangerous. On May 1, 1960, the first U-2 flight directly across the Soviet Union either failed or was shot down. Pilot Francis Gary Powers was taken prisoner by the Soviets, the voluntary moratorium on all bomb tests observed by both sides since October 1958 was doomed to a quick end, and Cold War hostilities reignited. The Khrushchev regime told the world about the program, and Washington was forced to admit its existence.

  Backlash to Health Studies. In the midst of Cold War-era official denials that radiation exposure caused harm, some brave scientists defied the party line by presenting research that suggested a radiation–cancer link. One of these, the study of Hanford workers by Thomas Mancuso, was described in the previous chapter. Mancuso's colleague in the Hanford study, Alice Stewart, had entered this contentious area in 1956, when she found that pelvic X-rays given to pregnant women nearly doubled the risk that the fetus would die of cancer by age ten. Both Mancuso and Stewart became the objects of hostility from those in government and from leaders in the medical and physics fields. As Stewart put it: "Everyone in America who took our side in the years subsequent to the Mancuso incident lost their funding. They don't burn you at the stake anymore, but they do the equivalent, in terms of cutting you off from your means to work, your livelihood."

  Human Radiation Experiments. During the era of nuclear arms development, the AEC and the Defense Department funded a series of experiments in which thousands of Americans were administered radioactive substances without their consent. The purpose of these experiments, which took place at a number of esteemed medical centers, was purely to measure the uptake of radioactive substances in the body and the body's ability to function until its organs became damaged – topics of great interest to the architects of the bomb program. Subjects included babies, pregnant women, mentally disabled children, poor patients, prisoners, and the terminally ill. Perhaps the most disturbing example was the injection of radioactive plutonium-239, possibly the deadliest substance on earth, into terminally ill hospital patients. A 1993 series of articles in the Albuquerque Tribune exposed these sordid practices, which were conducted from the mid-1940s to the mid-1970s. Subsequently, President Bill Clinton authorized an Advisory Committee on Human Radiation Experiments to investigate and inform the public.

  The reasons for the culture of secrecy and deception that marked the emergence of the American nuclear weapons program have been debated extensively. A detailed discussion here would be repetitive. In summary, much of the debate has focused on a mixture of policy reasons and psychological factors, which of course are subject to interpretation. Some believe that government officials deliberately and needlessly brainwashed the American public to meet military objectives. Others state that secrecy and deception were necessary to avoid a nuclear war, or to avoid further world domination by the Soviet Union. Still others contend that any secrecy and deception were relatively benign and kept to a minimum, and that any misinformation from US leaders was presented in the name of maintaining world peace.

  Whatever the reasons for the clandestine atmosphere, it existed. Moreover, it directly shaped how leaders and the public addressed nuclear power. The ideals embodied in Eisenhower's 1953 "Atoms for Peace" speech were certainly noble. Nobody could dispute the dangers of nuclear weapons, or the potential for the atom to serve non-destructive purposes. But from the moment Eisenhower left the podium at the United Nations after his speech, secrecy and deception dominated the atomic discussion.

  By all means at its disposal, the government attempted to hide the weaknesses and problems posed by nuclear power. The 1954 Atomic Energy Act gave private companies special breaks that encouraged the development of nuclear power. The 1957 Price-Anderson Act was perhaps the biggest cover of all: it limited the liability of nuclear companies in the event of a catastrophic accident to just a small portion of the costs. This government cover for private utilities was not just a matter of a few original reactors quickly working out early kinks; today, the chance of a meltdown remains very much a reality, especially as reactors age and their parts corrode. The Price-Anderson Act is still in existence today, and must remain so as long as reactors operate, for no insurance company would ever write a policy for a utility without these limits in effect. Government and industry tried to downplay the Act, and virtually ignored the 1957 Brookhaven report that put the cost of a meltdown far beyond Price-Anderson's liability limits, but the Act has stood as a monument to denials of the dangers of atomic power.

  In addition to legislative actions, government officials boosted nuclear power with an enthusiastic program of public education. Discussions of the benefits of nuclear power were exaggerated. Other actions were more image than substance, such as Eisenhower flipping switches to mark the start of the first US nuclear power reactor. Government made its pitch to private energy businesses, and made it well, as proposals and orders for hundreds of reactors flowed in from the late 1950s to the early 1970s. (Of course, government financial incentives did not hurt either.) Virtually all scientists, even those who had opposed bomb testing, were convinced that the new reactors had great potential.

  Part of the early promotion of reactors was that they could be built quickly. Utility companies estimated that only several years would be required for construction, giving the impression that reactors were basic, uncomplicated machines. The reality was just the opposite: the actual time between the initial application to regulators and startup stretched into years, even decades. This underestimation of time also translated into much higher costs than initially promised. Politicians and the public took note, but perhaps the greatest disturbance was on Wall Street, which was loaning billions of dollars to develop reactors while utilities went ten or twenty years without taking in a single penny of revenue to repay the loans.

  Another early promise that failed was that reactors would immediately generate large amounts of power. The first thirteen US reactors that began operating from 1957 to 1968 were small units, averaging only 163 megawatts electrical (not until the early 1980s did large reactors averaging 1,000 megawatts begin coming on line). These early reactors accounted for only a very small percentage of US electricity consumption (about 2% by 1970). Each of the thirteen closed permanently after an average of only eighteen years in service, even though federal licenses allowed forty years of operation.

  The hype over nuclear reactors also suggested that they would be smooth-running operations. The reality was quite different, as many reactors were frequently closed due to mechanical problems. The "operating factor" (the percentage of time that reactors actually operated) was dismal for many years. Through the 1960s, 1970s, and into the late 1980s, the factor stood at just above 50%, a highly disappointing performance that was never addressed as the reactor program was promoted. Even though this percentage has since risen, reactors did not account for 20% of the nation's electrical production until 1992, a level at which that share has remained ever since.

  The issue of a meltdown was first swept under the rug through the Price-Anderson Act, which severely limited corporate liability and saddled taxpayers with the burden. But as reactors were planned, the delusion continued. AEC regulations required nuclear plant operators to develop an evacuation plan in case of a meltdown – a plan covering only the ten-mile radius around the reactor. The ten-mile figure was selected arbitrarily, and grossly underestimated the area a meltdown would affect. Airborne radioactive particles would enter the atmosphere, be propelled by prevailing winds, and be breathed by humans and animals. If the prevailing wind was ten miles an hour, as it often is in many areas, the poisonous radioactivity would travel those ten miles in roughly an hour. It would be impossible to evacuate all residents in such a small window of time. Many large cities today have operating reactors within fifty miles, so radioactivity released in a meltdown would enter a city's air within hours. And since radiation would travel even farther than ten miles, the evacuation requirements were even more meaningless.
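  The arithmetic behind this window is simple: a plume's travel time is distance divided by wind speed. Below is a minimal sketch of that calculation; the wind speeds and distances are illustrative assumptions, not figures drawn from AEC regulations.

```python
# Back-of-the-envelope plume arrival times: time = distance / wind speed.
# All wind speeds and distances below are illustrative assumptions,
# not figures taken from AEC regulations.

def arrival_time_hours(distance_miles: float, wind_mph: float) -> float:
    """Hours for an airborne plume to travel a given distance downwind."""
    return distance_miles / wind_mph

for distance in (10, 50):       # evacuation-zone edge; a nearby large city
    for wind in (5, 10, 20):    # plausible prevailing wind speeds, in mph
        hours = arrival_time_hours(distance, wind)
        print(f"{distance:>2} miles at {wind:>2} mph -> {hours:4.1f} hours")
```

  On these assumptions, a plume in a ten-mile-an-hour wind reaches the ten-mile boundary in about an hour, and a city fifty miles away in about five.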

  Perhaps most importantly, there was the issue of ensuring that reactors operated safely. Early practices governing the planning and monitoring of new reactors were shoddy, reminiscent of the experience with weapons plants and bomb tests. Before reactors were built, federally mandated environmental impact statements were prepared – all concluding that the reactors posed no risk, without citing any health studies (because none were being conducted). The AEC also required utilities to monitor reactor emissions and local levels of radioactivity, and to report this information publicly. Again, the conclusion of every monitoring program was "no risk" (again, conveniently, in the absence of health studies).

  But there were major problems with the monitoring programs, which would be exposed as many reactors were ordered and began producing power. First, the federally set limits on human exposure to reactor emissions were arbitrary. Officials erroneously jumped to the conclusion that these "permissible" limits were "safe" limits. They also failed to recognize that some humans are much more sensitive to a dose of radiation than others; fetuses, infants, and children, for example, suffer much more damage than adults due to their underdeveloped immune systems.

  The first challenge to "permissible" limits on exposure to reactor emissions came in 1969. Six years earlier, the AEC had set up a program to better understand radiation's effects on humans, plants, and animals, and had appointed Dr. John Gofman of the Lawrence Livermore Lab in California as its chair. Gofman was the co-discoverer of several new radioactive chemicals and had helped develop the bomb. This giant in the nuclear field stunned the AEC, and the entire scientific world, when, after carefully studying reactor operations and the effects of emissions, he declared that the government should reduce "permissible" limits by 90%, and that permissible emissions would result in as many as 32,000 cancer deaths among Americans each year. Suddenly, the notion that permissible equaled safe was challenged. The AEC was furious, and struck back at Gofman and his colleague Dr. Arthur Tamplin by cutting their budget and staff. Tamplin resigned from his AEC post in frustration in 1973, and Gofman did the same two years later.

  The next challenge to monitoring programs focused on how effectively they were being carried out. The first US nuclear power reactor was Shippingport, just outside Pittsburgh, which began operating in 1957. The experience with the small reactor encouraged the utility to build two new and much larger reactors on the site in the early 1970s. As the proposed new reactors were considered, Professor Ernest Sternglass of the University of Pittsburgh presented figures on emissions from Shippingport and what he contended were unusually high local infant death rates. A furor over the Sternglass paper erupted. Pennsylvania Governor Milton Shapp assembled a commission, which found that the monitoring system was inadequate and careless – a finding that also indicted the AEC regulators who were supposed to ensure an effective monitoring system. In the years that followed, government requirements for monitoring emissions and environmental radiation levels were reduced. Despite these hassles, the two new reactors, known as Beaver Valley 1 and 2, were built.

  The giddiness among nuclear power backers a half century ago reached epic proportions. There was no energy problem that nukes couldn't solve; they were safe and clean, and they would save people a lot of money. Plans and proposals flew off drawing boards, and into public statements, with little to no regard for any potential risk. These proposals were made across the nation, but one area that merits a detailed review is the New York City metropolitan area.

  New York had long been the most populous city in the US. By 1950, its population had reached nearly eight million, and suburban counties in New York, New Jersey, and Connecticut within thirty miles of Manhattan added over four million more. Its energy needs were large and getting larger. Like all large cities, it also faced an environmental problem: dirty forms of energy released large amounts of visible pollution from homes, cars, and industries. With a pro-nuclear federal government, the 1958 election of an equally gung-ho Governor Nelson Rockefeller, and the interests of the city's dominant utility (Consolidated Edison), the conditions were ripe for nukes to make their move in New York – even though the risks of a dangerous technology were higher there than anywhere else, due to the dense population.

 
