Fukushima: The Story of a Nuclear Disaster


by David Lochbaum


  In the years following Three Mile Island, the Japanese closely studied the NRC’s regulatory reforms, and in many cases emulated them. Japan’s Nuclear Safety Commission identified fifty-two lessons learned from Three Mile Island that it recommended for adoption in Japan’s own safety regulations. Japan also began to develop severe accident countermeasures after Chernobyl. Among those that TEPCO incorporated at Fukushima were hardened vents, modifications to allow use of fire-protection pumps to cool the core if needed, and measures for coping with station blackouts of modest length, including loss of DC power.

  The Japanese also developed severe accident guidelines, referred to as accident management (AM) measures, using the results of probabilistic risk assessments conducted by research organizations. In short, there were many similarities between actions taken in the United States and those in Japan.

  Japan’s severe accident management measures also shared many of the defects of the U.S. approach. All of the AM measures were rooted in the belief that the possibility of severe accidents was so low as not to be “realistic from an engineering viewpoint”; hence these steps were not considered essential. Consequently, the NSC concluded that “effective accident management should be developed by licensees on a voluntary basis,” and the utilities accordingly developed AM measures on their own.

  As a result, no regulator assessed whether the plant owners’ assumptions were realistic regarding the ability of workers to carry out AM measures like hardened vent operation and alternate water injection. In particular, no one asked TEPCO why its AM procedures were designed to cope with a station blackout that would last only thirty minutes and affect only one reactor at a site. If someone had, perhaps TEPCO would not have had to concede after Fukushima that the tsunami and flood resulted in “a situation that was outside of the assumptions that were made to plan accident response.”

  Suppose that decades ago the NRC staff had succeeded in pushing through a much more aggressive approach for dealing with Mark I core damage and containment failure risks, including the challenges of a prolonged station blackout. There is no guarantee that the Japanese would have followed suit, but they would have been hard-pressed to ignore the NRC’s example. The NRC staff in the 1980s had all but predicted that something like Fukushima was inevitable without the fixes it prescribed, but the agency’s timidity—or perhaps even negligence—contributed to the global regulatory environment that made Fukushima possible. The NRC’s reliance on the flawed assumption that severe accident risks are acceptably low helped to perpetuate a dangerous fallacy in the United States and abroad. Ultimately, the NRC must bear some responsibility for the tragedy that struck Japan. And the commissioners must acknowledge that unless they fully correct the flawed processes of the past, they cannot truthfully testify before Congress that a Fukushima-like event “can’t happen here.”

  10

  “THIS IS A CLOSED MEETING. RIGHT?”

  It was the last session of the NRC’s twenty-third Regulatory Information Conference. The RIC, as it is known, is an annual gathering that attracts regulators, utility executives, industry representatives, the media, and others for discussions of new and ongoing initiatives by the NRC. More than three thousand people from the United States and around the globe, including a team of seismic experts from Japan, had descended on a Marriott conference center across Rockville Pike from the NRC’s White Flint headquarters for the three-day event.

  Now, as the conference was winding down, a few dozen people had gathered to hear a panel discuss the latest results of an NRC research project entitled State-of-the-Art Reactor Consequence Analyses, or SOARCA, as it was called in the NRC’s acronym-rich environment. The takeaway message from the panel: even if a severe nuclear power plant accident were to happen—say, an extended station blackout at a Mark I boiling water reactor—it wouldn’t be all that bad.

  The date was March 10, 2011.

  By all accounts the RIC had been a great success, a reflection of how the NRC’s stature had grown along with the improving fortunes of nuclear power in the United States. After decades without a new reactor order being placed, the United States in recent years had begun to see a resurgence of interest in nuclear energy, spurred on by policy makers, pundits, and industry boosters addressing a public that had largely forgotten the nuclear fears of three decades earlier. They argued that the atom was the only realistic alternative to greenhouse gas-belching fossil fuel plants for delivering large amounts of power to an increasingly energy-hungry world. That message was gaining traction, even among some longtime nuclear skeptics.

  Nuclear energy’s prospects were boosted by Congress in 2005. That year’s Energy Policy Act (EPAct) contained energy production tax credits and loan guarantees to help insulate utility investors from the formidable financial risks that had crippled many past nuclear projects.

  Thanks to incentives such as these, the NRC was soon besieged by more nuclear plant license applications than it could handle. To cope with the increase in its workload, the agency needed to expand significantly for the first time in decades.

  By 2011, some of the momentum had been siphoned off by the persistent recession, which froze credit markets and reduced energy demand, as well as by the ultracheap natural gas made available by hydraulic fracturing. But the so-called nuclear renaissance was very much alive in the nation’s capital. Interest remained high among many in Congress and within the Obama administration.1

  Turnout for the 2011 RIC reflected the renewed support. The conference reported the highest attendance in its history, and sessions such as those devoted to the technology du jour—small modular reactors that could be installed in all sorts of unlikely places around the world—generated so much excitement that auditoriums filled to capacity and people were turned away at the doors.

  As for the nagging issue of safety? That no longer seemed a showstopper, thanks in large measure to some deft messaging by the nuclear industry, led by the NEI. The long-ago accident at Three Mile Island represented the nuclear industry of old; the accident at Chernobyl was irrelevant to Western-designed and -operated nuclear plants. An entire generation of Americans had reached adulthood without encountering a major nuclear mishap. Perhaps things had changed when it came to nuclear safety.

  This was all good news for the NRC.

  Despite its official status as a neutral regulator, the NRC had been doing its part to promote the image of nuclear power as safe. SOARCA was a key element in that campaign. The goal was to supplant older NRC studies that estimated the radiological health consequences of severe reactor accidents. Many in the nuclear power community, both inside and outside the NRC, believed that those studies, dating back more than twenty years, grossly exaggerated the potential danger. Antinuclear groups were misusing old information to frighten the public, they argued. It was time for a new counteroffensive.

  In the 1980s, the industry had asserted—via the findings of its own Industry Degraded Core Rulemaking (IDCOR) program—that the NRC was wildly overestimating the radiation releases that could result from nuclear accidents. At the time, the NRC staff, bolstered by the independent review of the American Physical Society, did not concur. However, times had changed. Now the NRC itself was leading the charge to reduce source terms, the estimates of how much radioactive material an accident would actually release. The result was a new state-of-the-art study: SOARCA.

  But in 2011, after spending five years and millions of dollars on the project, the NRC had a new problem: the numbers SOARCA was generating weren’t cooperating with the safety message. It was déjà vu for the NRC, which had grappled throughout its history with how to explain away inconvenient facts about nuclear power risks.

  In the RIC’s final hours, a panel of NRC experts clicked open their PowerPoint presentations and provided a SOARCA update. Only the most attentive would have noted a subtle change in the language describing the study’s findings—an attempt, perhaps, to glide past some of SOARCA’s unwelcome results.

  No one in the room could know that these findings, the outcome of computer simulations, were about to be put to the test.

  If the conference had taken place a month later, with Fukushima’s devastated reactors still held in check by seawater while thousands of refugees lamented their poisoned homes, SOARCA’s message would have been far different. The panelists would have known by then that the accident scenarios they had analyzed were no longer just theoretical constructs, but instead described real-world events with real-world consequences.

  It was now clear that the release of even a small fraction of the radioactive material in a reactor core was enough to wreak havoc around the world and fundamentally disrupt the lives of tens of thousands of people. This was something SOARCA, designed to calculate only numbers of deaths, was not capable of predicting.

  Nearly three decades earlier, in November 1982, Representative Edward Markey of Massachusetts held a press conference in Boston with Eric van Loon, executive director of the Union of Concerned Scientists, to disclose troubling information: the NRC was suppressing the results of a study that estimated the consequences for human health and the environment of severe accidents for every nuclear power plant site in the United States.

  The NRC staff had drafted a report on the study for public consumption, but the commission had been sitting on it for over six months. At the time, three and a half years after Three Mile Island, antinuclear sentiments in the United States were running high. The possibility that the NRC was engaging in some sort of cover-up about risks confirmed the suspicions of many about the pronuclear bias of the agency.

  The study, performed by Sandia National Laboratories and given the bland title “Technical Guidance for Siting Criteria Development,” soon became known as the CRAC2 study, after the computer code it employed (“Calculation of Reactor Accident Consequences”). Among the calculations in the study was a projection of the dispersal of large radioactive plumes from a severe accident with containment failure and an estimation of the resulting casualties.

  Like a civilian version of the models used by Cold War–era military strategists that ranked the outcomes of thermonuclear conflicts in impersonal terms like “megadeaths,” CRAC2 was used to quantify the damage from nuclear accidents: the numbers of radiation injuries, “early” fatalities from high levels of radiation exposure, and “latent” cancer fatalities from lower and chronic exposures.

  CRAC2 and other radiological assessment codes, like Japan’s SPEEDI and the NRC’s RASCAL, utilize complex models to estimate doses to individuals by simulating the way radioactive plumes released by a nuclear accident are transported through the atmosphere and the biosphere. CRAC2 went beyond those other codes by using more detailed models of the ways people could be exposed: external irradiation by radioisotopes in the air and on the ground, inhalation, and consumption of contaminated food and water.
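
  In outline, such a calculation chains a dispersion model to exposure-pathway models. The sketch below is a drastic simplification offered for illustration only: the Gaussian plume formula is the textbook starting point for atmospheric dispersion, but every number in it, from the spread coefficients to the source term and the dose conversion factor, is a placeholder rather than a value taken from CRAC2, SPEEDI, or RASCAL.

```python
import math

def ground_level_concentration(Q, u, x, y, H):
    """Gaussian plume air concentration at ground level (Bq/m^3).

    Q: release rate (Bq/s), u: wind speed (m/s), x: downwind and
    y: crosswind distance (m), H: effective release height (m).
    The linear spread coefficients stand in for a single atmospheric
    stability class; real codes select them from weather data.
    """
    sigma_y = 0.08 * x  # crosswind plume spread (m), placeholder
    sigma_z = 0.06 * x  # vertical plume spread (m), placeholder
    return (Q / (math.pi * sigma_y * sigma_z * u)
            * math.exp(-y**2 / (2 * sigma_y**2))
            * math.exp(-H**2 / (2 * sigma_z**2)))

def inhalation_dose(chi, hours):
    """Committed dose (Sv) from breathing air at concentration chi.

    Breathing rate and the dose conversion factor are rough,
    isotope-dependent placeholders (order of magnitude for I-131).
    """
    breathing_rate = 3.3e-4  # m^3/s, adult light activity
    dcf = 1.3e-8             # Sv per Bq inhaled
    return chi * breathing_rate * dcf * hours * 3600.0

# Hypothetical release, receptor 5 km directly downwind of the stack.
chi = ground_level_concentration(Q=1e12, u=5.0, x=5000.0, y=0.0, H=30.0)
print(f"air concentration: {chi:.2e} Bq/m^3")
print(f"2-hour inhalation dose: {inhalation_dose(chi, 2.0):.2e} Sv")
```

  A real consequence code layers onto this skeleton the ground-shine and ingestion pathways, shifting hour-by-hour weather, and models of where the population is and how it evacuates.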

  CRAC2 also had a crude model for estimating the economic consequences associated with land contamination, addressing issues such as lost wages and relocation expenses for evacuees, and costs of cleanup or temporary condemnation of contaminated property. What it couldn’t estimate were nonquantifiable consequences such as the psychological impacts on people forced to leave their contaminated homes and businesses either temporarily or permanently.

  Radiological assessment codes like CRAC2 must incorporate many moving parts—plumes are traveling, radioactive particles are being deposited, and the population itself is not sitting still. Each calculation requires the input of hundreds of parameters, from source terms to types of building materials to the movement of evacuees and, eventually, even to the long-term effectiveness of decontamination efforts and the radiation protection standards governing people’s return to their homes. Consequently, the results are very uncertain. Far too often, however, the tendency among regulators has been to endow these rough estimates with more authority than they deserve.

  One of the largest sources of uncertainty is weather. Some types of weather could be much more hazardous than others, depending, for example, on whether the wind was blowing toward heavily populated areas and whether there was precipitation. But the NRC’s as yet unreleased CRAC2 draft report contained only averages for weather conditions.

  What Ed Markey and Eric van Loon presented to reporters that November day were not just the averages but the “worst case” results for the most unfavorable weather, such as a rainstorm washing out the plume as it passed over a large population center. In these projections, the “peak” early fatalities, as they were called, were far greater than the average values in the NRC report. The numbers were in fact shocking: for the Indian Point plant, thirty-five miles from midtown Manhattan, a worst-case accident could cause more than fifty thousand early fatalities from acute radiation syndrome. In contrast, the average value for early fatalities was 831.
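
  The gulf between the average and the peak is exactly what averaging over weather produces. A toy Monte Carlo sketch, in which every probability and multiplier is invented for illustration and has no connection to the actual CRAC2 inputs, shows how a mean in the low hundreds can coexist with a worst-case tail in the tens of thousands:

```python
import random

random.seed(1982)  # fixed seed for a repeatable illustration

def simulated_fatalities():
    """One hypothetical weather sequence -> one consequence figure.

    All probabilities and multipliers below are made up to make the
    average-versus-peak point; none comes from CRAC2.
    """
    base = random.lognormvariate(3.0, 1.0)  # benign-weather fatalities
    toward_city = random.random() < 0.10    # wind toward a population center
    rainout = random.random() < 0.02        # rain washing out the plume
    if toward_city:
        base *= 30
        if rainout:
            base *= 50  # washout deposits the plume on the city
    return base

results = sorted(simulated_fatalities() for _ in range(100_000))
mean = sum(results) / len(results)
peak = results[int(0.999 * len(results))]  # 99.9th-percentile "peak" case
print(f"average early fatalities: {mean:,.0f}")
print(f"peak-weather case:        {peak:,.0f}")
```

  With these made-up inputs the mean comes out near two hundred while the 99.9th-percentile case approaches thirty thousand, the same qualitative spread that separated the report’s averages from the figures Markey released.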

  The NRC had held on to the draft CRAC2 report for over half a year, presumably because officials worried that even the average-value casualty figures would be too much for the public to swallow. Markey’s disclosure of the worst-case spreadsheet forced the NRC’s hand, and it finally released the report that same day. The commission was quick to defend its decision not to include the worst-case results, offering a rationale that would become familiar over the years: the chances of an accident severe enough to produce such death and destruction were so slight as to be hardly worth mentioning. Or, as the NRC’s head of risk analysis, Robert Bernero, said at the time, the likelihood of worst-case conditions was “less than the possibility of a jumbo jet crashing into a football stadium during the Super Bowl.”

  For the next two decades, this line of reasoning formed the backbone of the NRC’s strategy for addressing the threat of severe accidents—namely, that events threatening major harm to the public were so unlikely that they didn’t need to be strictly regulated, a view shared by Japanese authorities and other members of the nuclear establishment worldwide.

  In its risk assessments, the NRC was careful always to multiply high-consequence figures by tiny probabilities, ending up with small risk numbers. That way, instead of having to talk about thousands of cancer deaths from an accident, the NRC could provide reassuring-sounding risk values like one in one thousand per year. The NRC was so fixated on this point that it insisted that information about accident consequences also had to refer to probabilities.2
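
  The arithmetic behind such a figure is a single multiplication. With invented round numbers of plausible magnitude, not figures from any NRC study:

\[
\text{risk} = p \times C \approx \Bigl(10^{-7}\,\tfrac{\text{accidents}}{\text{reactor-year}}\Bigr) \times \Bigl(10^{4}\,\tfrac{\text{deaths}}{\text{accident}}\Bigr) = 10^{-3}\,\tfrac{\text{deaths}}{\text{reactor-year}}
\]

  Quoted as the product, the risk sounds like a thousandth of a death per reactor-year; quoted as the consequence alone, it is ten thousand deaths. Which factor was put in front of the public made all the difference.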

  However, critics argued that the probability estimates were so uncertain—and there was so little real data to validate them—that the NRC could not actually prove that severe accidents were extremely unlikely. Therefore, accident consequences should be considered on their own terms.

  In any event, the low-probability argument became less relevant in the aftermath of the September 11 aircraft attacks, when the public began to wonder what might have happened had al Qaeda decided to attack nuclear power plants that day instead of the World Trade Center and the Pentagon. No longer could one say with a straight face that a jumbo jet crashing into the Super Bowl was a one-in-a-billion event—not if the pilot intended to crash there. There was no credible way to calculate the probability of a terrorist attack and come up with a meaningful number. The NRC had long acknowledged this, and consequently did not incorporate terrorist attacks into its probabilistic risk assessments or cost-benefit analyses.

  No longer able to hide behind its low-probability fig leaf, the NRC struggled to reassure Americans that they had nothing to fear from an attack on a nuclear power plant. While maintaining that nuclear reactors had multiple lines of defense, from robust containment buildings to highly trained operators, the NRC also had to concede that the reactors were not specifically designed to withstand direct hits from large commercial aircraft, and that it was not sure what would happen if such an attack occurred. The industry steered the public discussion toward the straw-man issue of whether or not the plane would penetrate the containment—it couldn’t, according to the NEI—even though many experts pointed out that terrorists could cause a meltdown by targeting other sensitive parts of a plant.

  To learn more about what could happen in an attack, the NRC commissioned a series of “vulnerability assessments” from the national laboratories, but the results remained largely classified for security reasons. Aside from a series of carefully constructed and vaguely reassuring talking points, the NRC provided few details beyond “Trust us.” Communities near nuclear plants would get few tangible answers about the vulnerability of reactors in their midst.

  Meanwhile, the 9/11 disaster had provided an opening for environmental and antinuclear groups to once again raise the safety concerns that had faded from view since Chernobyl. In the vacuum of new public information from the NRC, activists found ample fodder in the old CRAC2 study and its references to “peak fatality” and “peak injury” zones. Among them was the organization Hudson Riverkeeper, campaigning to shut down the Indian Point plant. Interpreting the CRAC2 results liberally, the organization’s leader, Robert F. Kennedy Jr., spoke of dangers to the many millions of people within what he referred to as Indian Point’s “kill zone.”

  Such talk was deeply upsetting to one NRC commissioner in particular: Edward McGaffigan. A voluble, intellectual, and pugnacious former diplomat and Senate defense aide, McGaffigan began his tenure on the NRC in 1996 by extending open channels of communication to the public. But after 9/11 he became openly hostile toward anyone he believed was exaggerating the dangers of nuclear power or misinterpreting the results of NRC technical studies.

  “The media holds us to a very high standard, that what we say is factually true . . . but the antinuclear groups . . . basically get away with saying almost anything, however factually untrue it is,” McGaffigan told a gathering of NRC staff in 2003, adding, “The way we fix it is we work aggressively to get our story out.” The story, in his view, was that nuclear power was safe. Those who argued otherwise were misinformed and misguided.

 
