Fukushima: The Story of a Nuclear Disaster


by David Lochbaum


  McGaffigan was not alone in his frustration; other commissioners also accused critics of scare tactics. But McGaffigan went further, mocking members of the public who expressed the views he disdained.

  McGaffigan’s views worried nuclear watchdog groups. After all, how could a regulator be trusted to make the decisions necessary to protect public health if he had such absolute faith in the benign nature of the facilities he oversaw and did not worry about the effects of low-level radiation?

  But there was more to McGaffigan’s crusade; he accused the NRC staff itself of overstating the hazards of nuclear accidents. In his view, disinformation was coming from inside the agency as well as from hostile critics elsewhere. The staff’s technical analyses, he said, were making unrealistically dire assumptions. One case in point was the risk posed by spent fuel pools, a subject that would surge to relevancy in little more than a decade at Fukushima Daiichi.

  Edward McGaffigan, who was an NRC commissioner from 1996 to 2007. U.S. Nuclear Regulatory Commission

  In January 2001, the NRC staff released a report, “Technical Study of Spent Fuel Pool Accident Risks at Decommissioning Nuclear Power Plants,” or NUREG-1738. The report evaluated the potential consequences of an accident, such as a large earthquake, leading to the rapid draining of a spent fuel pool. NUREG-1738 estimated that within hours such an event could cause a zirconium fire throughout the pool that would result in melting of the fuel and release of a large fraction of its inventory of cesium-137 as well as significant amounts of other radionuclides. The study found that dozens of early fatalities and thousands of latent cancer fatalities could result among the downwind population.

  Data from that study was later incorporated by outside experts into a technical paper, which was published in 2003 in the respected journal Science and Global Security.3 The paper concluded that the U.S. practice of tightly packing spent fuel in pools was risky. It called on utilities to move most of the fuel to safer dry storage casks. In an angry response, McGaffigan called the publication, based at Princeton University, a “house journal” of “antinuclear activists.” He fumed that “terrorists can’t violate the laws of physics, but researchers can.” But he also denounced NUREG-1738 itself, calling it “the worst” of excessively pessimistic staff studies on spent fuel vulnerabilities.

  McGaffigan was so sure the Princeton study was wrong that, in a March 2003 public meeting, he appeared to direct the NRC staff to rebut the study before the staff had completed its own analysis. Such interference by a commissioner was practically unheard of. The NRC’s inspector general investigated and concluded that McGaffigan had tried to exert inappropriate influence on the research staff.4

  It was in this overheated political environment that the SOARCA study was conceived. The early results of the nuclear plant vulnerability assessments that the NRC had been conducting since shortly after the 9/11 attacks indicated, in the agency’s view, that the radiological releases and public health consequences resulting from terrorist-caused meltdowns generally wouldn’t be as catastrophic as previous studies, including CRAC2, had found.

  Unfortunately for the NRC, it could not broadcast this good news because the vulnerability studies, being related to terrorist threats, were considered “classified” or “safeguards” information. But some inside the NRC reasoned that if the agency applied the same analysis methods to accidents instead of terrorist attacks, it might be able to dodge some of the security restrictions and get the information out to the public. SOARCA was the result.

  There was a downside. Opening up the analytical process would also expose the staff’s methodology and assumptions to unwelcome scrutiny by outsiders. So the NRC planned to keep a veil of secrecy over the SOARCA program itself, stamping the staff’s proposal for how to conduct the study, as well as the commission’s response, as “Official Use Only—Sensitive Internal Information.” The NRC would control all information about the study and report the results only when it was ready, and in a manner that could not be—in its judgment—misinterpreted or misused. From the outset, one commissioner, Gregory Jaczko, objected, arguing that the study guidelines and other related documents should be publicly released. He was outvoted.

  The NRC’s concern about managing the information coming from SOARCA was evident from the beginning. The commissioners wanted the staff to develop “communication techniques” for presenting the “complex” results to the public. Although the technical analysis had barely begun, the first draft of the communications plan asserted that nuclear power plants were safe and had been getting safer for more than two decades. Even so, the commissioners rejected the draft and continued to micromanage the message. The communications plan would go through at least six revisions before they were satisfied.

  One theme the NRC was determined to emphasize was SOARCA’s scientific rigor. As the name suggested, the project was to be all about using “state-of-the-art information and computer modeling tools to develop best estimates of accident progression and . . . what radioactive material could potentially be released into the environment.” But the NRC Office of Research, try as it might to be an independent scientific body, could never truly be free from the commission’s policy objectives. The research office had faced accusations in the past of trying to influence the results of studies performed by its contractor personnel.5 Now, the clear direction from McGaffigan and other senior officials would make it difficult to produce a completely objective study.

  Although by all appearances the purpose of SOARCA was to reassure the public that nuclear power was safe, the nuclear industry did not enthusiastically jump on board. Perhaps company executives did not relish the prospect of another CRAC2-like spreadsheet making an appearance, listing potential accident casualty figures for every nuclear plant in the country—a recipe for bad publicity no matter how low the numbers. After all, Ed Markey was still in Congress, waging battles over nuclear safety.

  The Nuclear Energy Institute interceded, sending the NRC a list of forty-four questions about the project, including a suggestion that a fictional plant be used instead of a real one. The SOARCA researchers soon found that very few utilities were interested in cooperating with the NRC on the study. (For good measure, the NEI hinted that any volunteers would want the right to review how their plants were portrayed.) The initial plan to analyze the entire U.S. nuclear fleet of sixty-seven plant sites was whittled down to eight and then to five; ultimately, only three were willing to participate. In the end, the NRC staff analyzed just two stations: Peach Bottom in Pennsylvania, a two-unit Mark I BWR, and Surry in Virginia, a two-unit PWR.6

  With a vast, complex, and uncertainty-ridden study like SOARCA, it wasn’t necessary to commit scientific fraud to guide the process to a desired outcome. There were plenty of dusty corners in the analysis where helpful assumptions could be made without drawing attention. The NRC employed a number of maneuvers to help ensure that the study would produce the results it wanted, selectively choosing criteria—in effect, scripting the accident.

  It discarded accident sequences that were considered “too improbable,” screening out events that would produce very large and rapid radiological releases, such as a large coolant pipe break. It only evaluated accidents involving a single reactor, even though some of the events it considered, such as earthquakes, could affect both units at either Peach Bottom or Surry. It considered its “best estimate” to be scenarios in which plant personnel would be able to “mitigate” severe accidents and prevent any radiological releases at all; it analyzed scenarios in which mitigation was unsuccessful but pronounced them unlikely. Perhaps most curious was the NRC’s decision to assume that lower doses of radiation are not harmful—an assertion at odds not only with a broad scientific consensus but with the NRC’s own regulatory guidelines.

  The fog grew even thicker when the time came to decide how the study results would be presented. First, the commissioners decreed that figures such as the numbers of latent cancer fatalities caused by an accident should not appear. Instead, the report would provide only a figure diluted by dividing the total number of cancer deaths by the number of all people within a region. For instance, if the study predicted one hundred cancer deaths among a population of one million, the individual risk would be 100 ÷ 1,000,000, or one in ten thousand. So rather than saying hundreds or even thousands of cancer deaths would result from an accident—guaranteed to grab a few headlines—the report would state a less alarming conclusion. And since the NRC’s probabilistic risk assessment studies estimated that the chance of such an accident was only about one in one million per year, the risk to an individual—probability times consequences—would be far smaller still. To use the same example, it would be one million times smaller than one in ten thousand, or a mere one in ten billion per year: a number hardly worth contemplating. The communication strategy for SOARCA appeared to be taking its inspiration from the old Reactor Safety Study and its discredited comparisons of the risks of being killed by nuclear plant accidents versus meteor strikes.
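  The dilution arithmetic described above can be sketched in a few lines. The numbers here are the illustrative ones from the text (one hundred deaths, one million people, a one-in-a-million annual accident probability), not actual SOARCA outputs:

```python
# Sketch of the risk-dilution arithmetic described in the text.
# All numbers are the text's illustrative example, not SOARCA results.

total_cancer_deaths = 100        # predicted deaths among the exposed population
population = 1_000_000           # everyone counted within the region
accident_prob_per_year = 1e-6    # PRA estimate: chance of the accident per year

# Step 1: dilute the headline figure into a per-person risk.
individual_risk = total_cancer_deaths / population   # 0.0001, "one in ten thousand"

# Step 2: multiply by the accident probability to get an annual individual risk.
annual_individual_risk = individual_risk * accident_prob_per_year  # ~1e-10,
# i.e. about one in ten billion per year — "hardly worth contemplating"

print(individual_risk, annual_individual_risk)
```

The same one hundred deaths are present in both presentations; only the framing changes.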

  But there was more obfuscation. The NRC would only reveal the values of these results for average weather conditions, and not the more extreme values for worst-case weather; this was the same strategy of evasion that had gotten the agency in hot water with Congressman Markey and the media back in the days of the CRAC2 report.

  The commissioners also told the researchers to drop their original plans to include calculations of land contamination and the associated economic consequences. Earlier, the project staff had carried out such calculations for terrorist attacks at two reactor sites with high population density—Indian Point, north of New York City, and Limerick, northwest of Philadelphia—but apparently decided they did not want that kind of information to be made public. According to a staff memo, the models that had been used produced “excessively conservative” results—meaning, in NRC parlance, that the researchers thought the damage estimates were unrealistically high.7 The staff said the models needed to be updated to obtain a “realistic calculation.”

  Some issues that emerged as the study progressed did not fit into the predetermined narrative. For instance, it was hard to explain why an earthquake or a major flood striking the Peach Bottom and Surry sites, each featuring side-by-side reactors, could be assumed to damage only one unit and leave the other unscathed. Logically, an accident involving both units would not only increase the source term, or amount of radioactive materials released to the environment, but also force operators to deal simultaneously with two damaged reactors. And, as the analysts noted, “a multiple-unit SBO [station blackout] may require more equipment, such as diesel-driven pumps and portable direct current generators, than what is currently available at most sites.”

  The analysts calculated that these scenarios had alarmingly high probabilities. But instead of following the study guidelines and including the scenarios, the staff decided in 2008 to recommend that the case of dual units be considered as a “generic issue”—a program where troublesome safety concerns are sent to languish unresolved for years. (The NRC was still pondering the recommendation three years later when Fukushima demonstrated that multiple-unit accidents were not merely a theoretical concern.)

  The NRC’s independent review group, the Advisory Committee on Reactor Safeguards, was not amused by what appeared to be a blatant attempt to bias the SOARCA study. In particular, it objected to the staff’s seemingly arbitrary approach for choosing accident scenarios to analyze. It pointed out that SOARCA’s good news safety message could be less the result of improved plant design or operation and more the result of “changes in the scope of the calculation.” Simply put, SOARCA had analyzed different accident scenarios from those used in earlier studies like CRAC2, and therefore it could not be directly compared with them. Although the final SOARCA results might look better, that was because SOARCA was deliberately excluding the very events that could cause a large, fast-breaking radiation release of the kind CRAC2 had evaluated.

  The NRC rebuffed its Advisory Committee’s criticism and continued on the course the commissioners had set. In deference to public complaints that the study was being conducted in secret with no independent quality control, the NRC agreed to form a peer review committee. However, the NRC chose all the members, and the committee’s meetings were also held in secret. The public would just have to trust that the committee was doing a good job.

  When the NRC staff presented preliminary results of the study to the Advisory Committee in November 2007, it appeared that the staff had successfully obtained the conclusions its bosses wanted. First, the staff judged that all the identified scenarios could reasonably be mitigated—that is, plant workers, using B.5.b measures and severe accident management guidelines (SAMGs), would be able to stop core damage or block radiation releases from the plant. Even if they failed to prevent the accident from progressing, the news would not be too dreadful: the release of radioactive material would occur later and likely be much smaller than past studies had assumed, resulting in “significantly less severe” off-site health consequences. And finally, the NRC staff was so confident that it stopped the simulations after forty-eight hours, assuming that by then the situation would have been stabilized.

  The results that the NRC staff presented to the Advisory Committee were striking. While CRAC2 had found that following a worst-case or “SST1” release,8 acute radiation syndrome would kill ninety-two people at Peach Bottom and forty-five at Surry, SOARCA found the number of deaths to be exactly zero at both sites. There was no magic—or fundamental improvement in reactor safety—behind this stunning difference. The NRC had just fiddled with the clock. In the CRAC2 study, the radiation release began ninety minutes after the start of the accident, before most of the population within ten miles of the plant had time to evacuate, putting many more at risk. But the NRC had chosen accidents for SOARCA that unfolded more slowly. As a result, for most of the SOARCA scenarios, analysts assumed that the population within the ten-mile emergency planning zone would be long gone before any radiation was released. That way, by the time a release did occur, people would be too far away to receive a lethal dose. This was not an apples-to-apples comparison to the earlier study.

  Harder to understand were the far lower numbers of cancer deaths projected by the SOARCA analysis, because even people beyond the ten-mile emergency planning zone could receive doses high enough to significantly increase their cancer risk. Whereas CRAC2 estimated 2,700 cancer deaths at Peach Bottom and 1,300 at Surry for this group, SOARCA project staff told the Advisory Committee that they had instead found twenty-five and zero cancer deaths, respectively.

  That, too, involved sleight of hand—and some shopping around to find a convenient statistic. Despite a widespread scientific consensus that there is no safe level of radiation, the NRC staff decided to assume that such a level indeed existed: no cancers would develop until exposures reached five rem per year (or ten rem in a lifetime). Any exposure below that would be harmless.
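  The effect of swapping dose-response models can be illustrated with a toy calculation. The dose and population below echo the figures Sullivan mentions (2 millirem, 80 million people), but the risk coefficient is an assumed round number of roughly the scale used in collective-dose estimates, not SOARCA's actual model:

```python
# Toy comparison of the two dose-response assumptions described in the text.
# The risk coefficient (5e-4 fatal cancers per person-rem) is an assumption
# chosen for illustration; it is not taken from SOARCA or CRAC2.

def lnt_deaths(dose_rem, population, risk_per_rem=5e-4):
    """Linear no-threshold: every dose, however small, contributes to risk."""
    return dose_rem * population * risk_per_rem

def threshold_deaths(dose_rem, population, threshold_rem=10.0, risk_per_rem=5e-4):
    """Threshold model: any dose below the lifetime threshold counts as harmless."""
    if dose_rem < threshold_rem:
        return 0.0
    return dose_rem * population * risk_per_rem

# A tiny dose (2 millirem = 0.002 rem) spread across a huge population:
dose, people = 0.002, 80_000_000
print(lnt_deaths(dose, people))        # nonzero: collective dose still predicts deaths
print(threshold_deaths(dose, people))  # 0.0: below the threshold, no harm at all
```

Under LNT, a minuscule dose multiplied across tens of millions of people still yields a body count; under the threshold assumption, the identical exposure predicts exactly zero cancers. That is the arithmetic behind "assume a high threshold, predict many fewer cancers."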

  At a 2007 Advisory Committee briefing closed to the public, Randy Sullivan, an emergency preparedness specialist for the NRC, let slip one reason why the SOARCA staff saw the need to use such an unconventional assumption. Apparently, the staff didn’t like the numbers it would get if it used the widely endorsed linear no-threshold hypothesis (LNT), which assumes that any dose of radiation, no matter how low, has the potential to lead to cancer. It was an easy choice: assume a high threshold, predict many fewer cancers. Otherwise, the number of cancer deaths predicted by SOARCA would be so large it could frighten people.

  At the briefing, Sullivan acknowledged: “We could easily do LNT, just go ahead, issue the source term, calculate it out to 1,000 miles, run it for four days, assess the consequences for, I don’t know, 300 years and say 2 millirem times [the population within] 1,000 miles of Peach Bottom. What is that? Eighty million people. . . . We’re going to kill whatever. This is a closed meeting. Right? I hope you don’t mind the drama.

  “So then we’ll say that our best estimate is that there will be many, many thousands . . . you’ll have 2 millirem times 80 million people and you’ll claim that you’re going to kill a bunch of them.”

  Considering a five rem per year threshold ultimately proved too misleading even for the SOARCA team itself to tolerate, so it eventually evaluated a range of thresholds, including the LNT assumption of zero. But other optimistic assumptions enabled the team to keep the numbers small. A 2009 update to the commissioners informed them that the study continued to find off-site health consequences “dramatically smaller” than those projected by CRAC2.

  SOARCA was supposed to be a three-year project. But by the time of the SOARCA session at the March 2011 Regulatory Information Conference it had dragged on for nearly six years. (Commissioner McGaffigan would not live to see the fruits of the project’s labors—he died in 2007.) Addressing the methodological problems that the Advisory Committee had criticized, running new analyses requested by the peer review panel, coping with problems with contractors, and project mismanagement all contributed to repeated postponements of the completion date. But perhaps the biggest time sink was the need to address more than one thousand comments from other NRC staff, who also had trouble swallowing some of the SOARCA methodology.

  The unanticipated volume of staff comments was a clear indication of internal discomfort with the study. In January 2011, after being informed of yet another delay requested by the staff, Office of Research director Brian Sheron wrote in an e-mail, “[I]f we miss this date, I suggest we all start updating o[u]r resumes.”

  One of the major internal disagreements had to do with SOARCA’s assumptions regarding so-called mitigated scenarios—in plain English, how fast and successfully operators could use the emergency tools at hand to wrestle an accident to a safe conclusion and avoid a radiation release. Could workers really start and operate the RCIC system at Peach Bottom without generator or battery power, as the SOARCA project staff had confidently concluded? Could they hook up and run portable pumps and generators to run safety systems for forty-eight hours (the limit of the SOARCA analysis)?

 
