Fukushima: The Story of a Nuclear Disaster


by David Lochbaum


  One final issue to consider is the risk of land contamination, something that could be an enormous problem even if all evacuation measures were successful. In the past the NRC has assessed potential accident consequences solely in terms of early fatalities and latent cancer deaths, but Fukushima showed that widespread land contamination, and the economic and social upheaval it creates, must also be counted.3

  The NRC staff proposed to the commissioners that the NRC address this issue in revising its guidelines for calculating cost-benefit analyses, but the commissioners showed little interest. That issue, too, appears to be on a slow track.

  As for the fate of the NTTF’s Recommendation 1—revising the regulatory framework? That seems to have slipped not only off the front burner but possibly off the stove. In February 2013, the NRC staff failed to meet the commissioners’ deadline for a proposal related to the recommendation and asked for more time. As of this writing, Recommendation 1 continues to fade as a priority, with the NRC staff contending that “it is acceptable, from the standpoint of safety, to maintain the existing regulatory processes, policy and framework.”

  Safety IOUs are worse than worthless. They represent vulnerabilities at operating nuclear plants that the NRC knows to exist but that have not yet been fixed. They are, simply put, disasters waiting to happen.

  The NRC’s practice of identifying a safety problem and accepting a non-solution continues. The post-Fukushima proposals are just the latest example—albeit the most worrisome.

  Severe reactor accidents will continue to happen as long as the nuclear establishment pretends they won’t happen. That thinking makes luck one of the defense-in-depth barriers. Until the NRC acknowledges the real possibility of severe accidents, and begins to take corrective actions, the public will be protected only to the extent that luck holds out.

  Of course trade-offs are inevitable. It would be ideal if every defense-in-depth barrier were fully and independently protective against known hazards, but realistically that price tag would likely be prohibitive. The nuclear industry is quick to oppose new safety rules on the basis of cost, which is hardly surprising given the concerns of shareholders as well as ratepayers.

  On the other hand, one must consider the price tag, in both economic and human terms, of an accident like Fukushima. Ask TEPCO’s shareholders—as well as the Japanese public—today what they would have paid to avoid that accident.

  So how safe is safe enough? In that critical decision, the public has largely been shut out of the discussion. This is true in the United States, in Japan, and everywhere else nuclear plants are in operation. Nuclear development, expansion, and oversight have largely occurred behind a curtain.

  Nuclear technology is extremely complex. Its advocates, in their zeal to promote that technology, have glossed over unknowns and uncertainties, thrown up a screen of arcane terminology, and set safety standards with unquantifiable thresholds such as “adequate protection.” In the process, the nuclear industry has come to believe its own story.

  Regulators too often have come to believe that there is a firmer technical basis for their decisions than actually exists. Officials, in particular, must grapple with overseeing a technology that few thoroughly understand, especially when things go wrong. Fukushima demonstrated that.

  Meanwhile, average citizens have been lulled into believing that nuclear power plants are safe neighbors, needing no attention or concern because the owners are responsible and the regulators are thorough. Yet it is those citizens’ health, livelihoods, homes, and property that may be permanently jeopardized by the failure of this flawed system.

  The public needs to be fully informed of uncertainties, of risks and benefits, and of the trade-offs involved. Scientists and policy makers must be candid about what they know—and don’t know; about what they can honestly promise—and can’t promise. And once full disclosures are made, the people must be given the final voice in setting policy. They must be the arbiters of what is acceptable and how the government acts to ensure their protection.

  What that decision-making process will look like is unclear. One thing is certain: we’re nowhere close to it now.

  This chapter has focused on two questions inspired by Fukushima: who is to blame for the accident and what can be done to prevent the next one? By now it should be clear that the entire nuclear establishment is responsible, rather than just TEPCO and its regulators. Even if indicted, however, the nuclear establishment likely could not be convicted. For it is sheer insanity to keep doing the same thing over and over hoping that the outcome will be different the next time.

  What is needed is a new, commonsense approach to safety, one that realistically weighs risks and counterbalances them with proven, not theoretical, safety requirements. The NRC must protect against severe accidents, not merely pretend they cannot occur.

  How can that best be achieved? First, the NRC needs to conduct a comprehensive safety review with the blinders off. That must take place with what former NRC Commissioner Peter Bradford calls regulatory “skepticism.” The commission staff—and the five commissioners—should stop worrying so much about maintaining regulatory “discipline” and start worrying more about the regulatory tunnel vision that could cause important risks to be missed or dismissed.

  That safety review must come in the form of hands-on, real-time regulatory oversight. Every plant in the United States should undergo the kind of in-depth examination of severe accident vulnerabilities that the NRC contemplated in the 1980s but fell short of implementing.

  The first step is adoption of a technically sound analysis method that takes into account the deficiencies in risk assessment that critics have noted over the decades, particularly the failure to fully factor in uncertainty. Issues that are not well understood need to be included in error estimates, not simply ignored. Setting the safety bar at x must carry the associated policy question “What if x plus 1 happens?”
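
  To make that point concrete, consider a toy calculation; the sketch below is ours, not the book’s or the NRC’s, and every number in it is invented. It propagates a factor-of-five uncertainty through a simple two-factor risk estimate and compares the result with the bare point estimate.

```python
# Illustrative sketch only: all numbers are invented, and this is not an
# NRC method; it simply shows why a point estimate can understate risk.
import math
import random
import statistics

random.seed(1)

def sample_lognormal(median, error_factor):
    """Draw from a lognormal whose 95th percentile is median * error_factor."""
    sigma = math.log(error_factor) / 1.645
    return random.lognormvariate(math.log(median), sigma)

# Hypothetical point estimates for one accident sequence:
FLOOD_FREQ = 1e-4          # severe-flood frequency, per reactor-year
P_FAIL_GIVEN_FLOOD = 1e-2  # chance the flood defeats all cooling

point_estimate = FLOOD_FREQ * P_FAIL_GIVEN_FLOOD  # 1e-6 per reactor-year

# Now admit that each input is known only to within a factor of five,
# a modest concession for rare external events, and propagate it:
samples = sorted(
    sample_lognormal(FLOOD_FREQ, 5.0) * sample_lognormal(P_FAIL_GIVEN_FLOOD, 5.0)
    for _ in range(100_000)
)

print(f"point estimate:  {point_estimate:.1e}")
print(f"mean of samples: {statistics.mean(samples):.1e}")
print(f"95th percentile: {samples[int(0.95 * len(samples))]:.1e}")
```

  Because the uncertain inputs are skewed toward their tails, the mean and the 95th percentile land several times above the point estimate. That gap is the “what if x plus 1 happens?” question in numerical form.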

  Fortunately, the NRC does not have to start from scratch to do a sound safety analysis. Each nuclear plant that has applied for a twenty-year license renewal from the NRC—around 75 percent of all U.S. plants—has conducted a study called a “severe accident mitigation alternatives” (SAMA) analysis as part of the environmental review required under the National Environmental Policy Act.

  A SAFETY FRAMEWORK READY FOR ACTION

  A SAMA analysis entails identifying and evaluating hardware and procedure modifications that have the potential to reduce the risk from severe accidents, then determining whether the value of the safety benefits justifies their cost.

  Oddly enough, even though the plant owners and the NRC have identified dozens of measures that would pass this cost-benefit test and thus might be prudent investments, none have had to be implemented under the law. That’s because the NRC has thrown into the equation its contorted backfit rule: for a change to be required, it must also represent a “substantial safety enhancement,” a standard very hard to meet given the low risk estimates generated by the industry’s calculations.
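
  The logic of that two-stage screen fits in a few lines. The sketch below is ours, with invented dollar figures and an invented numerical stand-in for the “substantial safety enhancement” test; it is meant only to show how an upgrade can pass the cost-benefit test and still never be required.

```python
# Illustrative sketch of the two-stage screen described above. All dollar
# figures, and the numerical stand-in for the backfit rule's "substantial
# safety enhancement" test, are invented for illustration.

# Hypothetical candidate upgrades: (name, cost, monetized averted risk).
# A SAMA analysis would estimate the averted risk over the plant's
# remaining life; here the values are simply made up.
CANDIDATES = [
    ("portable diesel pump",         500_000,  1_200_000),
    ("flood barrier upgrade",      2_000_000,  3_500_000),
    ("hardened containment vent", 10_000_000,  9_000_000),
]

# Invented proxy for the backfit hurdle: the upgrade must buy a large
# absolute risk reduction, not merely pay for itself.
SUBSTANTIAL_CUT = 5_000_000

for name, cost, averted in CANDIDATES:
    cost_beneficial = averted > cost
    required = cost_beneficial and averted > SUBSTANTIAL_CUT
    print(f"{name:26s} cost-beneficial={cost_beneficial!s:5} required={required}")

# Output: two upgrades pass the cost-benefit test, but none clears the
# "substantial" gate, so none is required (the pattern the text describes).
```

  Under these made-up numbers, two of the three upgrades pay for themselves, yet none clears the “substantial” gate, so none is mandated.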

  Thus, the SAMA process has been merely an academic exercise. But the upgrades identified in the SAMA analyses provide a comprehensive list of changes that could reduce severe accident risk at each plant.

  The SAMA changes are a starting point. They should be reevaluated under a new framework, one that better accounts for uncertainties and the limitations of computer models, improves the methodology for calculating costs and benefits, and allows the public to have a say in the answer to the question “How safe is safe enough?” This process would produce a guide for plant upgrades that could fundamentally improve the safety of the entire reactor fleet and provide the public with a yardstick by which to measure performance.

  Another tool for assessing severe accident risks would be a stress test program—an analysis of how each plant would fare when subjected to a variety of realistic natural disasters and other accident initiators. (As for industry’s much-touted but untested FLEX “fixes,” they could be taken into account, but their limitations and vulnerabilities would be fair game in any analysis.)

  Before the testing process begins, another change is essential: the public, not the industry, must first determine what is a passing grade.

  These SAMA analyses represent an unvarnished checklist of the changes needed at each nuclear plant in the United States to drive down the risk of an American Fukushima. Using that information as a roadmap for enhanced regulation and operations is not the entire answer, but it is a first step in better understanding the risks of nuclear power and how to control them.

  Once the question “How safe is safe enough?” is answered, a second question must be asked and resolved, again with input from the public. That question is: “How much proof is enough?”

  In other words, how best to prove that industry and regulators are actually complying with the new rules, both in letter and in spirit? The NRC’s regulatory process is highly convoluted and opaque; just as in Japan, public trust has suffered as a result.

  In the end, the NRC must be able to tell the American public, “We’ve taken every reasonable step to protect you.” And it must be the public, not industry or bureaucrats, who define “reasonable.”

  As Japan was marking the second anniversary of the Fukushima Daiichi accident, the NRC held its annual Regulatory Information Conference, the twenty-fifth, once again attracting a large domestic and international crowd of regulators, industry representatives, and others.

  During the two-day session, many presentations were devoted to the lessons learned from Fukushima, including technical discussions about core damage, flooding and seismic risks, and regulatory reforms. But by now it was apparent that little sentiment existed within the NRC for major changes, including those urged by the commission’s own Near-Term Task Force to expand the realm of “adequate protection.” The NRC was back to business as usual, focused on small holes in the safety net, ignoring the fundamental lesson of Fukushima: This accident should have been no surprise, and without wholesale regulatory and safety changes, another was likely.

  One of the final events of the conference was a panel discussion featuring the agency’s four regional administrators and two nuclear industry officials, who fielded questions from an audience of other nuclear insiders. The subjects ranged from dealing with the public in the post-Fukushima era to the added workload of inspectors at U.S. reactors. The mood was upbeat, the give-and-take friendly.

  But amid the camaraderie, one member of the panel seemed impatient to deliver a message to the audience. When it came to addressing the overarching lessons of Fukushima—for regulators and industry people alike—he brought unique credentials to the task. The speaker was the NRC’s own Chuck Casto, who had arrived in Japan in the first chaotic days of the accident and remained there for almost a year as an advisor.

  Now, as the conference wound to a close, Casto was eager to offer some words of advice. The public does not understand the NRC’s underlying safety philosophy of “adequate protection,” he cautioned. “They want to see us charging out there making things safer and safer, to be pro-safety. If this degree is safe, a little bit more is more safe,” he told the audience.

  A short time later, his voice filling with emotion, Casto spoke of the brave operators at Fukushima, whom he called “an incredible set of heroes.”

  “This industry over its fifty-some years has had a lot of heroes,” he continued. He spoke about Browns Ferry, Three Mile Island, and Chernobyl. Those and other events, Casto told his audience, make the way forward clear. “We honor and we respect the heroes that we’ve had in this industry over fifty years—but we don’t want any more.

  “We have to have processes and procedures and equipment and regulators that don’t put people in a position where they have to take heroic action to protect the health and safety of the public,” Casto said. “What we really have to work on is no more heroes.”

  APPENDIX

  THE FUKUSHIMA POSTMORTEM: WHAT HAPPENED?

  Accident modelers from TEPCO, the U.S. national laboratories, industry groups, and other organizations gathered in November 2012 at a meeting of the American Nuclear Society (ANS) to present the results of their attempts to simulate the events at Fukushima and reproduce what was known about them.

  Like any postmortem, the goal was to glean as many answers as possible about the causes of the events at Fukushima Daiichi. But answers proved troublingly elusive.

  One of the first difficulties encountered by the analysts was the lack of good information about the progression of the accidents. The damaged reactors were still black boxes, far too dangerous for humans to enter, much less conduct comprehensive surveys of, and reliable data on their condition was sparse. In some cases, analysts had to fine-tune their models using trial and error, essentially playing guessing games about what exactly had happened within the reactors.

  Even so, the computer simulations could not reproduce numerous important aspects of the accidents. And in many cases, different computer codes gave different results. Sometimes the same code gave different results depending on who was using it.

  The inability of these state-of-the-art modeling codes to explain even some of the basic elements of the accident revealed their inherent weaknesses—and the hazards of putting too much faith in them.

  Sometimes modelers were frustrated by a lack of essential data. For example, when water-level measurements inside the three reactors were available, they were usually wrong. The readings indicated that water levels were stable when they were actually dropping below the bottom of the fuel. This happened because the gauges were not calibrated to account for the extreme temperature and pressure conditions that were occurring. Although the problem should have been obvious at the time, TEPCO didn’t question the erroneous data and publicly released it.
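
  One mechanism commonly cited for readings of this kind involves differential-pressure level instruments, which infer the water level by weighing the vessel-side water column against a sealed, water-filled reference leg. The sketch below is a simplified illustration with invented numbers, not a model of the actual Fukushima gauges: once heat boils water out of the reference leg, the indicated level freezes while the real level falls.

```python
# Simplified illustration with invented numbers. A differential-pressure
# level gauge compares the vessel water column against a sealed,
# water-filled "reference leg." Its calibration assumes that leg stays
# full; if containment heat boils the leg down, the reported level no
# longer tracks the real one.

RHO_G = 9.8e3       # weight of water column, Pa per meter (1000 kg/m^3 * g)
REF_LEG_FULL = 8.0  # reference-leg height assumed by the calibration, meters

def indicated_level(actual_level_m, ref_leg_m):
    """Level the gauge reports for a given true level and reference-leg fill."""
    dp = RHO_G * (ref_leg_m - actual_level_m)  # pressure difference the cell senses
    return REF_LEG_FULL - dp / RHO_G           # calibration assumes a full leg

# Suppose the true level drops 6 meters while the overheated reference
# leg boils down at the same rate:
for actual, ref in [(6.0, 8.0), (4.0, 6.0), (2.0, 4.0), (0.0, 2.0)]:
    print(f"true level {actual:3.1f} m -> indicated {indicated_level(actual, ref):3.1f} m")

# Every line prints "indicated 6.0 m": the gauge reads stable while the
# fuel uncovers, the signature described in the text.
```

  The real instruments involve more physics, since water density itself shifts with temperature and pressure, but the qualitative failure, a reading that stays reassuringly flat while the core uncovers, is the one the text describes.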

  The lack of reliable water levels meant that analysts in Japan and elsewhere did not know then, and still do not know, how much makeup water was entering the reactor vessels at what times during the accident, a critical piece of information for understanding the effectiveness of emergency water-injection strategies. Different assumptions for “correcting” this unreliable data yielded significantly different results.

  At the ANS meeting, researchers from Sandia National Laboratories presented results they obtained using the computer code called MELCOR, designed by Sandia for the NRC to track the progression of severe accidents in boiling water and pressurized water reactors. For Unit 1, the event was close to what’s called a “hands-off” station blackout. From the time the tsunami struck, essentially nothing worked. The loss of both AC and battery power disabled the isolation condensers and the high-pressure coolant injection system (HPCI), as well as the instruments for reading pressure and water level. Counting from the time of the earthquake, the core was exposed after three hours, began to undergo damage after four hours, and by five hours was completely uncovered.

  At nine hours, according to this analysis, the molten core slumped into the bottom of the reactor vessel, and by fourteen hours—if not sooner—it had melted completely through. By the time workers had managed to inject emergency water into the vessel at fifteen hours, much of the fuel had already landed on the containment floor and was violently reacting with the concrete.

  But even a straightforward “hands-off” blackout turned out to be too complex for MELCOR to fully simulate. For instance, although the code did predict that the containment pressure would rise high enough to force radioactive steam and hydrogen through the drywell seals and into the reactor building, its calculation of the amount of hydrogen that collected at the top of the building “just missed” being large enough to cause an explosion, according to Randy Gauntt, one of the study’s authors.

  A U.S. industry consultant, David Luxat, presented a simulation using the industry’s code, called MAAP5. His simulation also predicted that the conditions would not be right for a hydrogen explosion at the time when one actually occurred. His speculation: extra hydrogen leaked from the vent into the reactor building. Ultimately, the explosion at Unit 1 remained something of a mystery.

  Another issue that experts disagreed on was what caused the Unit 1 reactor vessel to depressurize suddenly around six or seven hours into the accident. Sandia argued that it was probably a rupture in one of the steam lines leading from the vessel, or a stuck-open valve; others believed it was a failure of some of the tubes used to insert instruments into the vessel to take readings. But no code was capable of predicting one of these events over another, and in any case no one knew what had actually taken place. Such confirmation will have to await a time when it is safe to enter the containment and conduct forensic examinations. Even then, it is far from certain that the history of the accident will be fully reconstructed, or all its lessons revealed.

  The situation was even murkier when trying to understand the more complex events that led to the meltdowns at Unit 3 and then Unit 2.

  For Unit 3, which never fully lost battery power, operators were able to run the reactor core isolation cooling (RCIC) system until it shut down, and then the HPCI system until they deliberately shut it down. Although the analysts generally agreed that core damage occurred sometime before 9:00 a.m. on March 13, there was much disagreement about how extensively the core was damaged and whether it had in fact melted through the reactor vessel. The answers depended on the amount of water that actually got into the vessel from the operations of RCIC, HPCI, fire pumps, and fire engines. Various analysts questioned whether RCIC and HPCI operated well under suboptimal conditions, and whether the pumps ever had sufficient pressure to inject meaningful flows of water into the core. Assuming different amounts of water led to different conclusions. In the final analysis, no one could predict with confidence whether or not there was vessel failure.

  The explosion at Unit 3 was another puzzle. It appeared larger than the one at Unit 1, but under the assumptions for water injection rates provided by TEPCO, neither the Sandia simulation nor the industry’s found that enough hydrogen was generated to cause any explosion at all.

 
