In Three Mile Island’s aftermath, one NRC-launched review of the accident came to a damning conclusion about this regulatory philosophy: “We have come far beyond the point at which the design-basis accident review approach is sufficient.” In response, the NRC in October 1980 dutifully took up the question of whether it needed to amend its regulations “to determine to what extent commercial nuclear power plants should be designed to cope with reactor accidents beyond those considered in the current ‘design basis accident’ approach.” To set that process in motion, it issued an Advance Notice of Proposed Rulemaking.1
In the Advance Notice, the NRC requested comment on numerous proposals for addressing the risk of beyond-design-basis accidents. These included requirements that containment structures be equipped with systems that could prevent them from being breached, such as filtered vents, hydrogen control measures, and core catchers (structures that could safely trap molten cores if they did manage to breach reactor vessels). Another suggestion was the addition of “an alternate . . . self-contained decay heat removal system to prevent degradation of the core or to cool a degraded core”—in other words, an external emergency backup cooling system with independent power and water supplies. And the NRC raised the possibility that reactor siting and emergency planning requirements might need to be tightened to address the greater radiation releases from beyond-design-basis accidents.
The Advance Notice of Proposed Rulemaking sent shudders through the nuclear industry. Companies feared that the NRC was setting the stage for a sweeping new rule that would require all plants to be able to withstand accidents previously considered beyond design basis, compelling the installation of costly new systems to cope with them. Without a clear boundary for “how safe is safe enough,” such a regulation could open a Pandora’s box of new requirements. There would be no telling how far it could go.
Members of the industry quickly united under the leadership of their U.S. trade association, the Atomic Industrial Forum (a predecessor to today’s Nuclear Energy Institute), to head off the NRC by organizing a counter-campaign: the Industry Degraded Core Rulemaking program, or IDCOR. Funded with $15 million in contributions (more than $40 million in 2013 dollars) from nuclear utilities and vendors in the United States, Japan, Finland, and Sweden, IDCOR had as its goal to “assure that a rule, if developed, would be based on technical merits and would be acceptable to the nuclear industry.”
IDCOR’s extensive technical program included funding the development of a new computer code to simulate core melt accidents and support what were called “realistic, rather than conservative, engineering approaches.” Yet there was little doubt what the program hoped to accomplish: to block any new regulatory requirements. While the NRC vacillated for four years on what new rules, if any, were needed, IDCOR marched toward its foregone conclusion. In late 1984, the group released its findings. The industry had drawn its own line in the sand. Risks to the public from severe accidents had been vastly overestimated; the actual risks were already so low that more regulation was not needed.
From IDCOR’s perspective, even severe nuclear accidents posed little danger. That was because containment failure would take so long to occur that most fission products would have time to “plate out,” or stick to structures within the damaged reactor, and would not be released to the environment. Therefore, the quantity and type of radioactive material that could escape during a severe accident—the source term—would be far below what the NRC had been assuming in its analysis of health impacts. If that were true, no one would die from acute radiation exposure after even the most serious accident, and the number of cancer deaths would be hundreds of times smaller than previous studies had shown.
The industry’s proposed reduction of the severe accident source term amounted to a bold jujitsu move to turn the NRC’s original effort to strengthen regulations on its head. One requirement the industry was particularly anxious to undermine was the recently imposed ten-mile emergency evacuation zone around every nuclear plant. At the time, the evacuation requirements were causing a firestorm in New York State, where state and local authorities were blocking operation of the newly constructed Shoreham plant on Long Island by refusing to certify the evacuation plan. (Critics claimed the roads of narrow Long Island couldn’t handle a mass exodus.) But if the amount of radiation that could escape the plant was so much smaller than previously believed, then perhaps a ten-mile evacuation zone wasn’t needed.
The NRC made no attempt to hide its skepticism about the industry’s source term recalibrations. At a 1983 conference, Robert Bernero, director of the agency’s Office of Accident Source Term Programs, called those involved “snake oil salesmen.” To help resolve the growing controversy, the NRC commissioned the American Physical Society, a respected professional association of physicists, to conduct a review of source term research. The physicists concluded that, although the evidence appeared to support reducing the assumed releases of certain radionuclides in certain accidents, there was no basis for the “sweeping generalization” made by IDCOR.
Ultimately, however, the industry’s counter-campaign had an effect. Although the NRC refused to accept the industry’s arguments, in 1985 the commission abandoned efforts to require protection against severe accidents and withdrew the Advance Notice of Proposed Rulemaking. In fact, the NRC went a step further, issuing a Severe Accident Policy Statement that declared by fiat that “existing plants pose no undue risk to public health and safety.” In other words, there was no need to raise the safety bar to include beyond-design-basis accidents because the NRC’s rules already provided “reasonable assurance of adequate protection,” the vague but legally sanctioned seal of approval. The NRC had already addressed Three Mile Island issues, and that was enough.
However, in the face of a growing body of research suggesting the safety picture was not quite that rosy, this declaration raised more questions than it answered. In the time-honored tradition of government bureaucracies, the NRC resolved to continue studying the issue, kicking the can farther down the road and confusing matters even more. While asserting that there were no generic beyond-design-basis issues at U.S. reactors, the commission held out the possibility that problems might exist at individual plants and that it should take steps to identify them. Even this proved controversial, requiring three years of give-and-take between the NRC and the industry merely to set ground rules for the study.
When the smoke cleared in 1988, the scope of the proposed Individual Plant Examination (IPE) program had been diminished to a mere request that plant owners inspect their own facilities for vulnerabilities to core melting or containment failure in an accident. What happened if the inspections actually turned up something was less clear. Even if the plant owners found problems, the NRC could not automatically require them to be fixed. The agency would have authority to do so only if such fixes represented “substantial safety enhancements” and were “cost-effective”—that is, if they passed the strict tests required by the NRC’s recently revised backfit rule, which governed the changes it could require for existing plants.2
The 1988 backfit rule had its origins in the antiregulatory fervor of the Reagan administration. In 1981, President Ronald Reagan issued an executive order barring federal agencies from taking regulatory action “unless the potential benefits to society . . . outweigh the potential costs to society.” Although such a cost-benefit analysis approach sounded reasonable to those seeking a way to reduce government interference, it was controversial for its coldly reductionist attempt to convert the value of human lives into dollar figures that could be directly compared to the costs incurred by regulated industries.
Although the NRC, like other independent agencies, was exempt from this executive order, a majority of commissioners wanted to adopt cost-benefit requirements anyway to add what they characterized as “discipline” to the backfitting process (as if the NRC were staffed by some sort of renegade regulatory militia).
In the past, when the NRC had imposed new regulations, the industry complained that the resultant backfits were costly and often of little or no actual safety benefit. Proponents of cost-benefit analysis argued it would “address risks that are real and significant rather than hypothetical or remote.” The key to this would lie in the use of sophisticated mathematical modeling to quantify risk. At the time of Reagan’s executive order, the NRC’s regulations only allowed it to impose backfits if they would “provide substantial, additional protection which is required for the public health and safety or the common defense and security.” However, this standard was so vague that critics from both sides attacked it. Cost-benefit analysis in principle could help to solve that problem by providing a concrete, quantitative method for determining whether the benefits of a backfit—namely, the reduction in potential deaths or injuries following an accident—justified the costs.
Risk analysis had a receptive audience at the NRC. For many years, the NRC and its predecessor, the Atomic Energy Commission (AEC), had explored quantitative approaches to safety. In the early 1970s, the AEC commissioned a pioneering project, the Reactor Safety Study, that attempted to use the tools of probabilistic risk assessment (PRA) to calculate the risk to members of the public of dying from acute radiation exposure or cancer as the result of a nuclear reactor accident. Risk was defined as the product of the likelihood of an occurrence and its consequences. One key conclusion was that even for nuclear accidents with very serious consequences, the “risk” each year to members of the public would be very low, since the probability of such accidents would be very low. That is, multiplying a large number by a very small number would yield a small number. The report, issued in 1975, famously came under blistering attack for its methodological problems and misleading implication that an average American had as much chance of being killed in a nuclear power plant accident as of being struck by a meteor.
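The arithmetic behind that definition is easy to sketch. Here is a minimal illustration in Python; the probability and consequence figures are invented for the example and are not the Reactor Safety Study’s actual estimates:

```python
# Risk as defined in the Reactor Safety Study's probabilistic framework:
# annual risk = (probability of the accident per year) x (its consequences).
# The numbers below are invented for illustration, not the study's estimates.

p_accident_per_year = 1e-6     # assumed chance of a severe accident per reactor-year
deaths_if_it_occurs = 3_000    # assumed consequences if the accident happens

annual_risk = p_accident_per_year * deaths_if_it_occurs
print(f"Expected deaths per reactor-year: {annual_risk:.0e}")  # 3e-03
```

A very large consequence multiplied by a very small probability yields a small number, which is exactly the form of the study’s reassuring conclusion.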
One of the main criticisms of the Reactor Safety Study was that its calculations of probabilities “were so uncertain as to be virtually meaningless,” as recounted by Princeton professor Frank von Hippel in his book Citizen Scientist. Each calculation required the input of thousands of variables, many of which had very large margins of error. If these uncertainties were not properly accounted for, the final result would be misleading. Consequently, many critics, including an independent review panel commissioned by the NRC, argued that probabilistic risk assessments were not precise enough to be used for calculating the absolute value of anything, particularly the probability that a given reactor might experience core damage in a given year.
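The critics’ point about compounding uncertainty can be illustrated with a toy Monte Carlo calculation. Everything here is invented: a real PRA multiplies thousands of uncertain inputs, not twenty, but the effect is the same:

```python
import random

# Toy illustration of uncertainty propagation in a PRA-style calculation:
# the final probability is a product of many uncertain factors.
# All values are invented for illustration.
random.seed(0)

N_INPUTS = 20        # a real PRA has thousands of uncertain inputs
N_TRIALS = 100_000

results = []
for _ in range(N_TRIALS):
    product = 1.0
    for _ in range(N_INPUTS):
        # each factor is uncertain by roughly a factor of e (one standard
        # deviation of a lognormal with sigma = 1)
        product *= 0.1 * random.lognormvariate(0.0, 1.0)
    results.append(product)

results.sort()
median = results[N_TRIALS // 2]
low, high = results[int(0.05 * N_TRIALS)], results[int(0.95 * N_TRIALS)]
print(f"median: {median:.1e}   90% range: {low:.1e} to {high:.1e}")
# The 90% range spans roughly six orders of magnitude; quoting only the
# median as "the" probability conveys false precision.
```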
A major source of PRA uncertainty is what types of events should be included in the calculation in the first place. Like good engineers, the early PRA practitioners began by analyzing things that they knew how to do—relatively well-defined events such as a pipe break. These are called internal events because they begin with problems occurring within the plant. But addressing external events like earthquakes, flooding, tornadoes, or even aircraft crashes proved more challenging. First of all, such events are notoriously hard to predict. Second, their consequences could be complex and difficult to model. Trying to come up with numerical values that would accurately describe the risks from these events was an exercise in futility. But instead of acknowledging that the failure to address external events introduced huge uncertainties in the nuclear accident risks they calculated, PRA analysts sometimes pretended that the possibilities didn’t even exist—the scientific equivalent of reaching a verdict with crucial pieces of evidence missing.
Despite these technical challenges, the NRC eventually began to use PRA results more and more in its regulatory decisions—including the absolute values of accident risks that had been called “virtually meaningless.” Over time, the agency began to view PRA risk numbers as more precise than they actually were. They were put to heavy use in the cost-benefit analyses that some commissioners wanted the NRC to rely on. That had a troubling consequence: as the risk of severe accidents appeared to shrink, so did the NRC’s leverage to require plant improvements.
As if calculating the PRA risk values weren’t complicated enough, cost-benefit analyses required another parameter to be specified: the monetary value of a human life. The NRC had carried such a number on its books since the mid-1970s: $1,000 per person-rem, a term used to characterize the total radiation dose to an affected group of people. Based on today’s understanding of cancer risk, that put the value of a human life between $1 million and $2 million. (The NRC failed to adjust for inflation for years, finally doubling the figure in the 1990s to about $3 million per life. That was about one-third to one-half the value placed on a human life by other federal agencies.)
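The conversion from dollars-per-person-rem to dollars-per-life is a one-line calculation. The cancer-risk coefficient below is an assumed round figure in line with modern estimates; the text itself reports only the resulting range:

```python
# Back-of-the-envelope conversion from the NRC's $1,000-per-person-rem figure
# to an implied dollar value per life. The risk coefficient is an assumption
# (roughly today's understanding of cancer risk), chosen to show the arithmetic.

dollars_per_person_rem = 1_000.0
fatal_cancers_per_person_rem = 5e-4   # assumed: ~5 fatal cancers per 10,000 person-rem

implied_value_per_life = dollars_per_person_rem / fatal_cancers_per_person_rem
print(f"Implied value of a human life: ${implied_value_per_life:,.0f}")  # $2,000,000
```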
Although a majority of commissioners embraced the cost-benefit approach, a key obstacle remained: could the NRC legally consider costs in making safety decisions? In the mid-1980s, the Union of Concerned Scientists and other public interest groups argued that the Atomic Energy Act did not allow cost to be considered at all; the NRC should base its decisions strictly on protecting public health. If a utility could not afford to build or operate a plant to meet that standard, then it would be out of luck. In response, the industry argued that the NRC had the right to consider the cost of backfits.
At first, the industry prevailed. In 1985, the NRC revised its rules to prohibit the commission from requiring any backfit unless it resulted in “a substantial increase in the overall protection of public health and safety . . . and that the direct and indirect costs of implementation . . . are justified in view of this increased protection.” These tests were not required for backfits needed to fix an “undue risk,” but the NRC refused to define what that meant.
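In schematic form, the rule’s two-part test looked something like the sketch below. The rule itself specified no numbers, so every threshold here is a hypothetical placeholder:

```python
# Schematic sketch of the 1985 backfit rule's two-part test. The rule supplied
# no numerical thresholds; both constants below are hypothetical placeholders.

DOLLARS_PER_PERSON_REM = 1_000.0       # the NRC's monetized dose value
SUBSTANTIAL_DOSE_AVERTED = 10_000.0    # hypothetical bar for a "substantial" increase

def backfit_required(person_rem_averted: float, implementation_cost: float) -> bool:
    """Both prongs must pass: a substantial safety increase AND justified cost."""
    substantial = person_rem_averted >= SUBSTANTIAL_DOSE_AVERTED
    monetized_benefit = person_rem_averted * DOLLARS_PER_PERSON_REM
    cost_justified = monetized_benefit >= implementation_cost
    return substantial and cost_justified

# A fix averting 5,000 person-rem at a cost of $20 million fails both prongs.
print(backfit_required(5_000, 20e6))  # False
```

Since the rule defined neither prong quantitatively, in practice the test could be as strict or as lenient as the commission chose.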
Rather than simplify matters, the backfit test made them maddeningly unclear. In the end, it appeared that cost-benefit analyses would be required for essentially all proposed backfits, including any proposals for new regulatory requirements. Commissioner James K. Asselstine, a lawyer who headed the Senate investigation into the Three Mile Island accident before being appointed to the NRC, wrote a withering dissent to the new rule. “In adopting this backfitting rule, the Commission continues its inexorable march down the path toward non-regulation of the nuclear industry. . . . I can think of no other instance in which a regulatory agency has been so eager to stymie its own ability to carry out its responsibilities.”
Asselstine, voting against the rule, contended that it imposed unreasonably high barriers to increasing safety and required a determination of risk “based on unreliable . . . analyses.” He wasn’t done: “The Commission also fails to deal with the huge uncertainties associated with the risk of nuclear reactors. The actual risks could be up to 100 times the value frequently picked by the Commission. . . . There is no reference in this rule . . . to how uncertainties are to be factored into safety decisions.”
In 1987, the Union of Concerned Scientists, represented by attorneys Ellyn Weiss and Diane Curran, sued the NRC to block the rule, arguing that the commission could not legally consider costs in making backfit decisions. Later that year, an appeals court threw out the backfit rule, calling it “an exemplar of ambiguity and vagueness; indeed, we suspect that the Commission designed the rule to achieve this very result.”
But the court’s ruling created a peculiar two-tier system. In deciding Union of Concerned Scientists v. U.S. Nuclear Regulatory Commission, the court agreed that the Atomic Energy Act prohibited the NRC from considering costs in “setting the level of adequate protection” and required the NRC “to impose backfits, regardless of cost, on any plant that fails to meet this level.” However, the ruling further confused the “how safe is safe enough” issue by concluding that “adequate protection . . . is not absolute protection.” The NRC could consider the costs of backfits that would go beyond “adequate protection,” the judges ruled.
The NRC revised the backfit rule accordingly in 1988. The court, by tying its decision to the largely arbitrary “adequate protection” standard, had preserved the agency’s free hand to push safety in any direction it wanted. The NRC rebuffed calls to provide a definition of “adequate protection.” The Union of Concerned Scientists failed to get the revised rule thrown out on appeal. Adequate protection would remain “what the Commission says it is.”
The court’s ruling essentially froze nuclear safety requirements at 1988 levels. If new information revealed safety vulnerabilities at operating plants, the NRC would have three options: conclude changes were needed to “ensure” adequate protection; redefine the meaning of “adequate protection” itself; or subject the proposed rules to the backfit test. (The NRC also kept a fourth option, an “administrative exemption,” in its back pocket.) In any of these cases, most new safety proposals would have to leap a high—perhaps impossibly high—hurdle.
The new backfit rule threw a monkey wrench into the NRC’s process for addressing severe accident risks. Because the NRC Severe Accident Policy Statement for the most part equated adequate protection with meeting the design basis, most new safety measures to deal with beyond-design-basis accidents were not needed for adequate protection. This meant that—unless the NRC were to admit that operating plants did not provide adequate protection, or to expand the definition of adequate protection, a step that could have major legal ramifications—it couldn’t issue new requirements without showing that they were “substantial” safety enhancements and that they met the cost-benefit test.