President Eisenhower had launched the nuclear power industry with his Atoms for Peace speech in 1953. Twenty years later, although no comprehensive study of safety risks to the public or the environment had been made, private corporations owned and operated 50 nuclear power plants in the United States. When Congress began debating whether to absolve plant owners and operators of all liability for accidents, the U.S. Atomic Energy Commission finally ordered a safety study.
Significantly, as it turned out, the man appointed to lead the study was not a statistician but a physicist and engineer. Born in Harrisburg, Pennsylvania, in 1927, Norman Carl Rasmussen had served a year in the navy after the Second World War, graduated from Gettysburg College in 1950, and earned a Ph.D. in experimental low-energy nuclear physics at MIT in 1956. He taught physics there until MIT formed one of the first departments of nuclear engineering, in 1958.
When Rasmussen was appointed to assess the safety of the nuclear power industry, there had never been a nuclear plant accident. Believing that any such accident would be catastrophic, engineers designed the plants conservatively, and the government regulated them tightly.
Lacking any data about core melts, Rasmussen decided to do as Madansky had done at RAND when studying H-bomb accidents. He and his coauthors would deal with the failure rates of pumps, valves, and other equipment. When these failure rates did not produce enough statistics either, the Rasmussen group turned to a politically incendiary source of information: expert opinion and Bayesian analysis.
Engineers had long relied on professional judgment, but frequentists considered it subjective and not reproducible and banned its use. Furthermore, the Vietnam War had ended America’s enchantment with expert oracles and think tanks. Confidence in leaders plummeted, and a “radical presumption of institutional failure” took its place. Faith in technology dropped too; in 1971 Congress canceled its participation in the supersonic passenger plane, the SST, one of the few times the United States has rejected a major new technology. “No Nukes” activists were demonstrating across the country.
Lacking direct evidence of nuclear plant accidents, Rasmussen’s team felt it had no choice but to solicit expert opinion. But how could they combine that with equipment failure rates? Normally, Bayes’ theorem provided the way. But Rasmussen’s panel already had enough controversy on its hands dealing with nuclear power. The last thing they needed was an argument over methods.
To avoid using Bayes’ equation, they employed Raiffa’s decision trees. Raiffa was a Bayesian missionary, and his trees had Bayesian roots, but that did not matter. Panel members avoided even the words “Bayes’ rule”; they called it a subjectivistic approach. They thought that keeping clear of Bayes’ theorem would absolve them of being Bayesians.
The committee’s final report, issued in 1974, was loaded with Bayesian uncertainties and probability distributions about equipment failure rates and human mistakes. Frequentists did not assign probability distributions to unknowns. The only reference to Bayes’ rule, however, was tucked into an inconspicuous little corner of appendix III: “Treating data as random variables is sometimes associated with the Bayesian approach . . . the Bayesian interpretation can also be used.”6
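The report’s own numbers are not reproduced here, but the flavor of treating an unknown failure rate as a random variable can be sketched in a few lines. This is a minimal illustration, not the report’s actual method or data: the Gamma-Poisson model is a common textbook device for failure rates, and the prior and the plant data below are invented.

```python
from scipy import stats

# Hypothetical expert prior on a pump's failure rate (failures per hour):
# roughly one failure per 10,000 hours, with wide uncertainty.
alpha_prior, beta_prior = 1.0, 10_000.0      # Gamma shape and rate

# Hypothetical operating experience: 3 failures in 50,000 pump-hours.
failures, exposure_hours = 3, 50_000.0

# With a Poisson model for failure counts, the Gamma prior is conjugate,
# so Bayes' rule yields another Gamma distribution as the posterior.
alpha_post = alpha_prior + failures
beta_post = beta_prior + exposure_hours

posterior = stats.gamma(a=alpha_post, scale=1.0 / beta_post)
print(f"posterior mean rate: {posterior.mean():.2e} failures per hour")
print(f"90% credible interval: {posterior.interval(0.90)}")
```

The point of such a calculation is exactly what made frequentists uneasy: the unknown failure rate gets a probability distribution of its own, one that narrows as operating experience accumulates.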
But avoiding the use of the word “Bayes” did not acquit the report of blame. Although several later studies approved of its use of “subjective probabilities,” some of the report’s statistics were roundly damned. Five years later, in January 1979, the U.S. Nuclear Regulatory Commission withdrew its support for the study. The Rasmussen Report seemed doomed to oblivion.
Doomed, that is, until two months later when the core of the Three Mile Island-2 nuclear generating unit was damaged in a severe accident. At almost the same time, The China Syndrome, a blockbuster movie starring Jane Fonda about the coverup of an accident at a nuclear power plant, opened in theaters. The civilian nuclear power industry collapsed in one of the most remarkable reversals in American capitalism. Although approximately 20% of U.S. electric power came from 104 nuclear power plants in 2003, at this writing no new facility has been ordered since 1978.
Three Mile Island revived the Rasmussen Report and its use of subjectivistic analysis. After the accident the committee’s insights seemed prescient. Previous experts had thought that the odds of severe core damage were extremely low and that the effects would be catastrophic. The Rasmussen Report had concluded the reverse: the probability of core damage was higher than expected, but the consequences would not always be catastrophic. The report had also identified two significant problems that played roles in the Three Mile Island accident: human error and the release of radioactivity outside the building. The study had even identified the sequence of events that ultimately caused the accident.
Not until 1981 did two industry-supported studies finally employ Bayes’ theorem—and admit it. Analysts used it to combine the probabilities of equipment failures with specific information from two particular power plants: Zion Nuclear Power Station north of Chicago and Indian Point reactor on the Hudson River, 24 miles north of New York City. Since then, quantitative risk analysis methods and probabilistic safety studies have used both frequentist and Bayesian methods to analyze safety in the chemical industry, nuclear power plants, hazardous waste repositories, the release of radioactive material from nuclear power plants, the contamination of Mars by terrestrial microorganisms, the destruction of bridges, and exploration for mineral deposits. To industry’s relief, risk analysis is also now identifying so-called unuseful safety regulations that can presumably be abandoned.
Subjective judgment still bothers many physical scientists and engineers who dislike mixing objective and subjective information in science. Avoiding the word “Bayes,” however, is no longer necessary—or an option.
15. The Navy Searches
Surprisingly, given Bayes’ success in fighting U-boats during the Second World War, the U.S. Navy embraced the method slowly and grudgingly during the Cold War. High-ranking officers turned to Bayes almost accidentally, hoping at first to garner only the trappings of statistics. Later, the navy would move with increasing confidence and growing computer power to fine-tune the method for antisubmarine warfare. Meanwhile, the Coast Guard eyed the method for rescuing people lost at sea. As was often the case with Bayes’ rule, a series of spectacular emergencies forced the issue.
The navy’s flirtation with the approach began at dusk on January 16, 1966, when a B-52 jet armed with four hydrogen bombs took off from Seymour Johnson Air Force Base near Goldsboro, North Carolina. Each bomb was about ten feet long and as fat as a garbage can and had the destructive power of roughly one million tons of TNT. The jet’s captain, known for smoking a corncob pipe in the cockpit, and his six-man crew were scheduled to fly continuously for 24 hours and refuel several times in midair.
In a controversial program called Operation Chrome Dome, the Strategic Air Command (SAC) under Gen. Curtis LeMay kept jets equipped with nuclear weapons flying at all times to protect against Soviet attack. In a costly and hazardous process, tankers refueled the jets in midair.
The jet made the scheduled rendezvous for its third refueling with a SAC KC-135 tanker jet on the morning of January 17. Bomber and tanker maneuvered in tandem over Spain’s southeastern coast, six miles above the isolated hamlet of Palomares, Spanish for Place of the Doves. They used a telescoping boom that required the two planes to fly three or four meters apart at 600 miles per hour for up to half an hour. In a split-second miscalculation, the tanker’s fuel nozzle struck the metal spine of the bomber, and at 10:22 a.m. local time 40,000 gallons of fuel burst into flame. Seven of the planes’ 11 crew members perished.
Airmen, the four bombs, and 250 tons of aircraft debris rained down from the sky. Fortunately, it was a holiday, and most of the area’s 1,500 residents were taking time off from working their fields, so no one was hit. Even more important, no nuclear explosion occurred; the bombs had not been “cocked,” or activated. However, the parachutes on two of them failed to open, and when the bombs hit the ground their conventional explosives detonated, contaminating the area with an aerosol of radioactive plutonium. Three of the bombs were located within 24 hours, but the fourth was nowhere to be found.
Adding to the crisis was the fact that, unknown to the public, the incident at Palomares was at least the twenty-ninth serious accident involving the air force and nuclear weapons. Ten nuclear weapons involved in eight of these accidents had been jettisoned and abandoned at sea or in swamps, where they presumably remain to this day. The missing weapons, none of which involved a nuclear detonation, included two lost over water in 1950; two nuclear capsules in carrying cases in a plane that disappeared over the Mediterranean in 1956; two jettisoned into the Atlantic Ocean off New Jersey near Atlantic City in 1957; one left at the mouth of the Savannah River off Tybee Beach in Georgia in 1958; the one that fell in Walter Gregg’s garden near Florence, South Carolina, in 1958; one in Puget Sound in Washington State in 1959; uranium buried in Goldsboro, North Carolina, in 1961; and a bomb from a plane that rolled off an aircraft carrier into the Pacific in 1965. It was an unenviable record that was only slowly attracting media attention.
When it became obvious that the latest H-bomb to fall from a SAC jet must have landed in the Mediterranean Sea, the Defense Department phoned John Piña Craven, the civilian chief scientist in the U.S. Navy’s Special Projects Office.
Craven had a bachelor’s degree from Cornell University’s naval science training program and a master’s in physics from Caltech. While working on a Ph.D. in applied physics at the University of Iowa he spent his spare time taking advanced courses of every kind, from journalism and philosophy of science to partial differential equations. Notably, in view of what was to come, he took statistics and got a C. In 1951 Craven graduated “sort of educated in everything.”1 These were the years when the military was developing crash programs for using navigational satellites and for building ballistic missiles and guidance systems to counter the Soviets. In such an atmosphere, the Pentagon regarded any Caltech grad as a technological whiz kid.
At 31, Craven became what he called the navy’s “Oracle at Delphi, . . . an applied physicist advising the Navy whenever they have mission or equipment problems that they cannot handle.” His first job was inventing technology to locate Soviet mines blocking Wonsan Harbor during the Korean War. Three years later he became chief scientist of the Special Projects Office developing the Polaris Fleet Ballistic Missile Submarine System. When the nuclear submarine USS Thresher imploded and sank off Cape Cod in 1963 with 129 men on board, he was ordered to develop ways to find objects lost or sunk in deep water. To the military looking for an H-bomb in the Mediterranean Sea, Craven sounded like the man for the job.
“We’ve just lost a hydrogen bomb,” W. M. “Jack” Howard, the assistant secretary of defense for atomic energy, said when he telephoned Craven.
“Oh, we’ve just lost a hydrogen bomb,” Craven recalls saying. “That’s your problem, not mine.”
Howard persisted: “But one of the bombs fell in the ocean, so we don’t know how to find it; three others are on land.”
Craven shot back, “You called the Navy all right but you called the wrong guy. The Supervisor of Salvage is the guy responsible for that.” Within hours, though, Craven and the salvage chief, Capt. William F. Searle Jr., formed a joint committee of rejects: Craven had failed twice to get into the naval academy, and Searle graduated from Annapolis before poor vision shunted him into underwater salvage work, where, wits said, everyone is more or less blind.
“Craven, I want a search doctrine,” Searle barked. He needed the doctrine—naval-ese for a plan—so he could start work the next morning to send ships and other materiel to Spain. That night Craven kept telling himself, “Jesus, I’ve got to come up with a search doctrine.”
Craven already knew something about Bayesian principles. His mine-sweeping mentor during the Korean War in 1950–52 was the navy physicist and applied mathematician Rufus K. Reber, who had translated Bernard Koopman’s Bayesian antisubmarine studies into practical but classified tables for sea captains planning mine-searching sweeps. Craven had also learned about Bayes while visiting MIT professors doing classified research for the government. Most important, he had heard about Howard Raiffa, who was pioneering the use of subjective probabilistic analyses for business decision making, operational analysis, and game theory at the Harvard Business School.2
As Craven understood it, Raiffa used Bayesian probability to discover that horse race bettors accurately predict the odds on horses winning first, second, and third place. For Craven, the key to Raiffa’s racetrack culture was its reliance on combining the opinions of people “who really know what’s going on and who can’t verbalize it but can have hunches and make bets on them.” Later, Raiffa commented that he was pleased if he had influenced Craven to assess subjective probabilities and pool them across experts. But he emphasized that Bayes does not get into the act until those subjective views are updated with new information. Furthermore, he remembered talking about weather prediction, not horse races.
“I’m very good at grasping concepts,” Craven explained later. “I’m lousy on detail. I got the betting on probabilities and also got the connection with Bayes’s conditional probabilities. But I also understand the politics of getting things done in the Navy, and I say I’ve got to get a search doctrine.”
Craven had experts galore at his disposal. Some knew about B-52s, while others were familiar with the characteristics of H-bombs; bomb storage on planes; bombs dropping from planes; whether a bomb would stay with the plane wreckage; the probability that one or both of a bomb’s two parachutes would deploy; wind currents and velocity; whether the bomb would be buried in sand; how big it would look wrapped in its chute; and so on. Craven figured that his experts could work out hypotheses as to where the bomb would fall and then determine the probability of each hypothesis.
Most academic statisticians would have thrown in the towel. They would have believed, with Fisher and Neyman, that sources of information should be confined to verifiable sample data. Craven, of course, had no wish to repeat the experiment. He needed to find the bomb. “At that point, I wasn’t looking at the mathematics, I was just remembering what I got from Raiffa.”
Then reality intervened. With only a few hours and the assistance of one technician, Craven was forced to be “the guy who interviewed each one of these experts to make the bets. I’m the guy who decides who the bettors are, and I’m also the guy—let’s be honest about it—who imagines what I’d say if I were the guy I can’t get in touch with. So I’m doing a lot of imagineering. . . . I didn’t have time to call together these people.” Craven’s use of expert guessing would be spectacularly subjective.
Blending hurried phone calls to experts, reports of on-site witnesses, and his own “imagineering,” Craven came up with seven hypotheses he called scenarios:
1. The missing H-bomb had remained in the bomber and would be found in the bomber’s debris.
2. The bomb would be found in bomb debris along the path of the collision.
3. The bomb had fallen free and was not in the plane’s debris.
4. One of the bomb’s two parachutes deployed and carried it out to sea.
5. Both of the bomb’s parachutes deployed and carried it farther out to sea.
6. None of the above.
7. A Spanish fisherman had seen the bomb enter the water. (This hypothesis came later, after naval commanders talked with one Francisco Simo Orts.)
Ideally, at this point, Craven would have gotten “all these scenarios and all these cats [his experts] in a room and have them make bets on them.” But with only one night before the search doctrine was needed, Craven realized, “I’m going to invent the scenarios myself and guess what an expert on that scenario would bet.”
The emergency forced Craven to cut through years of theoretical doubts about building a Bayesian prior and estimating the probability of its success: “As I did this, I knew immediately that I wouldn’t be able to sell this concept to any significant operator in the field. So I thought what the hell am I going to do? I’m going to tell them that this is based on Bayes’s subjective probability. And second, I’m going to hire a bunch of mathematicians, tell them I want you to put the cloak of authenticity on using Bayes’s theorem. . . . So I hired Daniel H. Wagner, Associates to do this.”
Daniel H. Wagner was a mathematician so absent-minded that his car once ran out of gas three times in one day. He had earned a Ph.D. in pure mathematics—none of it applied—from Brown University in 1957. Several years of working for defense contractors convinced him that rigorous mathematics could be applied to antisubmarine warfare and to search and detection work. The fact that both involved innumerable uncertainties made Bayes’ rule appealing. As Wagner put it, “Bayes’ rule is sensitive to information of all kinds . . . but every clue has some error attached because, were there no error, there would be no search problem: You would just go to the target and find it immediately. The problem is that . . . you will rarely be given the value of the expected error and so you will have to deduce the location error from other information.”3
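Wagner’s remark can be made concrete with a toy calculation; none of the numbers below come from Palomares. A reported splash position (the clue) is assumed to carry a Gaussian location error of guessed size, so instead of pinning the bomb to one cell it merely reweights the probabilities over the seabed cells.

```python
import numpy as np

# Toy illustration of a clue with location error. All numbers are invented.
cells = np.arange(30)                            # a strip of seabed cells
prior = np.full(cells.size, 1.0 / cells.size)    # no preference to start

reported_cell = 12        # hypothetical witness report of where the splash was
assumed_error_sd = 3.0    # guessed standard deviation of that report, in cells

# Likelihood of hearing that report if the bomb were really in each cell.
likelihood = np.exp(-0.5 * ((cells - reported_cell) / assumed_error_sd) ** 2)

posterior = prior * likelihood    # Bayes' rule, up to a normalizing constant
posterior /= posterior.sum()

print("posterior peaks at cell", int(posterior.argmax()),
      "with probability", round(float(posterior.max()), 3))
```

Halve the assumed error and the posterior tightens around the report; double it and the clue is nearly worthless, which is why deducing the size of the error mattered as much as the clue itself.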
Operations research was new, but Wagner came recommended by two authorities: Capt. Frank A. Andrews (ret.), the officer who had commanded the Thresher search, and Koopman, by then an influential division head of the Institute for Defense Analyses, the campus-based organization for academics doing secret military research.
Going to Craven’s office to learn more about the missing H-bomb, Wagner took along the youngest and greenest of his three-man staff, Henry R. (“Tony”) Richardson, who had earned a Ph.D. in probability theory from Brown all of seven months earlier. He would be Bayes’ point man at Palomares.
As Wagner reconstructed the scene, Craven showed the mathematicians an interesting chart of the waters off Palomares. The seabed had been divided into discrete rectangular cells, and after interrogating air force experts Craven postulated the first six of his seven scenarios. Then he drew on statistical theory to weight each scenario as to its relative likelihood. His ideas were not quantitative; he had drawn a contour map with mountains of high probabilities and valleys of unlikely regions. Nor was he forthcoming about the reasons for each hypothesis. Richardson realized that, as far as Craven was concerned, he and Wagner were just number crunchers.
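The chart Wagner describes maps naturally onto the standard search-theory calculation, sketched below with invented numbers: each scenario contributes its own distribution over the seabed cells, the expert “bets” weight those distributions into a single prior (Craven’s mountains and valleys), and a fruitless search of a cell then lowers that cell’s probability by Bayes’ rule. Neither the grid, the weights, nor the detection probability here comes from the actual search.

```python
import numpy as np

# Illustrative only: grid, scenario maps, weights, and detection probability
# are all invented, not taken from the Palomares search.
n_cells = 20

# Hypothetical per-scenario distributions: each scenario says the bomb is
# probably near some cell, with spread around it.
centers = (3, 8, 15)
scenario_maps = np.array([
    np.exp(-0.5 * ((np.arange(n_cells) - c) / 2.0) ** 2) for c in centers
])
scenario_maps /= scenario_maps.sum(axis=1, keepdims=True)

# Expert "bets" on the scenarios themselves.
scenario_weights = np.array([0.2, 0.3, 0.5])

# Mixture of the scenario maps: the prior "contour map" over cells.
prior = scenario_weights @ scenario_maps

def update_after_miss(p, searched_cell, p_detect=0.6):
    """Bayes' rule after searching one cell and finding nothing:
    P(bomb in i | miss) is proportional to P(miss | bomb in i) * P(bomb in i)."""
    likelihood = np.ones_like(p)
    likelihood[searched_cell] = 1.0 - p_detect   # a miss is unlikely if it is there
    posterior = likelihood * p
    return posterior / posterior.sum()

best = int(prior.argmax())
posterior = update_after_miss(prior, best)
print("search cell", best, "first; after a miss, the best cell becomes",
      int(posterior.argmax()))
```

Iterating that loop (search the most probable cell, update after a miss, search again) is the heart of Bayesian search theory.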