In 2002, almost four decades after the Palomares accident, Spanish authorities said they had found no danger in the area from surface radiation. Spanish and U.S. health officials reported that no radiation-related cancers had been detected in Palomares’s residents. They also said the 1,600 air force personnel who shipped 1,000 cubic meters of Palomares soil in 4,810 metal drums for burial in South Carolina had been exposed to insignificant amounts of radiation, 1/10 the current limit for radiation workers. Even though the public perceives plutonium as being extremely hazardous, government studies showed that its alpha particles are so weak they do not penetrate skin or clothing and, if ingested, pass out of the body in feces. The greatest danger posed by plutonium occurs when it is inhaled. Despite more than 30 years of living and working in a plutonium-contaminated environment, official reports say, the residents of Palomares have inhaled far less than the maximum safe dose identified by the International Commission on Radiological Protection.
Radioactive snails discovered in 2006, however, prompted fears of dangerous levels of plutonium below ground. A joint Spanish–U.S. study was announced, and children were warned not to play in fields near the explosion sites or to eat the snails, a local delicacy.
But what about Bayes? What did it contribute to the H-bomb search? Richardson concluded that “the numbers I computed were coverage numbers so that [Guest] could say we’d covered these areas. . . . Scientifically, the big thing in my mind was that Bayes was a sidelight to the H-bomb search.”11
The H-bomb hunt could have been a full-blown Bayesian exercise. The prior probabilities of Craven’s prehunt scenarios could have been updated with Richardson’s shipboard data to guide the search. However, they were never combined in time to be of any use in locating the lost bomb. And without updating, there was no Bayes. Instead of Bayes, the heroes were Orts and the Alvin. The H-bomb search did develop the methodology for computing SEPs (later called LEPs, for “local effectiveness probability”), but Richardson could not get an article about using probability to find the H-bomb published in an academic journal. The H-bomb hunt was a striking demonstration of how difficult it would be to win operational support for Bayes’ rule, even when something as tangible and terrifying as a missing thermonuclear bomb was at stake.
Still, although Bayesian updating was not used at Palomares, the success of the search strengthened Craven’s faith in scientific searches and the potential of Bayes’ rule. He and his team had learned how to compute subjective presearch hypotheses and weight their importance. They realized that the future of Bayesian search methods depended crucially on computer power and the portability of computerized information. This was not an insignificant realization. Richardson had been the only member of his graduate class in pure mathematics to take a computer course, and computer computation was still thought of as cowardly. Within months, though, Wagner, Associates acquired a punched-tape terminal, its first direct access to electronic computation. The next time they were called, the Bayesians would have better tools.
The navy got a dramatic opportunity to use Bayes’ rule two years later, in the spring of 1968, when two attack submarines, one Soviet and the other U.S., disappeared with their crews within weeks of each other. As head of the Deep Submergence Systems Project, Craven was responsible for the search for both subs. Despite Bayes’ limited role in the H-bomb search, both Craven and Richardson remained convinced the method was scientifically valid.
The first submarine to disappear was the diesel-powered, missile-armed Soviet K-129, the source of Tom Clancy’s fictionalized bestseller The Hunt for Red October. The U.S. Navy was alerted to its loss by a massive Soviet search in the Pacific off the Kamchatka peninsula along a major route frequented by its submarines. About the same time, U.S. underwater sensors recorded a “good-sized bang.” The noise was far less than the sound of a sub imploding upon itself, but it occurred at a curious place, far from the Soviet search operation and on the International Date Line, at 40 degrees north and precisely 180 degrees longitude. Because the date line is a human artifact, the noise suggested a man-made event. Craven, one of a handful of U.S. personnel who knew of the “extremely classified” affair, hired Wagner, Associates for a full-scale probability analysis without ever telling them what they were looking for. Forty years later, Richardson still did not know he had worked on the search for the Soviet submarine.
Craven could think of only three plausible scenarios for the K-129’s disappearance: “First, that the sound had nothing to do with the lost submarine. Second, that the sound was made by the submarine but that it did not sink and like Jules Verne’s Nautilus was still gliding beneath the sea.” Third, that the sub’s watertight compartments were open when the crisis occurred and the vessel flooded so fast it did not collapse. Craven reasoned that if the sound recorded on the International Date Line came from the submarine, “then it was indeed not where it was supposed to be, which was why the Soviets could not find it.”
Johnson, distracted during the tumultuous last months of his presidency, authorized a search for the Russian sub on the hypothesis that it might be a rogue, even though the prospect of finding it was slim. Eventually Craven concluded that the sub—armed with ballistic missiles and crewed by about 100 people—was indeed “a rogue, off on its own, in grave disobedience of its orders . . . [and possibly planning to attack Hawaii]. Since the Soviets didn’t know how far off course their sub had been, the Soviets would have had no idea that their ship was a rogue unless we told them.”12 U.S. authorities informed the Soviet leader, Leonid Brezhnev, of the bang’s location, and in the face of evidence that his military might be out of control he could see détente as an attractive option. Later, Americans photographed the K-129 but were unable to raise it.
In May 1968, a few weeks after the Soviet sub sank, the U.S.S. Scorpion, a nuclear-powered attack sub, disappeared with its crew of 99 in the Atlantic Ocean. The Scorpion was cruising west, toward home, somewhere along a 3,000-mile submarine route between Spain and the East Coast of the United States. It was reportedly armed with two nuclear torpedoes. According to a study made in 1989, Scorpion’s reactor and torpedoes would be among at least eight nuclear reactors and 50 nuclear warheads to have been lost at sea; of these, 43 were on sunken Soviet submarines and eight originated with U.S. military activities. With the Scorpion’s final resting place unknown, the military launched a full-scale search.
Craven and Andrews, by now the world’s leading search experts, rapidly reassembled their H-bomb search crew. At first, the hunt stretched across the Atlantic Ocean. After some bureaucratic sleuthing, though, Craven learned that an ultrasecret listening post for “an unnamed agency” had recorded mysterious “blips” in extremely deep water about 400 miles southwest of the Azores. The location of the blips corresponded with the sub’s expected itinerary and drastically narrowed the search area from a 3,000-mile-long rectangle to three or four square miles. Thanks to Craven, the investigation took a spectacular leap forward.
Craven organized a full-blown Bayesian hunt for Scorpion from the very beginning. When the H-bomb was lost off the Spanish coast, Craven had turned almost accidentally to Bayes in the hope of deflecting Congressional displeasure in case of failure. This time, the navy moved tentatively but with growing faith to exploit the method.
“Craven had confidence in the scientific approach from the very beginning but, to put it mildly, that wasn’t everybody’s thing,” Richardson said. “Tough” is the word most often used to describe Craven, and for the next five months, from June to October 1968, he staunchly defended Bayes against skeptics. Although the H-bomb search in Palomares had failed to combine Bayesian priors and SEPs, Craven was enthusiastic when Richardson proposed doing so this time. A powerful computer in the United States would compute the probabilities of the various presearch hypotheses. Then this prior was to be combined and updated on shipboard with daily search results.
Shortly after the sub disappeared, Richardson was flown to the Azores to observe the surface search for Scorpion and to visit the USNS Mizar, a research ship conducting underwater operations. Personnel from the Naval Research Laboratory, the Navy Oceanographic Office, and various equipment makers were aboard the Mizar, working 12-hour shifts around the clock. Over the next five months, they cruised through the area for weeks at a time, dragging across the ocean bottom a sledlike platform covered with a wide-angle camera, sonars, and magnetometers. The chief scientist on board the Mizar, Chester L. “Buck” Buchanan, had originally designed the equipment to find the Thresher and had improved it greatly since. He vowed not to shave until the Scorpion was found.
Scorpion searchers faced even more uncertainties than the H-bomb hunters had along the Mediterranean coast: a remote location 400 miles from land-based navigation systems, an ocean floor two miles down, and no eyewitness accounts pinpointing the Scorpion’s location. Navigational systems also introduced large errors and uncertainties. Two land-based radio networks, Loran and the new global Omega, were too imprecise to be useful, satellite fixes were available only irregularly, and transponders anchored to the ocean bottom were often indistinguishable from one another.
When Richardson arrived on board the Mizar, he found the ship following orders from Washington to search off Point Oscar—over and over again. Craven’s early analysis of acoustic data suggested that the Scorpion might have settled near Oscar. Using Bayes, however, Richardson tried to show graphically that they had oversearched Point Oscar and that very little probability remained of finding Scorpion there. Despite his brilliant demonstration, the search around Oscar continued. Washington would have to issue orders to change operations, and this would require persuasion based upon calculation of a detailed probability map, that is, a Bayesian prior.
“In all the operations that I’ve ever operated in you have strong personalities with their own ideas, and you have to argue—unless somebody [like Craven] in Washington shoves it down their throat,” Richardson said. “Otherwise, you have to convince people. And they have to come to their own conclusions that it’s the right way to do it.” Besieged by Craven, authorities in Washington later ordered that the prior probability map be treated as an important factor in the search.
On July 18, 1968, a month after the Scorpion’s disappearance, Craven gave “a brain dump” for Richardson and a new Wagner employee, Lawrence D. (“Larry”) Stone. Craven reported everything he had learned from his experts, and Captain Andrews presented a submariner’s view of what a sub might do under various circumstances. Working in Washington, Craven and Andrews outlined nine scenarios that might explain how Scorpion sank. Then they assigned a weight to each according to how believable it was. This was the same approach Craven had used in the H-bomb search. Each scenario simulated Scorpion’s movements and multiple uncertainties as to its course, speed, and position at the time of the blip.
One high-priority scenario was based on a mysterious piece of bent metal found by the Mizar during a quick survey of the region before the start of systematic searching. The metal was so shiny that it could not have lain very long on the sea bottom, and it was far away from the overly investigated Point Oscar.
Richardson and Stone carried their copious notes to Wagner, Associates’ headquarters to quantify Craven and Andrews’s assumptions and compute a prior “probability map” of the sub’s location on the ocean floor. First, they established a search grid around the blip that Craven had identified as the probable location of the Scorpion’s explosion. Each cell in the grid measured one mile north–south and 0.84 miles east–west, for a total of 140 square miles.
At Richardson’s suggestion, the stateside search team made a key decision to use Monte Carlo methods to model the sub’s movements before and after the accident. Physicists on the Manhattan Project had pioneered Monte Carlo techniques for tracking the probable paths of neutrons in a chain reaction explosion. Richardson substituted “little hypothetical submarines” for the neutrons. Academic Bayesians would not adopt Monte Carlo methods for another 20 years.
Starting with Craven’s probable location of the explosion (the blip), a mainframe computed the probabilities that, in its death throes, the submarine changed course and traveled, for example, another mile in any of several random directions. Using Thomas Bayes’ simplification, Richardson began by considering each of those directions as equally probable. Then, making a point at each new possible location, the computer repeated the process to produce new points, reiterating the procedure 10,000 times to make 10,000 points on the seafloor where the sub might have settled.
The use of Monte Carlo simulation to generate numbers based on Craven’s presearch scenarios and weighting represented a big advance in search work. According to Richardson, “The nice thing with Monte Carlo is that you play a game of let’s pretend, like this: first of all there are ten scenarios with different probabilities, so let’s first pick a probability. The dice in this case is a random number generator in the computer. You roll the dice and pick a scenario to work with. Then you roll the dice for a certain speed, and you roll the dice again to see what direction it took. The last thing is that it collided with the bottom at an unknown time so you roll dice for the unknown time. So now you have speed, direction, starting point, time. Given them all, I know precisely where it [could have] hit the bottom. You have the computer put a point there. Rolling dice, I come up with different factors for each scenario. If I had enough patience, I could do it with pencil and paper. We calculated ten thousand points. So you have ten thousand points on the bottom of the ocean that represent equally likely positions of the sub. Then you draw a grid, count the points in each cell of the grid, saying that 10% of the points fall in this cell, 1% in that cell, and those percentages are what you use for probabilities for the prior for the individual distributions.”
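The dice-rolling procedure Richardson describes can be sketched in a few lines of modern code. This is a minimal illustration, not the classified analysis: the scenario weights, speed and time ranges, and three-scenario list are made-up stand-ins for the nine weighted scenarios Craven and Andrews actually produced; only the overall shape (sample a scenario, sample its unknowns, drop a point, bin the points into grid cells) follows the text.

```python
import math
import random
from collections import Counter

random.seed(1968)

# Hypothetical scenarios: (weight, max speed in knots, max hours to bottom).
# Illustrative values only -- not the real Craven/Andrews assignments.
scenarios = [
    (0.5, 5.0, 0.5),    # e.g., slow glide after the blip
    (0.3, 15.0, 0.25),  # e.g., powered run before flooding
    (0.2, 2.0, 1.0),    # e.g., near-vertical drifting descent
]
weights = [w for w, _, _ in scenarios]

N = 10_000                     # "little hypothetical submarines"
CELL_NS, CELL_EW = 1.0, 0.84   # cell size in miles, as in the Scorpion grid

counts = Counter()
for _ in range(N):
    # "Roll the dice" for each unknown, as Richardson describes.
    _, max_speed, max_time = random.choices(scenarios, weights=weights)[0]
    speed = random.uniform(0, max_speed)
    heading = random.uniform(0, 2 * math.pi)  # every direction equally probable
    t = random.uniform(0, max_time)           # unknown time until it hit bottom
    dist = speed * t
    x, y = dist * math.cos(heading), dist * math.sin(heading)
    counts[(int(x // CELL_EW), int(y // CELL_NS))] += 1  # one point on the seafloor

# The fraction of points landing in each cell is that cell's prior probability.
prior = {cell: n / N for cell, n in counts.items()}
```

As in 1968, nothing here is analytically clever: the prior map falls out of brute repetition, which is exactly why the calculation needed a mainframe rather than pencil and paper.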
The 10,000 points were calculated on a mainframe computer in a small Princeton company that encoded classified data and punched it into paper tape. Such computers were available only on the mainland in the 1960s. Still in the future were so-called portable modems, 45-pound backbreakers for dialing into phone lines.
As cumbersome as it seems today, the time-shared mainframe made Bayes’ repetitive calculations feasible. It calculated the coordinates of Scorpion’s 10,000 possible locations and then counted the number of points that fell in each cell of the search grid. Lacking any sort of display screens, the computer printed out the numbers on ordinary teletype paper tape. Then the data were relayed over insecure public telephone lines to Richardson and Stone in Paoli. It was the only practical way of incorporating all the presearch data accumulated by Craven and Andrews in Washington into a detailed probability map.
Richardson later felt guilty about calculating only a “skimpy” 10,000 points, but at the time it seemed like a big number. Today’s computers refine detail even in low-probability areas. When the 10,000-point map was finished, it described the initial probabilities in 172 cells covering 140 square miles. Two cells, E5 and B7, stood out like rock stars under spotlights. With presearch simulation “hits” of 1,250 and 1,096, respectively, they were by far the most likely resting places for Scorpion and its crew. The next 18 most likely cells had far lower probabilities, between 100 and 1,000; and most cells (which had scores below 100) seemed almost irrelevant. The map was based on hours of conversations with Craven and Andrews, their scenarios, and their weighting. Unlike the H-bomb analyses of two years before, this map represented a real scientific advance, primarily because of the Monte Carlo calculations of the Scorpion’s possible movements.
By late July, the map was ready, and Washington ordered that it be treated as an important factor in the search. It was now time for Bayesian updating, with data on the effectiveness of the fleet’s search effort in each cell.
Aboard the Mizar, mathematicians from Wagner, Associates accumulated and recorded the effectiveness of each day’s search. Stone and, later, two young students—Steven G. Simpson, a mathematics Ph.D. candidate from MIT, and James A. Rosenberg, a Drexel University undergraduate co-op student—worked on the area identified by Craven’s scenarios about 2,000 yards from Buchanan’s shiny piece of metal. Calculating by hand, they estimated the capabilities of the fleet’s cameras, sonars, and magnetometers and combined them into a single number expressing the effectiveness of the search conducted in each cell of the sea-bottom grid. These numbers would eventually become the second component in Bayes’ formula. Each morning the students had the unenviable task of tactfully advising a succession of naval commodores on the effectiveness of their searches: “Well, sir, I think it would be better if you did this rather than that.”
Psychologically, a search can be difficult. Until the target is found, every day represents a failure. As Stone put it, “Bayes says that the longer you search without finding the target, the worse your prospects are, because the remaining time to detect the target gets longer, not shorter.”13 On the other hand, those with confidence in Bayes’ rule could trace their progress. “Areas that you search go down in probability,” Richardson explained, “and areas that you haven’t searched go up. So your updated probabilities get higher where you haven’t been looking. . . . And generally, it’s always most optimal to keep looking in the highest probability area. The next day you have a high probability area somewhere else, probably not where you searched, and they pop up somewhere else the third day, and you just keep doing it and doing it and doing it. And unless you’ve made some drastic mistake, you’ll eventually find what you’re looking for.”
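The mechanics behind Richardson’s description are the standard search-theory update: after a day of unsuccessful search, each cell’s prior is multiplied by the probability of *not* detecting the target there given that day’s effort, and the results are renormalized. The sketch below uses the cell names E5 and B7 from the prior map, but the probabilities and detection effectiveness are invented for illustration.

```python
def bayes_update(prior, detection_prob):
    """Update cell probabilities after a day of unsuccessful search.

    prior: dict mapping cell -> P(target in cell)
    detection_prob: dict mapping cell -> P(detect | target in cell, today's
        effort) -- the "local effectiveness probability" for that cell.
    """
    # Posterior is proportional to prior * P(no detection | target in cell).
    unnorm = {c: p * (1 - detection_prob.get(c, 0.0)) for c, p in prior.items()}
    total = sum(unnorm.values())  # P(no detection anywhere today)
    return {c: v / total for c, v in unnorm.items()}

# Toy three-cell example (invented numbers): search E5 hard, leave the rest.
prior = {"E5": 0.6, "B7": 0.3, "C2": 0.1}
posterior = bayes_update(prior, {"E5": 0.8})
# E5 falls from 0.6 to about 0.23; B7 and C2 both rise.
```

This is exactly the behavior Richardson describes: a fruitless day of searching drains probability from the cells covered and pushes it into the cells not yet visited, so the “highest probability area” keeps moving until the target turns up.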
As it was during the H-bomb hunt, the biggest problem turned out to be overestimates of the sensors’ capabilities. Most of them had never been tested or evaluated systematically for how well they could detect a piece of metal to the left or right of their detectors. Thinking over the problem, Richardson realized, “So you had two uncertainties and, if your objective is to come up with the mathematical expression for how to allocate resources optimally, that’s an interesting point.”
The Theory That Would Not Die Page 26