Turing's Cathedral


by George Dyson


  “By the end of 1950,” reported Bigelow, “it was now possible to put a program into the machine and get results out. During the spring of 1951, the machine became increasingly available for use, and programmers were putting their programs on for exploratory runs, debugging, etc. and the machine error rate had become low enough so that most of the errors found were in their own work.”

  During the summer of 1951, “a team of scientists from Los Alamos came and put a large thermonuclear calculation on the IAS machine; it ran for 24 hours without interruption for a period of about 60 days,” Bigelow continues. “So it had come alive.”63

  The digital universe and the hydrogen bomb were brought into existence at the same time. “It is an irony of fate,” observes Françoise Ulam, “that much of the high-tech world we live in today, the conquest of space, the extraordinary advances in biology and medicine, were spurred on by one man’s monomania and the need to develop electronic computers to calculate whether an H-bomb could be built or not.”64

  Von Neumann, a member of the Institute for Advanced Study, spent much of his time working on weapons, whereas Ulam, a member of the Los Alamos weapons laboratory, spent most of his time on pure mathematical research. And while von Neumann began working on intercontinental ballistic missiles, or ICBMs, Ulam began thinking about how to use bombs to launch missiles, instead of how to use missiles to launch bombs.

  “The idea of nuclear propulsion of space vehicles was born as soon as nuclear energy became a reality,” he explains. While others who visited the Trinity test site marveled at how the shot tower had been vaporized by the explosion, Ulam observed that the steel reinforcement at the base of the tower had survived the explosion intact. Perhaps objects caught within the fireball could survive the explosion and even be propelled somewhere else. The question of whether the energy produced by a small fission explosion could be channeled outward to drive the propulsion of a space vehicle was similar to the question of whether this energy could be channeled inward to drive the implosion of a thermonuclear bomb. Ulam’s idea was the hydrogen bomb turned inside out.

  In 1955, with Cornelius Everett, Ulam produced a classified Los Alamos report, “On a Method of Propulsion of Projectiles by Means of External Nuclear Explosions,” suggesting that “repeated nuclear explosions outside the body of a projectile are considered as providing a means to accelerate such objects to velocities of the order of 10⁶ cm/sec … in the range of the missiles considered for intercontinental warfare and even more perhaps, for escape from the earth’s gravitational field.”65

  This report lay idle for two years; then, after the launch of the Soviet Sputnik, Ted Taylor took up the idea where Ulam had left off and developed it into plans for a real spaceship. Project Orion, funded at first by the Department of Defense’s Advanced Research Projects Agency (ARPA) and later by the air force, was pursued seriously for the next eight years. “It is almost like Jules Verne’s idea of shooting a rocket to the moon,” Ulam testified before Senator Albert Gore in early 1958.66 On April 1, Ulam issued another Los Alamos report, “On the Possibility of Extracting Energy from Gravitational Systems by Navigating Space Vehicles,” describing how a spacecraft might operate as a gravitational “Maxwell’s demon,” amplifying a limited supply of fuel and propellant by using computational intelligence to select a trajectory that harvested energy from celestial bodies as it passed by.

  In 1871, James Clerk Maxwell, the namesake for both Maxwell’s equations formalizing the concept of an electromagnetic field and the Maxwellian distribution of kinetic energy among the particles of a gas, conceived an imaginary being—termed “Maxwell’s demon” by William Thomson (Lord Kelvin) in 1874—“whose faculties are so sharpened that he can follow every molecule in its course.”67 The demon appears to defy the second law of thermodynamics by heating a compartment in an otherwise closed system, without the expenditure of physical work, by opening and closing a small trap door, at exactly the right time, to let high-velocity molecules in and low-velocity molecules out. A Maxwellian distribution of energy describes how, without supernatural intelligence, kinetic energy tends to equalize across a population of particles over time. Light particles end up moving faster at the heavier particles’ expense. A 4,000-ton spaceship will end up moving faster than a planet—given enough time. Maxwell first developed these ideas, later adapted to thermodynamics, to explain the distribution, by size and velocity, of particles that make up Saturn’s rings.

  “As examples of the situation we have in mind,” explained Ulam, “assume a rocket cruising between the Sun and Jupiter, i.e., in an orbit approximately that of Mars.… The question is whether, by planning suitable approaches to Jupiter and then closer approaches to the sun, it could acquire, say, 10 times more energy.… By steering the rocket, one can to some modest extent acquire the properties of a Maxwell demon … to shorten by many orders of magnitude the time necessary for acquisition of very high velocities.”68

  “I remember Stan talking about being able to make a Maxwell’s demon, that it could be a possible physical thing,” Ted Taylor recalls. The required computational intelligence, viewed as a major obstacle in 1958, would be the least of the obstacles today. “The computations required to plan changes in the trajectory might be of prohibitive length and complication,” Ulam warned.69
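  The arithmetic behind Ulam’s demon-like steering is the now-familiar gravity assist: in the planet’s own frame of reference the encounter leaves the spacecraft’s speed unchanged, merely turning its velocity, yet transforming back to the Sun’s frame can leave the craft moving faster at the planet’s expense. A minimal two-dimensional sketch (the function name, the instantaneous turn, and the numbers are illustrative assumptions, not Ulam’s calculation):

```python
import math

def flyby_speed_gain(v_in, v_planet, turn_deg):
    """Change in heliocentric speed from an idealized flyby.

    v_in and v_planet are (vx, vy) velocity vectors; the planet's
    gravity turns the spacecraft's planet-frame velocity by turn_deg
    while preserving its magnitude (energy conservation in that frame).
    """
    # move into the planet's frame of reference
    ux, uy = v_in[0] - v_planet[0], v_in[1] - v_planet[1]
    th = math.radians(turn_deg)
    # rotate the relative velocity; its magnitude is unchanged
    rx = ux * math.cos(th) - uy * math.sin(th)
    ry = ux * math.sin(th) + uy * math.cos(th)
    # transform back to the heliocentric frame
    wx, wy = rx + v_planet[0], ry + v_planet[1]
    return math.hypot(wx, wy) - math.hypot(*v_in)
```

With a stationary “planet” the gain is zero; with a moving one, a well-chosen turn converts some of the planet’s orbital motion into spacecraft speed, which is exactly the energy harvesting Ulam proposed to compound over many planned encounters.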

  Ulam himself appeared to violate the second law of thermodynamics by performing useful work, with no visible expenditure of energy, simply by opening doors to the right ideas at the right time. Whether over coffee in Lwów or poker at Los Alamos, he let good ideas in and kept bad ideas out. “My incredible luck,” he bragged to von Neumann from Los Alamos in February 1952, “was evident in poker (8 successive + earnings) this year.”70 Four of the twentieth century’s most imaginative ideas for leveraging our intelligence—the Monte Carlo method, the Teller-Ulam invention, self-reproducing cellular automata, and nuclear pulse propulsion—originated with help from Stan. Three of the four proved to be wildly successful, and the fourth was abandoned before it had a chance.

  Monte Carlo was the realization, through digital computing, of what Maxwell could only imagine: a way to actually follow the behavior of a physical system at its elemental levels, as “if our faculties and instruments were so sharpened that we could detect and lay hold of each molecule and trace it through all its course.”71 The Teller-Ulam invention invoked a form of Maxwell’s demon to heat a compartment to a temperature hotter than the sun by letting a burst of radiation in, and then, for an equilibrium-defying instant, not letting radiation out. Ulam’s self-reproducing cellular automata—patterns of information persisting across time—evolve by letting order in but not letting order out.
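  The essence of the method fits in a few lines. The textbook illustration below estimates π by following individual random samples and letting the aggregate reveal the answer, standing in for the neutron histories the Los Alamos codes actually traced (the function and parameters are illustrative, not the original codes):

```python
import random

def estimate_pi(n_samples=100_000, seed=0):
    """Monte Carlo estimate of pi: the fraction of random points in the
    unit square that land inside the quarter-circle estimates pi/4."""
    rng = random.Random(seed)
    inside = sum(1 for _ in range(n_samples)
                 if rng.random() ** 2 + rng.random() ** 2 <= 1.0)
    return 4.0 * inside / n_samples
```

The error shrinks only as the square root of the number of samples, which is why the method had to wait for machines that could follow millions of histories.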

  When Nicholas Metropolis and Stanley Frankel began coding the first bomb calculations for the ENIAC, there was room for only a one-dimensional universe—represented by a single line, in our universe, extending outward from the center of the bomb. By assuming spherical symmetry, what was learned in that one-dimensional universe could be used to predict three-dimensional behavior in ours. Ulam began imagining how, in a one-dimensional universe, cosmology might evolve. “Has anybody considered the following problem—which appears to me very pretty,” he wrote to von Neumann in February 1949. “Imagine that on the infinite line –∞ to +∞ I have occupied the integer points each with probability say ½ by material point masses—i.e. I have this situation,” and he sketched a random distribution of points on a line. “This is a distribution at time t=0.”

  “Now between these points act 1/d2 forces (like gravitation),” he continued. “What will happen for t > 0? I claim that condensations will form quickly—assume, for simplicity when points touch they stick—with nice Gaussian-like distribution of masses. Then—the next stage—clusters of these condensations will form—somewhat slower but surely (all statements have probability = 1!).” Ulam explained how this simple one-dimensional universe would start to look “somewhat like the real Universe: stars, clusters, galaxies, super-galaxies etc.,” and then considered what might happen in two dimensions, and even three dimensions, by introducing range forces, thermal oscillation, and light. He concluded by suggesting “that ‘entropy’ decreases—an abnormal ‘order’ applies.”72
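  Ulam’s thought experiment is easy to put on a modern machine. The sketch below is an illustrative toy, not his formulation: a finite segment stands in for the infinite line, a softening cutoff tames the 1/d² force at short range, and time advances in explicit steps. Integer sites are occupied with probability ½, every pair attracts, and touching points stick, conserving mass and momentum:

```python
import random

def evolve(size=60, steps=300, dt=0.02, p=0.5, g=1.0, eps=0.5, seed=1):
    """Toy 1-D gravitating universe; returns [position, velocity, mass] clusters."""
    rng = random.Random(seed)
    # t = 0: occupy each integer site with probability p, unit masses at rest
    pts = [[float(i), 0.0, 1.0] for i in range(size) if rng.random() < p]
    for _ in range(steps):
        # pairwise 1/d^2 attraction, softened below distance eps
        for a in pts:
            acc = 0.0
            for b in pts:
                if b is a:
                    continue
                d = b[0] - a[0]
                acc += g * b[2] * (1.0 if d > 0 else -1.0) / max(d * d, eps * eps)
            a[1] += acc * dt
        for a in pts:
            a[0] += a[1] * dt
        # condensation: points that touch stick, conserving mass and momentum
        pts.sort(key=lambda q: q[0])
        merged = [pts[0]]
        for q in pts[1:]:
            last = merged[-1]
            if q[0] - last[0] < eps:
                m = last[2] + q[2]
                last[0] = (last[2] * last[0] + q[2] * q[0]) / m
                last[1] = (last[2] * last[1] + q[2] * q[1]) / m
                last[2] = m
            else:
                merged.append(q)
        pts = merged
    return pts
```

Running it shows the behavior Ulam claimed: isolated points quickly gather into condensations, and the condensations themselves then drift together into heavier clusters.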

  Ulam then began thinking about a two-dimensional, cellular universe, taking cues from the two-dimensional hydrodynamics codes that were being used in the work on bombs. “I discussed the cellular model with von Neumann in the late 1940’s,” he later wrote to Arthur Burks, and he evidently had similar discussions with Nick Metropolis as well. “I am coming to Los Alamos after all!!!,” Metropolis wrote to Ulam in June 1948. “Hope you will have a chance to do more about the geometry of phase space because it is something. And your two dimensional world.”73

  Meanwhile, in our three-dimensional universe, at 07 hours, 14 minutes, and 59 seconds local time on November 1, 1952 (October 31 in the United States), the Teller-Ulam invention, the Monte Carlo algorithm, the IAS computer, the resources of Los Alamos, and the efforts of some 11,652 people assigned to Task Force 132 in the South Pacific resulted in the detonation of Ivy Mike, the first hydrogen bomb.

  The size of a railroad car, with its nonnuclear components built by the American Car and Foundry Company of Buffalo, New York, Mike weighed 82 tons, much of that being a massive steel tank of liquid deuterium, cooled to minus 250 degrees Celsius and ignited by a TX-5 fission bomb. Exploded at the surface of a small island in Enewetak Atoll, Mike yielded 10.4 megatons—some 750 Hiroshimas—vaporizing 80 million tons of coral to leave a crater 6,300 feet in diameter and 160 feet deep, “large enough to hold 14 buildings the size of the Pentagon,” as it was put in one of the official reports. A thought that had first crossed Ulam’s mind while staring out into the garden less than three years previously had now removed the entire island of Elugelab from the map.

  Enewetak Atoll, comprising some thirty-nine small islands distributed around a central lagoon, was, like the Valles Caldera, the remnant of a former volcano whose collapse had left behind not a meadowed valley ringed by mountains, but a sheltered lagoon ringed by a coral reef. Remote even among the Marshall Islands, the atoll and its seafaring inhabitants had been left undisturbed until it was claimed by the Japanese after World War I, and then occupied by the United States after a fierce battle during World War II. All native residents were exiled in 1947 to Ujelang, an uninhabited atoll 140 miles away, when Enewetak was selected as a site for nuclear tests. It was at first assumed that Enewetak was too close to Kwajalein and a number of “small atolls populated by natives” for a test of the super bomb, but according to the notes of a meeting held on August 25, 1951, “it was Edward Teller’s opinion that a shot at Eniwetok [sic] was not out of the question … if one chooses a time when the wind was in the opposite direction and made advance preparations to evacuate Kwajalein.”74

  “Accompanied by a brilliant light, the heat wave was felt immediately at distances of thirty to thirty-five miles,” reports the official record of the test. “The tremendous fireball, appearing on the horizon like the sun when half-risen, quickly expanded … and a tremendous conventional mushroom-shaped cloud soon appeared, seemingly balanced on a wide, dirty stem … due to the coral particles, debris, and water which were sucked high into the air … around the area where the island of Elugelab had been.”75

  “The bomb went off at 7:15 a.m. in a partially cloudy sky, streaked with color from the rising sun,” noted Lauren Donaldson, a forty-nine-year-old fisheries biologist from the University of Washington who collected samples before and after the test. A week later he was still finding terns that “had their feathers burned, white feathers seemed to have been missed but dark were scorched,” and fish whose “skin was missing from a side as if they had been dropped in a hot pan.” He and his crew had made their own viewing goggles by attaching darkened welding glass to their diving masks, and from thirty miles away, aboard the Oakhill, “the fire ball as it developed seemed to boil at first and fold in like fruit boiling in a kettle. There were great blackened chunks that seemed to be included in the mass.”76

  Walter Munk and Willard Bascom, young oceanographers working for Scripps Institution of Oceanography, were dropped off by the converted tug Horizon on plywood rafts supported by truck tire inner tubes, 83 miles from ground zero, to measure the surface wave and, if there was any sign of a tsunami, to signal an alarm. The two rafts, stationed 2 miles apart, were anchored by piano wire to a pair of San Diego trolley wheels lowered to the summit of a seamount 4,500 feet below. “Wet and cold, I put on my high-density goggles,” Munk remembers. “An instant heat blast signaled the explosion; at 0721 a 5 millibar air shock arrived, a sharp report followed by angry rumbling. I will not forget the boiling sky overhead. None of the photographs I have ever seen captured this.”77

  After about an hour, the cloud, now some 60 miles in diameter, had, in the words of one observer, “splashed” against the tropopause at over 100,000 feet. A series of air force pilots, flying specially configured F-84G sampling aircraft and wearing lead-lined flight suits, were sent in to sample within the mushroom cloud. The first group went in 90 minutes after detonation, at 42,000 feet.

  “Immediately upon entering the cloud, RED LEADER was struck by its intense color,” the official history reports.

  The hand on the Integron, which showed the rate at which radioactivity was being accumulated, “went around like the sweep second hand on a watch.… And I had thought it would barely move!” The combination of most instruments indicating maximum readings and the red glow like the inside of a red-hot furnace was “staggering” and Colonel Meroney quickly made a 90-degree turn to leave the cloud.

  Meroney just made it, out of fuel, back to the airstrip at Enewetak. Jimmy Priestly Robinson, piloting the fourth F-84, was not as fortunate. “For reasons unknown, RED-4 spun out shortly after entering, but managed to regain control at 20,000 feet,” the report continues. At 19,000 feet Robinson reported his gauges showing empty but engine still running. His next transmission reported that his engine had flamed out and he was at 13,000 feet. A rescue crew was scrambled by helicopter to prepare to retrieve him. His final transmission was from 3,000 feet: “I have the helicopter in sight and am bailing out.” The aircraft flew into the water on a level glide, under control, and flipped over before it sank. No body was ever found.78

  Jimmy Priestly Robinson was the first person to be killed by a hydrogen bomb.

  The test remained top secret, and news of its success was embargoed from the public until an announcement by outgoing president Truman (just before Eisenhower’s inauguration) on January 7, 1953. Some 6,706 background checks were conducted on individuals involved with the test, and on November 14, J. Edgar Hoover was personally enlisted to try to ferret out the source of information that had leaked to reporters from Time and Life magazines. Ulam, who was on leave from Los Alamos at Harvard, came down to New York City to meet with von Neumann in early November, probably to receive the news firsthand. They evidently had a long conversation on a bench in Central Park; no record of their discussion of the Ivy Mike test survives, but a subsequent exchange of letters hints that the conversation extended to the possibility of a digital universe being brought to life.

  “Only because of our conversation on the bench in Central Park I was able to understand…[that] given is an actually infinite system of points (the actual infinity is worth stressing because nothing will make sense on a finite no matter how large model),” noted Ulam, who then sketched out how he and von Neumann had hypothesized the evolution of Turing-complete (or “universal”) cellular automata within a digital universe of communicating memory cells. The definitions had to be made mathematically precise:

  A “universal” automaton is a finite system which given an arbitrary logical proposition in form of (a linear set L) tape attached to it, at say specified points, will produce the true or false answer. (Universal ought to have relative sense: with reference to a class of problems it can decide.) The “arbitrary” means really in a class of propositions like Turing’s—or smaller or bigger.

  “An organism (any reason to be afraid of this term yet?) is a universal automaton which produces other automata like it in space which is inert or only ‘randomly activated’ around it,” Ulam’s notes continued. “This ‘universality’ is probably necessary to organize or resist organization by other automata?” he asked, parenthetically, before outlining a mathematical formulation of the evolution of such organisms into metazoan forms.

  Suppose that the states for each cell are only two, the cells of the same type, connections between neighbors inducing only the simplest change. The problem is to see whether there will exist boxes of these cells containing n (n big!) elements each, the no. of states then is 2n for each box; now we divide the 2n states into K classes (K small like 20) and call each class a state of the box. These boxes will then perhaps be able to play the role of our present cells.

  In the end Ulam acknowledged that a stochastic, rather than deterministic, model might have to be invoked, which, “unfortunately, would have to involve an enormous amount of probabilistic superstructure to the outlined theory. I think it should probably be omitted unless it involves the crux of the generation and evolution problem—which it might?”79
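  Ulam’s box construction can be sketched concretely. In the sketch below, rule 110, a two-state elementary cellular automaton now known to be Turing-universal, stands in for the “universal automaton” of his notes, and the coarse-graining follows his description: divide the cells into boxes of n elements, so each box has 2ⁿ internal states, and partition those states into K classes, each class counting as one “state of the box.” Everything here, including the particular partition chosen, is an illustrative assumption, not Ulam’s actual construction:

```python
def step(cells, rule=110):
    """One step of an elementary two-state cellular automaton on a ring.
    Rule 110 happens to be Turing-universal."""
    n = len(cells)
    return [(rule >> (cells[(i - 1) % n] << 2 | cells[i] << 1
                      | cells[(i + 1) % n])) & 1
            for i in range(n)]

def box_states(cells, n_box=8, k=4):
    """Ulam's coarse-graining: divide the 2^n states of each n-cell box
    into K classes and treat each class as one state of the box.
    The class here is simply the box's bit count modulo K, an
    arbitrary partition chosen for illustration."""
    boxes = [cells[i:i + n_box] for i in range(0, len(cells), n_box)]
    return [sum(b) % k for b in boxes]
```

Run the fine-grained automaton and the boxes trace out their own, coarser dynamics, which is roughly the sense in which Ulam hoped such boxes could “play the role of our present cells.”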

 
