Turing's Cathedral
The Institute for Advanced Study computer was duplicated, with variation, by a first generation of immediate siblings that included SEAC in Washington, D.C., ILLIAC at the University of Illinois, ORDVAC at Aberdeen, JOHNNIAC at the RAND Corporation, MANIAC at Los Alamos, AVIDAC at Argonne, ORACLE at Oak Ridge, BESK in Stockholm, DASK in Copenhagen, SILLIAC in Sydney, BESM in Moscow, PERM in Munich, WEIZAC in Rehovot, and the IBM 701. “There are a number of offspring of the Princeton machine, not all of them closely resembling the parent,” Willis Ware reported in March 1953. “From about 1949 on, engineering personnel visited us rather steadily and took away designs and drawings for duplicate machines.”15
In turn, von Neumann made visits to the other laboratories, and freely exchanged ideas. Physicist Murray Gell-Mann was working during the summer of 1951 at the Control Systems Laboratory, housed immediately above the ILLIAC at the University of Illinois. “This was used on secret government work,” says David Wheeler, “and some wires came down.” Gell-Mann and Keith Brueckner had been assigned, by their air force sponsors, “to imagine that we had very, very bad computer parts. And we were to make a very reliable computer out of it.” After a lot of work, they were able to show that even with logical components that had “a 51% probability of being right and a 49% probability of being wrong,” they could design circuits so “that the signal was gradually improved.” They were trying to show exponential improvement, and were getting close. “The project hired various consultants, including Johnny von Neumann for one day,” Gell-Mann adds. “He liked to think about problems while driving across the country. So he was driving to Los Alamos to work on thermonuclear weapon ideas, and on the way he stopped in Urbana for a day and consulted for us. God knows what they had to pay him.”16
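In modern terms, the heart of the trick is majority voting: run many unreliable copies of the same computation and take the most common answer. The short sketch below is an illustration only, not Gell-Mann and Brueckner’s actual construction or von Neumann’s multiplexing scheme; it computes the probability that a majority vote over n components, each right with probability 0.51, gives the correct answer, and shows that probability creeping toward certainty as n grows.

from math import comb

def majority_correct(p: float, n: int) -> float:
    """Probability that more than half of n independent components,
    each correct with probability p, agree on the right answer
    (n is taken to be odd, so there are no ties)."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(n // 2 + 1, n + 1))

p = 0.51                   # components "51% right, 49% wrong"
for n in (1, 101, 1001):   # ever-larger bundles of unreliable parts
    print(n, round(majority_correct(p, n), 3))

Repeating the vote in stages, as their circuits did, pushes the reliability of the final signal as close to 1 as desired, at the cost of more and more redundant hardware.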
In late 1951, von Neumann wrote up these ideas in a short manuscript, “Reliable Organizations of Unreliable Elements,” and in January 1952 he gave a series of five lectures at the California Institute of Technology, later published as Probabilistic Logics and the Synthesis of Reliable Organisms from Unreliable Components, in which he began to formulate a theory of reliability, in his characteristic, axiomatic way. “Error is viewed, therefore, not as an extraneous and misdirected or misdirecting accident, but as an essential part of the process,” he announced. He thanked Keith A. Brueckner and Murray Gell-Mann for “some important stimuli on this subject,” but not in any detail. “I wasn’t upset at all, at the time,” says Gell-Mann. “I thought, my God, this great man is referring to me in the footnote. I’m in the footnote! I was so flattered, and I suppose Keith was, too.”17
Second- and third-generation copies of the IAS machine followed before the decade was out. Larger computers with larger memories spawned larger, more complex codes, in turn spawning larger computers. Hand-soldered chassis gave way to printed circuits, integrated circuits, and eventually microprocessors with billions of transistors imprinted on silicon without being touched by human hands. The 5 kilobytes of random-access electrostatic memory that hosted von Neumann’s original digital universe at a cost of roughly $100,000 in 1947 dollars costs less than 1⁄100 of one cent—and cycles 1,000 times as fast—today.
In 1945, the Review of Economic Studies had published von Neumann’s “Model of General Economic Equilibrium,” a nine-page paper read to a Princeton mathematics seminar in 1932 and first published (in German) in 1937. Von Neumann elucidated the behavior of an economy where “goods are produced not only from ‘natural factors of production,’ but … from each other.” In this autocatalytic economy, equilibrium and expansion coexist at the saddle point between convex sets. “The connection with topology may be very surprising at first,” von Neumann noted, “but the author thinks that it is natural in problems of this kind.”18
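In its now-standard matrix form (the notation here is the modern textbook one, not that of the 1937 paper), the model takes an input matrix A and an output matrix B, a vector x of activity intensities, a vector p of prices, an expansion factor α, and an interest factor β, and requires

\[
Bx \;\ge\; \alpha A x, \qquad p^{\mathsf{T}} B \;\le\; \beta\, p^{\mathsf{T}} A, \qquad x \ge 0,\; p \ge 0,
\]
\[
\alpha^{*} \;=\; \max_{x \ge 0}\,\min_{p \ge 0} \frac{p^{\mathsf{T}} B x}{p^{\mathsf{T}} A x}
\;=\; \min_{p \ge 0}\,\max_{x \ge 0} \frac{p^{\mathsf{T}} B x}{p^{\mathsf{T}} A x}
\;=\; \beta^{*}.
\]

The optimal intensities and prices sit at a saddle point of this ratio, with the maximal growth factor equal to the minimal interest factor; the fixed-point argument needed to prove that such a point exists is the “connection with topology” von Neumann refers to.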
Some of the assumptions of von Neumann’s “expanding economic model”—that “natural factors of production, including labour, can be expanded in unlimited quantities” and that “all income in excess of necessities of life will be reinvested”—appeared unrealistic at the time; less so now, when self-reproducing technology is driving economic growth. We measure our economy in money, not in things, and have yet to develop economic models that adequately account for the effects of self-reproducing machines and self-replicating codes.
After von Neumann’s departure for the AEC, the IAS computing group began working on “the problem of synthesizing (‘near minimal’) combinatorial switching circuits” in general, and the problem “of designing a digital computer” as a special case of a circuit that can optimize itself. “This synthesis can be executed by a digital computer, in particular, by the computer to be designed if sufficiently large,” they reported in April 1956, concluding that “it appears that we have thereby exhibited a machine which can reproduce (i.e. design) itself. This result seems to be related to the self-reproducing machines of von Neumann.”19 They were right.
Codes populating the growing digital universe soon became Turing-complete, much as envisioned by Ulam and von Neumann in 1952. Turing’s ACE, a powerful Universal Machine, was to have had a memory of 25 kilobytes, or 2 × 10⁵ bits. The present scale of the digital universe has been estimated at 10²² bits. The number of Turing machines populating this universe is unknown, and increasingly these machines are virtual machines that do not necessarily map to any particular physical hardware at any particular time. They exist as precisely defined entities in the digital universe, but have no fixed existence in ours. And they are proliferating so fast that real machines are struggling to catch up with the demand. Physical machines spawn virtual machines that in turn spawn demand for more physical machines. Evolution in the digital universe now drives evolution in our universe, rather than the other way around.
Theory of Self-Reproducing Automata was to present a grand, unifying theory—one reason von Neumann was saving it for last. The new theory would apply to biological systems, technological systems, and every conceivable and inconceivable combination of the two. It would apply to automata, whether embodied in the physical world, the digital universe, or both, and would extend beyond existing life and technology on Earth.
Von Neumann rarely discussed extraterrestrial life or extraterrestrial intelligence; terrestrial life and intelligence were puzzling enough. Nils Barricelli was less restrained. “The conditions for developing organisms with many of the properties considered characteristic of living beings, by evolutionary processes, do not have to be similar to those prevailing on Earth,” he concluded, based on his numerical evolution experiments at the IAS. “There is every reason to believe that any planet on which a large variety of molecules can reproduce by interconnected (or symbiotic) autocatalytic reactions, may see the formation of organisms with the same properties.”20 One of these properties, independent of the local conditions, might be the development of the Universal Machine.
Over long distances, it is expensive to transport structures, and inexpensive to transmit sequences. Turing machines, which by definition are structures that can be encoded as sequences, are already propagating themselves, locally, at the speed of light. The notion that one particular computer resides in one particular location at one time is obsolete.
If life, by some chance, happens to have originated, and survived, elsewhere in the universe, it will have had time to explore an unfathomable diversity of forms. Those best able to survive the passage of time, adapt to changing environments, and migrate across interstellar distances will become the most widespread. A life form that assumes digital representation, for all or part of its life cycle, will be able to travel at the speed of light. As artificial intelligence pioneer Marvin Minsky observed on a visit to Soviet Armenia in 1970, “Instead of sending a picture of a cat, there is one area in which you can send the cat itself.”21
Von Neumann extended the concept of Turing’s Universal Machine to a Universal Constructor: a machine that can execute the description of any other machine, including a description of itself. The Universal Constructor can, in turn, be extended to the concept of a machine that, by encoding and transmitting its own description as a self-extracting archive, reproduces copies of itself somewhere else. Digitally encoded organisms could be propagated economically even with extremely low probability of finding a host environment in which to germinate and grow. If the encoded kernel is intercepted by a host that has discovered digital computing—whose ability to translate between sequence and structure is as close to a universal common denominator as life and intelligence running on different platforms may be able to get—it has a chance. If we discovered such a kernel, we would immediately replicate it widely. Laboratories all over the planet would begin attempting to decode it, eventually compiling the coded sequence—intentionally or inadvertently—to utilize our local resources, the way a virus is allocated privileges within a host cell. The read/write privileges granted to digital codes already include material technology, human minds, and, increasingly, nucleotide synthesis and all the ensuing details of biology itself.
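The software version of that idea fits in two lines. The sketch below is an illustrative toy, not von Neumann’s cellular construction: a program that carries its own description as data, plus just enough machinery to interpret that description, so that running it prints an exact copy of the program itself.

s = 's = %r\nprint(s %% s)'
print(s % s)

The string s plays the role of the transmitted description; the print statement is the constructor that reads the description and emits the copy.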
The host planet would have to not only build radio telescopes and be actively listening for coded sequences, but also grant computational resources to signals if and when they arrived. The SETI@home network now links some five million terrestrial computers to a growing array of radio telescopes, delivering a collective 500 teraflops of fast Fourier transforms representing a cumulative two million years of processing time. Not a word (or even a picture) so far—as far as we know.
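What each of those computers is doing is conceptually simple. The sketch below is a toy version of the underlying operation—an illustration of the principle, not SETI@home’s actual pipeline, and the frequencies and thresholds are invented: bury a weak narrowband tone in noise, take a Fourier transform, and flag any frequency bin whose power stands far above the noise floor.

import numpy as np

rng = np.random.default_rng(0)
n, fs = 2**16, 1.0e4                           # samples and an assumed sample rate (Hz)
t = np.arange(n) / fs
tone = 0.1 * np.sin(2 * np.pi * 1234.5 * t)    # weak tone, about 23 dB below the noise
x = tone + rng.normal(0.0, 1.0, n)             # "observation": tone buried in Gaussian noise

power = np.abs(np.fft.rfft(x * np.hanning(n))) ** 2   # windowed power spectrum
freqs = np.fft.rfftfreq(n, d=1.0 / fs)

floor = np.median(power)                       # robust estimate of the noise floor
print("candidate frequencies (Hz):", freqs[power > 25 * floor])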
Sixty-some years ago, biochemical organisms began to assemble digital computers. Now digital computers are beginning to assemble biochemical organisms. Viewed from a distance, this looks like part of a life cycle. But which part? Are biochemical organisms the larval phase of digital computers? Or are digital computers the larval phase of biochemical organisms?
According to Edward Teller, Enrico Fermi asked the question “Where is everybody?” at Los Alamos in 1950, when the subject of extraterrestrial beings came up over lunch. Fifty years later, over lunch at the Hoover Institution at Stanford University, I asked a ninety-one-year-old Edward Teller how Fermi’s question was holding up. John von Neumann, Theodore von Kármán, Leo Szilard, and Eugene Wigner, Teller’s childhood colleagues from Budapest, had all predeceased him. Of the five Hungarian “Martians” who brought the world nuclear weapons, digital computers, much of the aerospace industry, and the beginnings of genetic engineering, only Edward Teller, carrying a wooden staff at his side like an Old Testament prophet, was left.
His limp, from losing most of a foot to a Munich streetcar in 1928, had grown more pronounced, just as memories of his Hungarian youth had become more vivid as his later memories were beginning to fade. “I remember the bridges, the beautiful bridges,” he says of Budapest.22 Although Teller served (with von Neumann and German rocket pioneer Wernher von Braun) as one of the models for the composite title character in Stanley Kubrick’s cold war masterpiece Dr. Strangelove, nuclear weapons in the hands of Teller are, to me, less terrifying than they are in the hands of a new generation of nuclear weaponeers who have never witnessed an atmospheric test firsthand.
Teller assumed that I had come to ask him about the Teller-Ulam invention, and provided a lengthy account of the genesis of the hydrogen bomb, and of the fission implosion-explosion required to get the thermonuclear fuel to ignite. “The whole implosion idea—that is, that one can get densities considerably greater than normal—came from a visit from von Neumann,” he told me. “We proposed that together to Oppenheimer. He at once accepted.”23 With the hydrogen bomb out of the way, I mentioned that I was interested in the status of the Fermi paradox after fifty years.
“Let me ask you,” Teller interjected, in his thick Hungarian accent. “Are you uninterested in extraterrestrial intelligence? Obviously not. If you are interested, what would you look for?”
“There’s all sorts of things you can look for,” I answered. “But I think the thing not to look for is some intelligible signal.… Any civilization that is doing useful communication, any efficient transmission of information will be encoded, so it won’t be intelligible to us—it will look like noise.”
“Where would you look for that?” asked Teller.
“I don’t know.…”
“I do!”
“Where?”
“Globular clusters!” answered Teller. “We cannot get in touch with anybody else, because they choose to be so far away from us. In globular clusters, it is much easier for people at different places to get together. And if there is interstellar communication at all, it must be in the globular clusters.”
“That seems reasonable,” I agreed. “My own personal theory is that extraterrestrial life could be here already … and how would we necessarily know? If there is life in the universe, the form of life that will prove to be most successful at propagating itself will be digital life; it will adopt a form that is independent of the local chemistry, and migrate from one place to another as an electromagnetic signal, as long as there’s a digital world—a civilization that has discovered the Universal Turing Machine—for it to colonize when it gets there. And that’s why von Neumann and you other Martians got us to build all these computers, to create a home for this kind of life.”
There was a long, drawn-out pause. “Look,” Teller finally said, lowering his voice to a raspy whisper, “may I suggest that instead of explaining this, which would be hard … you write a science-fiction book about it.”
“Probably someone has,” I said.
“Probably,” answered Teller, “someone has not.”
SIXTEEN
Mach 9
No time is there. Sequence is different from time.
—Julian Bigelow, 1999
“IN ALL THE YEARS after the war, whenever you visited one of the installations with a modern mainframe computer, you would always find somebody doing a shock wave problem,” remembers German American astrophysicist Martin Schwarzschild, who, still an enemy alien, enlisted in the U.S. Army at the outbreak of World War II. “If you asked them how they came to be working on that, it was always von Neumann who put them onto it. So they became the footprint of von Neumann, walking across the scene of modern computers.”1
Schwarzschild was assigned to the Aberdeen Proving Ground, where he studied the effects of the new “block buster” weapons, fueled with conventional explosives but of such size that most of the damage was caused by the shock wave rather than by the bomb debris itself. It was this problem, foreshadowing the effects of nuclear weapons, that first drew von Neumann to Aberdeen. Before von Neumann arrived, “we had incredible struggles,” according to Schwarzschild. “Even with days of arguing and thinking none of us could really figure out how one should exactly tell the engineers what we wanted.” Von Neumann’s solution was to have the engineers build a machine that could follow a limited number of simple instructions, and then let the mathematicians and physicists assemble programs as needed from those instructions, without having to go back to the engineers. “You immediately saw how you would write down sequences of statements to solve any particular problem,” says Schwarzschild, emphasizing “how dumb we were early in 1943 and how everything seemed terribly plain and straight-forward in 1944.”2 It was von Neumann who conveyed this approach, wherever it originated, to Aberdeen.
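A schematic of that division of labor might look like the toy machine below; the five-instruction set is invented for this illustration and is not the order code of any Aberdeen or Princeton machine. The engineers supply a fixed interpreter for a handful of simple instructions, and the mathematicians assemble whatever programs they need from those instructions, without going back to the engineers.

def run(program, memory):
    """Execute a list of (opcode, operand) pairs against a memory dict."""
    acc, pc = 0, 0                      # accumulator and program counter
    while pc < len(program):
        op, arg = program[pc]
        pc += 1
        if op == "LOAD":      acc = memory[arg]
        elif op == "ADD":     acc += memory[arg]
        elif op == "STORE":   memory[arg] = acc
        elif op == "DEC":     acc -= 1
        elif op == "JUMPNZ":  pc = arg if acc != 0 else pc
    return memory

# A program assembled from those five instructions: add cell "a" into "total",
# "n" times over, i.e. compute total = a * n by repeated addition.
program = [
    ("LOAD", "total"), ("ADD", "a"), ("STORE", "total"),   # total += a
    ("LOAD", "n"), ("DEC", None), ("STORE", "n"),          # n -= 1
    ("JUMPNZ", 0),                                         # repeat while n != 0
]
print(run(program, {"a": 7, "n": 5, "total": 0}))          # total ends up 35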
Eight years later, the high-explosive blockbusters of 1943 had become thermonuclear weapons, and the ENIAC and the Colossus had become fully universal machines, yet ideas were still exchanged in person at the speed of a propeller-driven DC-3. As the IAS computer was undergoing initial testing in January of 1952, von Neumann flew from California to Cocoa, Florida (near Cape Canaveral, later Cape Kennedy), for a meeting of air force officials and some sixty scientific advisers, prompted by the establishment of the Air Research and Development Command, for whom he had agreed to help set up a mathematical advisory group.
The top-secret Project Vista report on the role of nuclear weapons in the defense of Europe had just been released. The report argued—in keeping with the views of Oppenheimer and against the views of its air force sponsors—that tactical nuclear weapons aimed at the military battlefield, rather than strategic nuclear weapons aimed at civilian populations, might, both morally and militarily, be the better approach. It was Oppenheimer’s influence over the Vista report, as much as his public hesitation about thermonuclear weapons, that led to his security clearances being withdrawn. The unspoken agreement between the military and the scientists was that the military would not tell the scientists how to do science, and the scientists would not tell the military how to use the bombs. Oppenheimer had stepped out of bounds.
Von Neumann first flew from San Francisco to Tulsa, Oklahoma, via DC-4, a journey that “involved only two stops and two plane-changes: El Paso and Dallas.” From Tulsa to Cocoa required a first leg, “with stops at Muskogee, Fort Worth, Texarkana, Shreveport and a change at New Orleans,” aboard a DC-3. Then came a second leg, “with stops at Mobile, Pensacola, Panama City and a change at Tampa,” aboard a Lockheed Lodestar. Then came the last leg to Orlando, also by Lodestar, and finally an air force car to the “big but dilapidated” Indian River Hotel.3 He was able to return to Washington aboard a military aircraft, and finally by train to Princeton. In Washington he found that the AEC commissioners “wanted a discussion of some shock-wave questions,” while back at the Institute he found the computer looking “reasonably good,” despite a daily quota of “transients and faults.”4
Thermonuclear reactions in a bomb are over in billionths of a second, while thermonuclear reactions in a star play out over billions of years. Both time scales, beyond human comprehension, fell within the MANIAC’s reach. With the war over, Martin Schwarzschild had begun applying desk calculators and punched card tabulating equipment to the problem of stellar evolution, combining Bethe’s theories of 1938 with the techniques developed at Los Alamos to calculate radiation opacity and equations of state. “To get a solution for a particular star for one particular time in the life of the star would take two or three months,” Schwarzschild explained, despite the assistance of the new Watson Scientific Computing Laboratory, built at Columbia University by IBM. “The amount of numerical computation necessary is awfully big,” he reported to Subrahmanyan Chandrasekhar at the beginning of December 1946. “I’m just finishing the solutions for the convective core. However, the integrations of the seventeen necessary particular solutions for the radiative envelope have only just been started by the I.B.M. laboratory on the new relay multipliers.… I fear that the numerical work will still be far from its end at Christmas.”5
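Behind those seventeen particular solutions stand the four standard equations of stellar structure, written here in their modern textbook form rather than as Schwarzschild tabulated them: hydrostatic equilibrium, mass continuity, energy generation, and radiative transport,

\[
\frac{dP}{dr} = -\frac{G\,m(r)\,\rho}{r^{2}}, \qquad
\frac{dm}{dr} = 4\pi r^{2}\rho, \qquad
\frac{dL}{dr} = 4\pi r^{2}\rho\,\varepsilon, \qquad
\frac{dT}{dr} = -\frac{3\kappa\rho}{16\sigma T^{3}}\,\frac{L(r)}{4\pi r^{2}},
\]

closed by an equation of state P = P(ρ, T), with the radiation opacity κ and the nuclear energy generation rate ε drawn from Bethe’s 1938 theory. Each “solution for a particular star” meant integrating this coupled system, by desk calculator and punched card, until the pieces for the convective core and the radiative envelope fit together.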