Turing's Cathedral

by George Dyson


  In a digitally coded system, each digit has a precise meaning, and if even one digit is misplaced, the computation may produce a wrong answer or be brought to a halt. In a pulse-frequency-coded system, meaning is conveyed by the frequency at which pulses are transmitted between given locations—whether those locations are synapses within a brain or addresses on the World Wide Web. Shifting the frequency shifts the meaning, but the communication, storage, and interpretation of information are probabilistic and statistical, independent of whether each bit is in exactly the right place at exactly the right time. Meaning resides in what connects where, and how frequently, as well as in what is encoded in the signals themselves. As von Neumann explained in 1948, “A new, essentially logical, theory is called for in order to understand high-complication automata and, in particular, the central nervous system. It may be, however, that in this process logic will have to undergo a pseudomorphosis to neurology to a much greater extent than the reverse.”47
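
  A toy contrast, with parameters assumed purely for illustration rather than drawn from the text, makes the distinction concrete: in a positional digital code a single misplaced bit changes the value drastically, while in a rate code the meaning is a statistic that a few dropped or spurious pulses barely shift.

```python
import random

# Hypothetical illustration: digital (positional) coding versus
# pulse-frequency (rate) coding. All parameters are assumed for the sketch.

value = 0b11010010              # digital: each bit position carries exact meaning
corrupted = value ^ (1 << 7)    # misplace a single bit (flip the high-order bit)
print(value, corrupted)         # 210 versus 82: one bit, radically different meaning

target_rate = 40                # rate code: ~40 pulses per second is the "meaning"
trials, p = 4000, 0.01          # noisy, probabilistic pulse generation over 1 second
pulses = sum(random.random() < p for _ in range(trials))
print(target_rate, pulses)      # a few lost or extra pulses leave the estimate near 40
```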

  The reliability of monolithic microprocessors and the fidelity of monolithic storage postponed the necessity for this pseudomorphosis far beyond what seemed possible in 1948. Only recently has this pseudomorphosis resumed its course. The von Neumann address matrix is becoming the basis of a non–von Neumann address matrix, and Turing machines are being assembled into systems that are not Turing machines. Codes—we now call them apps—are breaking free from the intolerance of the numerical address matrix and central clock cycle for error and ambiguity in specifying where and when.

  The microprocessor, however, is here to stay, just as the advent of metazoan organisms did not bring the end of individual cells. Biological organisms are subdivided into cells, since the stochastic, template-based molecular addressing on which metabolism and replication depend works faster on a local scale. Technological organisms are also subdivided into cells (and processors subdivided into multiple cores), not only to isolate errors, but also because the numerical addressing on which digital processing depends can operate at nanosecond speed on only a local scale. Across larger domains—of both size and time—other forms of addressing and processing, and other architectures, are starting to evolve.

  In the age of all things digital we are building analog computers again. Didn’t analog computing disappear, in the age of the dinosaurs, with the superseding of the Bush differential analyzer by the ENIAC, when the race was on to perform high-speed arithmetic, leaving no doubt that digital computing would come out ahead? There are other benchmarks besides arithmetic, and Turing, von Neumann, and Bigelow, for all their contributions to the digital revolution, did not see analog computing as a dead end. Part of the problem, as Jack Good put it in 1962, is that “analogue computers are stupidly named; they should be named continuous computers.” For real-world questions—especially ambiguous ones—analog computing can be faster, more accurate, and more robust, not only at computing the answers, but also at asking the questions and communicating the results. Web 2.0 is our code word for the analog increasingly supervening upon the digital—reversing how digital logic was embedded in analog components, sixty years ago. Search engines and social networks are just the beginning—the Precambrian phase. “If the only demerit of the digital expansion system were its greater logical complexity, nature would not, for this reason alone, have rejected it,” von Neumann admitted in 1948.48

  Search engines and social networks are analog computers of unprecedented scale. Information is being encoded (and operated upon) as continuous (and noise-tolerant) variables such as frequencies (of connection or occurrence) and the topology of what connects where, with location being increasingly defined by a fault-tolerant template rather than by an unforgiving numerical address. Pulse-frequency coding for the Internet is one way to describe the working architecture of a search engine, and PageRank for neurons is one way to describe the working architecture of the brain. These computational structures use digital components, but the analog computing being performed by the system as a whole exceeds the complexity of the digital code on which it runs. The model (of the social graph, or of human knowledge) constructs and updates itself.
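
  As a deliberately simplified illustration of computing with "what connects where," here is a minimal PageRank power iteration over a hypothetical four-node link graph; the graph, damping factor, and iteration count are assumptions made for this sketch, not anything specified in the text.

```python
import numpy as np

# Hypothetical four-node link graph: node -> nodes it links to.
links = {0: [1, 2], 1: [2], 2: [0], 3: [0, 2]}
n, damping = 4, 0.85

# Column-stochastic matrix: column j spreads node j's rank over its outlinks.
M = np.zeros((n, n))
for j, outs in links.items():
    for i in outs:
        M[i, j] = 1.0 / len(outs)

rank = np.full(n, 1.0 / n)      # start with rank spread evenly
for _ in range(50):             # power iteration until the ranking settles
    rank = (1 - damping) / n + damping * M @ rank

print(rank)  # the ranking emerges from topology; small graph changes shift it smoothly
```

  The point of the analogy is that rank is a continuous, noise-tolerant quantity computed by the network as a whole, not a value fetched from an exact numerical address.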

  Complex networks—of molecules, people, or ideas—constitute their own simplest behavioral descriptions. This behavior can be more easily captured by continuous, analog networks than it can be defined by digital, algorithmic codes. These analog networks may be composed of digital processors, but it is in the analog domain that the interesting computation is being performed. “The purely ‘digital’ procedure is probably more circumstantial and clumsy than necessary,” von Neumann warned in 1951. “Better, and better integrated, mixed procedures may exist.”49

  Analog is back, and here to stay.

  FIFTEEN

  Theory of Self-Reproducing Automata

  I see the problem not from the mathematical point of view, as, for instance, von Neumann did, but as an engineer. It may be better that there is almost no support for such ideas. Perhaps the devil is behind it, too.

  —Konrad Zuse, 1976

  “THE CAMERA MOVES across the sky, and now the black serrated shape of a rocky island breaks the line of the horizon. Sailing past the island is a large, four-masted schooner. We approach, we see that the ship flies the flag of New Zealand and is named the Canterbury. Her captain and a group of passengers are at the rail, staring intently toward the east. We look through their binoculars and discover a line of barren coast.”1

  Thus begins Ape and Essence, Aldous Huxley’s lesser-known masterpiece, set in the Los Angeles of 2108, after a nuclear war (in the year 2008) has devastated humanity’s ability to reproduce high-fidelity copies of itself. On the twentieth of February 2108, the New Zealand Rediscovery Expedition to North America arrives among the Channel Islands off the California coast. The story is presented, in keeping with the Hollywood location, in the form of a film script. “New Zealand survived and even modestly flourished in an isolation which, because of the dangerously radioactive condition of the rest of the world, remained for more than a century almost absolute. Now that the danger is over, here come its first explorers, rediscovering America from the West.”2

  Huxley’s dystopian vision was published in 1948, when a third world war appeared all but inevitable, the prospects little brightened by von Neumann’s argument that the ultimate death toll could be minimized by launching a preventive attack. Although the exact mechanism by which genetic information was replicated had yet to be determined, there was no mistaking the effects of ionizing radiation on the transmission of instructions from one generation to the next. Huxley assumed that in the aftermath of nuclear war, the Darwinian evolution so championed by his grandfather, Thomas Huxley, would start to unwind.

  Too-perfect replication may, in the end, be as much of a threat. Darwinian evolution depends on copies not always being exact. The Los Angeles of 2108 may be rendered socially unrecognizable not by the error catastrophe of Ape and Essence but by its opposite: the ability to read genetic sequences into computers, replicate them exactly, and translate them back into living organisms, without a single bit misplaced along the way. A Los Angeles run by human beings able to specify the exact genetic characteristics of their offspring may be more terrifying than the Los Angeles governed by baboons that greeted the crew of the Canterbury in Huxley’s 2108.

  As primitive self-reproducing life forms adopted self-replicating polynucleotide symbionts as the carriers of hereditary information from one generation to the next, so might current life forms adopt computers as carriers of their genetic code. Nils Barricelli hinted at this in 1979, observing that nature has shown a tendency, among highly social organisms, “to separate the organisms of which the society is composed into two main classes, namely: the workers, specialized in carrying out all the tasks necessary for the society’s survival except reproduction, and the carriers of hereditary information, whose function is to reproduce the society.” This separation of reproductive function, Barricelli noted, “is common in highly organized societies of living organisms (queens and drones among ants, bees and termites, gametes among the species of cells forming a multicellular organism) except man, whose society is of relatively recent formation in biological terms.”3

  The powers of computers derive as much from their ability to copy as from their ability to compute. Sending an e-mail, or transferring a file, does not physically move anything; it creates a new copy somewhere else. A Turing machine is, by definition, able to make exact copies of any readable sequence—including its own state of mind and the sequence stored on its own tape. A Turing machine can, therefore, make copies of itself. This caught the attention of von Neumann at the same time as the Institute for Advanced Study computer project was being launched. “I did think a good deal about self-reproductive mechanisms,” he wrote to Norbert Wiener in November 1946. “I can formulate the problem rigorously, in [the same way in] which Turing did it for his mechanisms.” Von Neumann envisioned an axiomatic theory of self-reproduction, general enough to encompass both living organisms and machines, telling Wiener that “I want to fill in the details and to write up these considerations in the course of the next two months.”4
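
  In software terms, a machine that copies its own description corresponds to a quine, a program whose output is an exact copy of its own source. The few lines below are a standard minimal example in Python, offered only as an illustration of the idea, not as anything from von Neumann's or Turing's formulations.

```python
# A minimal quine: running this program prints its own source code exactly,
# the software analogue of a machine reading and copying its own description.
s = 's = %r\nprint(s %% s)'
print(s % s)
```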

  This mathematical theory of self-reproduction had to be grounded in what could be directly observed. “I would, however, put on ‘true’ understanding the most stringent interpretation possible,” he added. “That is, understanding the organism in the exacting sense in which one may want to understand a detailed drawing of a machine.”5 Individual molecules would be the only axiomatic parts. “It would be a mistake to aim at less than the complete determination of the charge distribution in the protein molecule—that is, a complete detailed determination of its geometry and structure,” he wrote to Irving Langmuir, seeking to enlist the chemist’s help. “Of course, the first really exciting structures, that is the first self-reproducing structures (plant virus and bacteriophages), are yet three powers of ten above the protein.”6

  Expanding the abilities of Turing’s Universal Machine, von Neumann showed “that there exists an automaton B which has this property: If you provide B with a description of anything, it consumes it and produces two copies of the description.” Von Neumann outlined this theory in a talk given in Pasadena, California, on September 20, 1948, more than four years before the details of how this is done in nature were revealed by Franklin, Watson, and Crick. “For the ‘self-reproduction’ of automata, Turing’s procedure is too narrow in one respect only,” he explained. “His automata are purely computing machines. Their output is a piece of tape with zeros and ones on it. What is needed … is an automaton whose output is other automata.”7

  Using the same method of logical substitution by which a Turing machine can be instructed to interpret successively higher-level languages—or by which Gödel was able to encode metamathematical statements within ordinary arithmetic—it was possible to design Turing machines whose coded instructions addressed physical components, not memory locations, and whose output could be translated into physical objects, not just zeros and ones. “Small variations of the foregoing scheme,” von Neumann continued, “also permit us to construct automata which can reproduce themselves and, in addition, construct others.” Von Neumann compared the behavior of such automata to what, in biology, characterizes the “typical gene function, self-reproduction plus production—or stimulation of production—of certain specific enzymes.”8
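
  A toy sketch, with illustrative names rather than von Neumann's own notation, can make the scheme explicit: a constructor that builds a machine from a description, a copier that duplicates the description uninterpreted, and a controller that attaches the copied description to the offspring, so that the pair of machine and description reproduces itself.

```python
# Toy rendering of the constructor/copier/controller scheme; "machines" are
# modeled as strings, and all names are illustrative assumptions.

def construct(description):     # A: build a machine from its description
    return f"machine[{description}]"

def copy(description):          # B: duplicate the description, uninterpreted
    return str(description)

def control(description):       # C: build offspring and hand it its own blueprint
    return construct(description), copy(description)

blueprint = "A+B+C"                          # description of the whole assembly
parent = (construct(blueprint), blueprint)   # machine paired with its description
child = control(parent[1])                   # reproduction: interpret and copy
print(child == parent)                       # True: the offspring matches the parent
```

  The crucial feature, which anticipates the dual role of genetic material, is that the description is used twice: once interpreted, to construct, and once copied blindly, to be inherited.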

  Viewing the problem of self-replication and self-reproduction through the lens of formal logic and self-referential systems, von Neumann applied the results of Gödel and Turing to the foundations of biology—although his conclusions had little effect on working biologists, just as his Mathematical Foundations of Quantum Mechanics had little effect on the day-to-day work of physicists at the time. Applying Turing’s proof of the unsolvability of the Entscheidungsproblem to the domain of self-reproducing automata, he concluded, in December 1949, that “in other words you can build an organ which can do anything that can be done, but you cannot build an organ which tells you whether it can be done.”9

  “This is connected with the theory of types and with the results of Gödel,” he continued. “The question of whether something is feasible in a type belongs to a higher logical type. It is characteristic of objects of low complexity that it is easier to talk about the object than produce it and easier to predict its properties than to build it. But in the complicated parts of formal logic it is always one order of magnitude harder to tell what an object can do than to produce the object.”10

  Can automata produce offspring as complicated, or more complicated, than themselves? “ ‘Complication’ on its lower levels is probably degenerative, that is, every automaton that can produce other automata will only be able to produce less complicated ones,” von Neumann explained. There is a certain level of complication, however, beyond which “the phenomenon of synthesis, if properly arranged, can become explosive, in other words, where syntheses of automata can proceed in such a manner that each automaton will produce other automata which are more complex and of higher potentialities than itself.”11

  This conjecture goes to the heart of the probability or improbability of the origin of life. If true, then the existence of a sufficiently complicated self-reproducing system may lead to more complicated systems and, with reasonable probability, either to life or to something lifelike. Self-reproduction is an accident that only has to happen once. “The operations of probability somehow leave a loophole at this point,” explained von Neumann, “and it is by the process of self-reproduction that they are pierced.”12

  Von Neumann had intended to return to the question of self-reproduction after leaving the AEC. “Toward the end of his life he felt sure enough of himself to engage freely and yet painstakingly in the creation of a possible new mathematical discipline,” says Ulam, “a combinatorial theory of automata and organisms.” The theory would have to be simple enough to be mathematically comprehensible, yet complicated enough to apply to nontrivial examples from the real world. “I do not want to be seriously bothered with the objection that (a) everybody knows that automata can reproduce themselves [and] (b) everybody knows that they cannot,” von Neumann announced.13

  The plan had been to produce, with Ulam as coauthor, a comprehensive treatise comparable to Theory of Games and Economic Behavior, developing a theory of self-reproducing automata with applications to both biology and technology—and the combination of the two regimes. The work was never completed. Ulam was not as disciplined a coauthor as Oskar Morgenstern, and von Neumann’s schedule became even busier after the war. The incomplete manuscript, including a lengthy introduction based on the series of five lectures given by von Neumann at the University of Illinois in 1949, was eventually assembled, with careful editing by Arthur Burks, and published as Theory of Self-Reproducing Automata almost ten years after von Neumann’s death. Some headings from a surviving outline for the first three chapters, sent to Ulam, hint at their thinking at the time:

  1. Wiener!

  3. Turing!

  5. Not Turing!

  6. Boolean algebra

  7. Pitts-McCulloch!

  13. Ulam!

  14. Calling for stronger results

  16. Crystal classes in 2 and 3 dim.

  18. J.B., H.H.G.!

  20. Turing!

  23. Double line trick, etc.

  24. Degeneration (?)

  25. Turing!14

  Our understanding of self-reproduction in biology, and our development of self-reproducing technology, proceeded almost exactly as the proposed theory prescribed. “Wiener!” probably refers to Wiener’s theories of information and communication—expanded upon by Claude Shannon—since the problem of self-reproduction is fundamentally a problem of communication, over a noisy channel, from one generation to the next. “Turing!” refers to the powers of the Universal Turing Machine, and “Not Turing!” refers to the limitations of those powers—and how they might be transcended by living and nonliving things. “Pitts-McCulloch!” refers to Walter Pitts and Warren McCulloch’s 1943 results on the powers—including Turing universality—of what we now call neural nets. “J.B., H.H.G.!” refers to Julian Bigelow and Herman H. Goldstine—who rarely agreed on anything, so this may be a reference to early discussions of the powers of arrays of communicating cells, before disagreement about how to implement this in practice intervened. “Double line trick, etc.” is evocative of the double-helix replication of DNA, and “Degeneration (?)” probably refers to how any enduring system of self-reproduction must depend on error-correcting codes in translating from one generation to the next. “Ulam!” probably refers to Ulam’s interest in the powers of Turing-complete cellular automata, now evidenced by many of the computational processes surrounding us today. The triplicate appearance of “Turing!” reflects how central Turing’s proof of universality was to any theory of self-reproduction, whether applied to mathematics, biology, or machines.
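
  For readers who want something executable, a one-dimensional cellular automaton of the kind Ulam and von Neumann considered can be written in a few lines. The sketch below uses Rule 110, much later shown to be Turing-complete; the width, step count, and initial condition are arbitrary choices for illustration.

```python
# Elementary cellular automaton, Rule 110; parameters are arbitrary choices.
RULE, WIDTH, STEPS = 110, 64, 32
cells = [0] * WIDTH
cells[WIDTH // 2] = 1                 # a single live cell as the seed

for _ in range(STEPS):
    print(''.join('#' if c else '.' for c in cells))
    cells = [                         # each cell looks only at its local neighborhood
        (RULE >> (4 * cells[(i - 1) % WIDTH]
                  + 2 * cells[i]
                  + cells[(i + 1) % WIDTH])) & 1
        for i in range(WIDTH)
    ]
```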

 
