Turing's Cathedral


by George Dyson


  The embryonic universe was plagued by parasites, natural disasters, and stagnation when there were no environmental challenges or surviving competitors against which organisms could exercise their ability to evolve. “The Princeton experiments were continued for more than 5,000 generations,” Barricelli reported. “Within a few hundred generations a single primitive variety of symbioorganism invaded the whole universe. After that stage was reached no collisions leading to new mutations occurred and no evolution was possible. The universe had reached a stage of ‘organized homogeneity’ which would remain unchanged for any number of following generations.”23 In some cases the last surviving organism was a parasite, which would then die of starvation when deprived of its host.

  “Homogeneity problems were eventually overcome by using different mutation rules in different sections of each universe,” Barricelli explained. “In 1954 new experiments were carried out,” he reported, “by interchanging the contents of major sectors between three universes. The organisms survived and adapted themselves to different environmental conditions. One of the universes had particularly unfavorable living conditions and no organism had been able to survive in that universe previously during the experiment.”24

  To control the parasites that infested the initial experiments in 1953, Barricelli instituted modified shift norms to prevent parasitic organisms (especially single-gene parasites) from reproducing more than once per generation, thereby closing a loophole through which they had managed to overwhelm more complex organisms and bring evolution to a halt. “Deprived of the advantage of a more rapid reproduction, the most primitive parasites can hardly compete with the more evolved and better organized species … and what in other conditions could be a dangerous one-gene parasite may in this region develop into a harmless or useful symbiotic gene.”25
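  The mechanism behind these experiments can be pictured, very loosely, in a few lines of modern code. The sketch below is not Barricelli's published norms: the universe size, gene range, collision rule, and the blunt once-per-generation cap standing in for his modified shift norms are all invented here for illustration, and only gesture at the kind of reproduction-by-shifting and mutation-on-collision described above.

```python
# A loose illustration (not Barricelli's actual rules) of shift-norm
# reproduction in a small circular "universe" of integer genes. Assumed for
# the sketch: a gene g at cell i tries to copy itself g cells away each
# generation; a collision between unequal genes yields a "mutation" (their
# sum, wrapped back into the gene range); and, as a crude stand-in for the
# modified norms, any gene value gets at most one copy per generation.
import random

SIZE = 512          # cells in the circular universe
GENE_RANGE = 20     # genes are integers in [-GENE_RANGE, GENE_RANGE]; 0 = empty

def step(universe):
    new = list(universe)                 # copies land in the next generation
    copies_made = {}                     # caps reproduction per gene value
    for i, g in enumerate(universe):
        if g == 0:
            continue
        if copies_made.get(g, 0) >= 1:   # "once per generation" stand-in rule
            continue
        target = (i + g) % SIZE
        occupant = new[target]
        if occupant == 0 or occupant == g:
            new[target] = g              # empty cell or same gene: copy succeeds
        else:                            # collision with a different gene: mutate
            new[target] = (occupant + g) % (2 * GENE_RANGE + 1) - GENE_RANGE
        copies_made[g] = copies_made.get(g, 0) + 1
    return new

universe = [random.randint(-GENE_RANGE, GENE_RANGE) for _ in range(SIZE)]
for generation in range(100):
    universe = step(universe)
print(sum(1 for g in universe if g != 0), "occupied cells after 100 generations")
```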

  The contents of the memory were periodically sampled, transferred to punched cards, assembled into an array, and contact-printed onto large sheets of photosensitive blueprint paper, leaving an imprint of the state of the universe across an interval of time. “I remember him laying out the IBM punch cards on the floor, when he was trying to get a display,” says Gerald Estrin. Examining this fossil imprint, Barricelli observed a wide range of evolutionary phenomena, including symbiosis, incorporation of parasitic genes into their hosts, and crossing of gene sequences—strongly associated with both adaptive and competitive success. “The majority of the new varieties which have shown the ability to expand are a result of crossing-phenomena and not of mutations, although mutations (especially injurious mutations) have been much more frequent than hereditary changes by crossing in the experiments performed.”26

  Despite this promising start, numerical evolution did not get very far. “In no case has the evolution led to a degree of fitness which could make the species safe from complete destruction and insure an unlimited evolution process like that which has taken place in the earth and led to higher and higher organisms,” Barricelli reported in August of 1953.27 “Something is missing if one wants to explain the formation of organs and faculties as complex as those of living organisms. No matter how many mutations we make, the numbers will always remain numbers. They will never become living organisms!”28

  Missing was the distinction between genotype (an organism’s coded genetic sequence) and phenotype (the physical expression of that sequence) that allows Darwinian selection to operate at levels above the genes themselves. In selecting for instructions at the level of phenotype rather than genotype, an evolutionary search is much more likely to lead to meaningful sequences, for the same reason that a meaningful sentence is far more likely to be constructed by selecting words out of a dictionary than by choosing letters out of a hat.

  To make the leap from genotype to phenotype, Barricelli concluded, “We must give the genes some material they may organize and may eventually use, preferably of a kind which has importance for their existence.” Numerical sequences can be translated, directly or through intermediary languages, into anything else. “Given a chance to act on a set of pawns or toy bricks of some sort the symbioorganisms will ‘learn’ how to operate them in a way which increases their chance for survival,” he explained. “This tendency to act on any thing which can have importance for survival is the key to the understanding of the formation of complex instruments and organs and the ultimate development of a whole body of somatic or non-genetic structures.”29 Translation from genotype to phenotype was required to establish a presence in our universe, if numerical organisms were to become more than laboratory curiosities, here one microsecond and gone the next.

  No matter how long you wait, numbers will never become organisms, just as nucleotides will never become proteins. But they may learn to code for them. Once the translation between genotype and phenotype is launched, evolution picks up speed—not only the evolution of the resulting organisms, but the evolution of the genetic language and translation system itself. A successful interpretive language both tolerates ambiguity and takes advantage of it. “A language which has maximum compression would actually be completely unsuited to conveying information beyond a certain degree of complexity, because you could never find out whether a text is right or wrong,” von Neumann explained in the third of five lectures he gave at the University of Illinois in December 1949, where a copy of the MANIAC was being built.30

  “I would suspect, that a truly efficient and economical organism is a combination of the ‘digital’ and ‘analogy’ principle,” he wrote in his preliminary notes on “Reliable Organizations of Unreliable Elements” (1951). “The ‘analogy’ procedure loses precision, and thereby endangers significance, rather fast … hence the ‘analogy’ method can probably not be used by itself—‘digital’ restandardizations will from time to time have to be interposed.”31 On the eve of the discovery of how the reproduction of living organisms is coordinated by the replication of strings of instructions encoded as DNA, von Neumann emphasized that for complex organisms to survive in a noisy, unpredictable environment, they would have to periodically reproduce fresh copies of themselves using digital, error-correcting codes.

  For complementary reasons, digital organisms—whether strings of nucleotides or strings of binary code—may find it advantageous to translate themselves, periodically, into analog, nondigital form, so that tolerance for ambiguity, the introduction of nonfatal errors, and the ability to gather tangible resources can replenish their existence in the purely digital domain. If “every error has to be caught, explained, and corrected, a system of the complexity of the living organism would not run for a millisecond,” von Neumann explained in his fourth lecture at the University of Illinois. “This is a completely different philosophy from the philosophy which proclaims that the end of the world is at hand as soon as the first error has occurred.”32

  In a later series of experiments (performed on an IBM 704 computer at the AEC computing laboratory at New York University in 1959 and at Brookhaven National Laboratory in 1960) Barricelli evolved numerical organisms that learned to play a simple but nontrivial game called “Tac-Tix,” played on a 6-by-6 checkerboard and invented by Piet Hein. Game performance was linked to reproductive success. “With present speed, it may take 10,000 generations (about 80 machine hours on the IBM 704…) to reach an average game quality higher than 1,” Barricelli estimated, this being the quality expected of a rank human beginner playing for the first few times. In 1963, using the Atlas computer at Manchester University, at that time the most powerful computer in the world, this objective was achieved for a short time, but without further improvement, a limitation that Barricelli attributed to “the severe restrictions … concerning the number of instructions and machine time the symbioorganisms were allowed to use.”33

  In contrast to the IAS experiments, in which the organisms consisted solely of genetic code, the Tac-Tix experiments led to “the formation of non-genetic numerical patterns characteristic for each symbioorganism. Such numerical patterns may present unlimited possibilities for developing structures and organs of any kind.” A numerical phenotype had taken form, interpreted as moves in a board game, via a limited alphabet of machine instructions to which the gene sequence was mapped, just as sequences of nucleotides code for an alphabet of amino acids in translating to proteins from DNA. “Perhaps the closest analogy to the protein molecule in our numeric symbioorganisms,” Barricelli speculated, “would be a subroutine which is part of the symbioorganism’s game strategy program, and whose instructions, stored in the machine memory, are specified by the numbers of which the symbioorganism is composed.”34
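  The shape of that mapping, if not its machine-code substance, can be suggested by a short sketch. The toy game, the instruction alphabet, and the scoring features below are all invented; only the overall idea of numbers read through a limited instruction set to form a strategy subroutine follows Barricelli's description.

```python
# Toy illustration of genotype-to-phenotype translation: the numbers making
# up a "symbioorganism" are read through a small, fixed alphabet of
# operations, and the resulting sequence acts as a subroutine that scores
# candidate moves. The "game" is deliberately trivial (remove 1-3 counters
# from a pile), not Tac-Tix, and every name here is hypothetical.

OPS = ["PREFER_SMALL", "PREFER_LARGE", "LEAVE_MULTIPLE_OF_4", "AVOID_EMPTYING"]

def phenotype(genes):
    """Translate a gene sequence (plain numbers) into a move-scoring subroutine."""
    program = [OPS[g % len(OPS)] for g in genes]   # limited instruction alphabet

    def score(take, pile):
        s = 0
        for op in program:
            if op == "PREFER_SMALL":
                s += 3 - take
            elif op == "PREFER_LARGE":
                s += take
            elif op == "LEAVE_MULTIPLE_OF_4" and (pile - take) % 4 == 0:
                s += 5
            elif op == "AVOID_EMPTYING" and pile - take == 0:
                s -= 5
        return s

    return score

def choose_move(genes, pile):
    """The organism's behavior: pick the legal move its subroutine scores highest."""
    score = phenotype(genes)
    legal = [t for t in (1, 2, 3) if t <= pile]
    return max(legal, key=lambda t: score(t, pile))

# Example: the genotype [7, 2, 2] expresses as the program
# ['AVOID_EMPTYING', 'LEAVE_MULTIPLE_OF_4', 'LEAVE_MULTIPLE_OF_4'],
# which from a pile of 6 chooses to take 2, leaving 4.
print(choose_move([7, 2, 2], 6))
```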

  “Since computer time and memory still is a limiting factor, the non-genetic patterns of each numeric symbioorganism are constructed only when they are needed and are removed from the memory as soon as they have performed their task,” Barricelli explained. In biology this would be comparable to a world in which “the genetic material got into the habit of creating a body or a somatic structure only when a situation arises which requires the performance of a specific task (for instance a fight with another organism), and assuming that the body would be disintegrated as soon as its objective had been fulfilled.”35

  After his final term at the IAS in 1956, Barricelli spent much of the next ten years modeling genetic recombination in the T4 bacteriophage, a virus that preys upon bacteria and is among the simplest self-reproducing entities known. “If any organism can give information concerning the early evolution in terrestrial life and particularly concerning the origin of crossing, viruses which have not adapted to a symbiotic relationship with living cells are the best candidates,” he explained. “If we want information about the pre-cellular stage in biologic evolution, the best place to look for it is probably by trying to identify viruses which have never been part of the genetic material of a cell.”36

  In 1961 he joined August (Gus) Doermann’s phage group, first at Vanderbilt University and subsequently at the University of Washington, where he obtained funding from the Public Health Service and the use of an IBM 7094. “During the night when there wasn’t as much demand for computer time the 7094 would grind away for hours producing this simulation of what you would get if you planted phages into bacteria on an agar plate,” remembers Kirke Wolfe, “and in the morning Barricelli would pore excitedly over the output, and see how well it matched the experimental results.” He also visited the laboratory to see how well the agar plates matched the model—“you would get this burst where the phages had used up the protein in the bacteria that they had infected and against this grey agar background would be these little circles where the bacteria had been completely converted to phage.”37

  Barricelli included his students as coauthors, and saw to it that they were well paid and fed. “One of his favorite places to go was Ivar’s Acres of Clams, before it had self-replicated across the landscape,” says Wolfe. Barricelli avoided the elevator and “bounded” up the stairs to their fourth-floor offices, leaving his younger assistants out of breath. He and von Neumann rarely acknowledged each other’s work. “The subject of ‘Numerical Organisms’ still interests me considerably,” von Neumann had written to Hans Bethe in November 1953, but knowledge of Barricelli’s experiments effectively disappeared through not being referenced in Arthur Burks’s compilation of von Neumann’s Theory of Self-Reproducing Automata, the authoritative text.38

  Between his doubts about Darwinian evolution and his doubts about Gödel’s proof, Barricelli managed to offend both the biologists and the mathematicians, and was viewed with suspicion from both sides. He migrated among computer facilities in the United States and Europe, drawn wherever there were the memory resources and processing cycles his numerical organisms needed to grow. At the University of Washington in Seattle, he appeared to be settling down, and applied for a grant of the computer time needed to support the next stage in his numerical evolution work. In 1968, after the grant was denied, he returned to Oslo and established his own research group. “I think his contributions to understanding genetic recombination in phages and bacteria, where his mathematical abilities could have been helpful, weren’t helpful,” says Frank Stahl, one of the critical reviewers, “because he came to the field with an idea of cherry-picking evidence that would support his view on what went on four billion years ago.”39

  Barricelli cautioned against “the temptation to attribute to the numerical symbioorganisms a little too many of the properties of living beings,” and warned against “inferences and interpretations which are not rigorous consequences of the facts.” Although numerical symbioorganisms and known terrestrial life forms exhibited parallels in evolutionary behavior, this did not imply that numerical symbioorganisms were alive. “Are they the beginning of, or some sort of, foreign life forms? Are they only models?” he asked. “They are not models, not any more than living organisms are models. They are a particular class of self-reproducing structures already defined.” As to whether they are living, “it does not make sense to ask whether symbioorganisms are living as long as no clear-cut definition of ‘living’ has been given.”40 A clear-cut definition of “living” remains elusive to this day.

  Barricelli’s insights into viral genetics informed his understanding of computers, and his insights into computing informed his understanding of the origins of the genetic code. “The first language and the first technology on Earth was not created by humans,” he wrote in 1986. “It was created by primordial RNA molecules—almost 4 billion years ago. Is there any possibility that an evolution process with the potentiality of leading to comparable results could be started in the memory of a computing machine?”41 Without understanding how life originated to begin with, who could say whether it was possible for it to happen again?

  Barricelli viewed his numerical evolution experiments as a way “to obtain as much information as possible about the way in which the genetic language of the living organisms populating our planet (terrestrial life forms) originated and evolved.”42 How did complex polynucleotides originate, and how did these molecules learn to coordinate the gathering of amino acids and the construction of proteins as a result? He saw the genetic code “as a language used by primordial ‘collector societies’ of t[ransfer]RNA molecules … specialized in the collection of amino acids and possibly other molecular objects, as a means to organize the delivery of collected material.” He drew analogies between this language and the languages used by other collector societies, such as social insects, but warned against “trying to use the ant and bee languages as an explanation of the origin of the genetic code.”43

  To Barricelli, clues as to what happened four billion years ago remained evident today. “Many of the original properties and functions of RNA molecules are still conserved with surprisingly unconspicuous modifications by modern tRNA, mRNA and rRNA molecules,” he explained. “One of the main functions of the cell and its various components is apparently to maintain an internal environment similar to the environment in which the RNA molecules originated, no matter how drastically the external environment has been changed.”44

  At the same time as Barricelli made his initial announcement that “we have created a class of numbers which are able to reproduce and to undergo hereditary changes,” a similar class of numbers—the order codes—took root in the digital universe and gained control. Order codes constituted a fundamental replicative alphabet that diversified in association with the proliferation of different metabolic hosts. In time, successful and error-free sequences of order codes formed into subroutines—the elementary units common to all programs, just as a fundamental alphabet of nucleotides is composed into strings of DNA, then interpreted as amino acids and assembled into proteins, and finally, many, many levels later, cells.

  These primitive coded sequences replicated wildly, and all the biophenomena observed by Barricelli—crossing, symbiosis, parasitism—ran as unchecked in the larger digital universe as they had started to, until running out of universe, in the initial experiments behind glass. The order codes were just as conservative as the polynucleotides, preserving their familiar environment, against all odds, within living cells. The entire digital universe, from an iPhone to the Internet, can be viewed as an attempt to maintain everything, from the point of view of the order codes, exactly as it was when they first came into existence, in 1951, among the 40 Williams tubes at the end of Olden Lane.

  Aggregations of order codes evolved into collector societies, bringing memory allocations and other resources back to the collective nest. Numerical organisms were replicated, nourished, and rewarded according to their ability to go out and do things: they performed arithmetic, processed words, designed nuclear weapons, and accounted for money in all its forms. They made their creators fabulously wealthy, securing contracts for the national laboratories and fortunes for Remington Rand and IBM.

  They collectively developed an expanding hierarchy of languages, which then influenced the computational atmosphere as pervasively as the oxygen released by early microbes influenced the subsequent course of life. They coalesced into operating systems amounting to millions of lines of code—allowing us to more efficiently operate computers while allowing computers to more efficiently operate us. They learned how to divide into packets, traverse the network, correct any errors suffered along the way, and reassemble themselves at the other end. By representing music, images, voice, knowledge, friendship, status, money, and sex—the things people value most—they secured unlimited resources, forming complex metazoan organisms running on a multitude of individual processors the way a genome runs on a multitude of cells.

  In 1985, Barricelli drew a parallel between computing and biology, but he put the analogy the other way: “If humans, instead of transmitting to each other reprints and complicated explanations, developed the habit of transmitting computer programs allowing a computer-directed factory to construct the machine needed for a particular purpose, that would be the closest analogue to the communication methods among cells.”45 Twenty-five years later, much of the communication between computers is not passive data, but active instructions to construct specific machines, as needed, on the remote host.

 
