Turing's Cathedral


by George Dyson


  With our cooperation, self-reproducing numbers are exercising increasingly detailed and far-reaching control over the conditions in our universe that make life more comfortable in theirs. The barriers between their universe and our universe are breaking down completely as digital computers begin to read and write directly to DNA.

  We speak of reading genomes—three billion base pairs at a time—but no human mind can absorb these unabridged texts. It is computers that are reading genomes, and beginning to code for proteins by writing executable nucleotide sequences and inserting them into cells. The translation between sequences of nucleotides and sequences of bits is direct, two-way, and conducted in languages that human beings are unable to comprehend.
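  The two-way character of that translation is easy to make concrete. Here is a minimal sketch in Python, assuming an arbitrary pairing of the four bases to two-bit values (the pairing is an illustrative choice, not a standard encoding): any nucleotide sequence maps losslessly to a bit string and back.

```python
# Minimal sketch of a two-way translation between nucleotide and bit
# sequences. The pairing of bases to two-bit values below is an arbitrary
# choice made for illustration, not a standard encoding.

ENCODE = {"A": "00", "C": "01", "G": "10", "T": "11"}
DECODE = {bits: base for base, bits in ENCODE.items()}

def to_bits(sequence: str) -> str:
    """Translate a nucleotide sequence into a bit string."""
    return "".join(ENCODE[base] for base in sequence)

def to_bases(bits: str) -> str:
    """Translate a bit string back into a nucleotide sequence."""
    return "".join(DECODE[bits[i:i + 2]] for i in range(0, len(bits), 2))

assert to_bases(to_bits("GATTACA")) == "GATTACA"  # round trip is lossless
```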

  Barricelli believed in intelligent design, but the intelligence was bottom-up. “Even though biologic evolution is based on random mutations, crossing and selection, it is not a blind trial-and-error process,” he explained in a later retrospective of his numerical evolution work. “The hereditary material of all individuals composing a species is organized by a rigorous pattern of hereditary rules into a collective intelligence mechanism whose function is to assure maximum speed and efficiency in the solution of all sorts of new problems.… Judging by the achievements in the biological world, that is quite intelligent indeed.”46

  “The notion that no intelligence is involved in biological evolution may prove to be as far from reality as any interpretation could be,” he argued in 1963.

  When we submit a human or any other animal for that matter to an intelligence test, it would be rather unusual to claim that the subject is unintelligent on the grounds that no intelligence is required to do the job any single neuron or synapse in its brain is doing.

  We are all agreed upon the fact that no intelligence is required in order to die when an individual is unable to survive or in order not to reproduce when an individual is unfit to reproduce. But to hold this as an argument against the existence of an intelligence behind the achievements in biological evolution may prove to be one of the most spectacular examples of the kind of misunderstandings which may arise before two alien forms of intelligence become aware of one another.47

  Barricelli claimed to detect faint traces of this intelligence in the behavior of pure, self-reproducing numbers, just as viruses were first detected by biologists examining fluids from which they had filtered out all previously identified self-replicating forms. His final paper, published in 1987, was titled “Suggestions for the Starting of Numeric Evolution Processes Intended to Evolve Symbioorganisms Capable of Developing a Language and Technology of Their Own.” He saw natural and artificial intelligence as collective phenomena, with both biological and numerical evolution constituting “a powerful intelligence mechanism (or genetic brain) that, in many ways, can be comparable or superior to the human brain as far as the ability of solving problems is concerned.” He drew a parallel between the evolution of computer code and the evolution of gene sequences, with the development of interpretive languages leading to a process that grows more intelligent and complex. “Whether there are ways to communicate with genetic brains of different symbioorganisms, for example by using their own genetic language, is a question only the future can answer,” he noted.48

  Multiple levels of translation separate the languages now used by computer programmers from the machine language by which the instructions are carried out, just as many levels of interpretation lie between a coded sequence of nucleotides and its ultimate expression by a living cell. Communication between sequences stored in DNA and sequences stored in digital memory, in contrast, is more direct. The machine language of the gene and the machine language of the computer have more in common with each other than either of them has with us.

  The advent of computer-mediated communication of genetic sequences is a violation of the orthodox neo-Darwinian doctrine that genetic information is acquired by inheritance from one’s ancestors, and nowhere else. Lateral gene transfer, however, has long been business as usual for terrestrial life. Viruses are constantly inserting foreign DNA sequences into their hosts. Despite obvious dangers, most cells have maintained the ability to read gene sequences transferred from outside the cell. This vulnerability is exploited by malevolent viruses—so why maintain a capability that has such costs?

  One reason is to facilitate the acquisition of new, useful genes that would otherwise remain the property of someone else. “The power of horizontal gene transfer is so great that it is a major puzzle to understand why it would be that the eukaryotic world would turn its back on such a wonderful source of genetic novelty and innovation,” Carl Woese and Nigel Goldenfeld, of the University of Illinois, explained sixty years after von Neumann’s lectures on self-reproduction in 1949. “The exciting answer, bursting through decades of dogmatic prejudice, is that it hasn’t. There are now compelling documentations of horizontal gene transfer in eukaryotes, not only in plants, protists, and fungi, but in animals (including mammals) as well.”49

  When we “sequence” a genome, we reconstruct it, bit by bit, from fragmentary parts. Life has been doing this all along. Backup copies of critical gene sequences are distributed across the viral cloud. “Microbes absorb and discard genes as needed, in response to their environment … which casts doubt on the validity of the concept of a ‘species’ when extended into the microbial realm,” Goldenfeld and Woese observed in 2007, noting “a remarkable ability to reconstruct their genomes in the face of dire environmental stresses, and that in some cases their collective interactions with viruses may be crucial to this.”50

  Horizontal gene transfer is exploited by drug-resistant pathogens—“We declared war against the microbes, and we lost,” adds Goldenfeld—and genetic engineers. But who is exploiting whom? The genomics revolution is being driven by our ability to store, replicate, and manipulate genetic information outside the cell. Biology has been doing this from the start. Life evolved, so far, by making use of the viral cloud as a source of backup copies and a way to rapidly exchange genetic code. Life may be better adapted to the digital universe than we think. “Cultural patterns are in a sense a solution of the problem of having a form of inheritance which doesn’t require killing of individuals in order to evolve,” observed Barricelli in 1966. We have already outsourced much of our cultural inheritance to the Internet, and are outsourcing our genetic inheritance as well. “The survival of the fittest is a slow method for measuring advantages,” Turing argued in 1950. “The experimenter, by the exercise of intelligence, should be able to speed it up.”51

  The entrepreneurial genomicist George Church recently announced, concerning biotechnology’s success in the laboratory, “We are able to program these cells as if they were an extension of the computer.”52 To which life, with three billion years of success in the wild, might answer, “We are able to program these computers as if they were an extension of the cell.”

  The origin of species was not the origin of evolution, and the end of species will not be its end.

  And the evening and the morning were the fifth day.

  THIRTEEN

  Turing’s Cathedral

  In attempting to construct such machines we should not be irreverently usurping His power of creating souls, any more than we are in the procreation of children: rather we are, in either case, instruments of His will providing mansions for the souls that He creates.

  —Alan Turing, 1950

  THE HISTORY OF DIGITAL computing can be divided into an Old Testament whose prophets, led by Leibniz, supplied the logic, and a New Testament whose prophets, led by von Neumann, built the machines. Alan Turing arrived in between.

  Twenty-four years old, Turing boarded the Cunard White Star Liner Berengaria bound for New York on September 23, 1936. His mother, Sara, accompanied him to Southampton to say farewell, carrying his prized possession, a heavy brass sextant in a wooden case, from the train to the ship. “Of all the ungainly things to hold,” she remembers, “commend me to an old-fashioned sextant case.”1

  John von Neumann, whom Turing would be joining in Fine Hall at Princeton for the next two years, always booked a first-class cabin for the voyage between Southampton and New York. Turing booked himself into steerage. “There is mighty little room for putting things in one’s cabin, but nothing else that worries me,” he reported to his mother on September 28. “The mass of canaille with which one is herded can easily be ignored.”2

  Turing’s arrival in Princeton was followed, five days later, by the proofs of his “On Computable Numbers, with an Application to the Entscheidungsproblem.” These thirty-five pages would lead the way from logic to machines.

  Alan Mathison Turing was born at Warrington Lodge, London, on June 23, 1912, to Julius Mathison Turing, who worked for the Indian Civil Service, and Ethel Sara Turing (née Stoney), whose family included George Johnstone Stoney, who named the electron, in advance of its 1897 discovery, in 1891. “Alan was interested in figures—not with any mathematical association—before he could read,” says his mother, who adds that in 1915, at the age of three, “as one of the wooden sailors in his toy boat had got broken he planted the arms and legs in the garden, confident that they would grow.”3

  His disarming curiosity lent young Alan “an extraordinary gift for winning the affection of maids and landladies on our various travels,” his mother notes. He was inventive from the start. “For his Christmas present, 1924, we set him up with crucibles, retorts, chemicals, etc., purchased from a French chemist,” she adds. He was nicknamed “the alchemist” in boarding school. “He spends a great deal of time in investigations in advanced mathematics to the neglect of his elementary work,” his housemaster at Sherborne reported in 1927, adding that “I don’t care to find him boiling heaven knows what witches’ brew by the aid of two guttering candles on a naked windowsill.”4

  The Berengaria landed in New York on September 29. After clearing customs, and paying too much for a taxi, Turing made his way to the Graduate College in Princeton, where he would reside while pursuing his PhD. Von Neumann, who had arrived in Princeton six years earlier, had taken wholeheartedly to life in the United States. Turing never quite fit. “Americans are the most insufferable and insensitive creatures you could wish,” he had reported to his mother while still on board the ship.5

  Princeton University had spared no expense to duplicate the architecture of Turing’s Cambridge, applying the full resources of the twentieth century to making much of the campus, especially the new Graduate College, appear as if it had been built in the thirteenth. The university chapel was a replica of the chapel at King’s College in Cambridge, and a series of new dormitories were “Collegiate Gothic” interpretations of rooms in Cambridge and Oxford—but with showers and central heating. “Beyond the way they speak there is only one (no two!) feature[s] of American life which I find really tiresome, the impossibility of getting a bath in the ordinary sense, and their ideas on room temperature,” Turing complained after he had settled in.6

  The Graduate College, set on higher ground between the Springdale golf course and Olden Farm, incorporated stones that had been brought from Cambridge and Oxford in 1913. Rising 173 feet above its residential courtyard was the Cleveland Tower, housing a carillon that spans five octaves, commissioned by the class of 1892—who had provided for its being played at regular intervals, except during examinations for the PhD. The largest bell weighs 12,880 pounds and sounds lower G. The Graduate College dining hall, with stained-glass windows, vaulted ceilings, and a pipe organ, was constructed by William Cooper Procter, grandson of the cofounder of Procter and Gamble, who established the Jane Eliza Procter and William Cooper Procter Fellowships to ensure that at least one scholar each from Cambridge, Oxford, and Paris, “in reasonably good health, possessing high character, excellent education and exceptional scholarly promise,” was in residence each year. A carved likeness of their benefactor, holding a laboratory beaker symbolizing the source of their fellowships, looks down from the end of one of the oak roof beams.

  “There seems to be quite a traffic-jam on the road to Princeton,” von Neumann had written to Oswald Veblen from Cambridge in 1935, where he had been visiting for the spring term. Von Neumann singled out Turing (spelling his name “Touring”), who “seems to be strongly supported by the Cambridge mathematicians for the Procter fellowship (I think that he is quite promising); and one or two more, whose names I forgot.”7 Turing, whose first paper, “Equivalence of Left and Right Almost Periodicity,” was a strengthening of one of von Neumann’s own results, failed to secure the Procter fellowship on the first attempt, but did so for his second year.

  During his stay in Cambridge in 1935, von Neumann became friends with the combinatorial topologist Maxwell H. A. Newman, whom he described to Veblen as “very attractive both from the topological and from the human side.”8 Newman was the son of a Polish German Jew, Herman Alexander Neumann, who had immigrated to England in 1879 and changed his name to Newman in 1916. Max Newman, Turing’s mentor, was invited by von Neumann to the Institute, arriving in Princeton in September 1937 for a full academic year. His wife, Lyn, reported to her family back in England that “Max has no job here. He simply sits at home doing anything he likes.”9 He spent most of his time on a proof of Poincaré’s conjecture, which later turned out to have a fatal flaw. Lyn, who became a close friend to Turing, returned to Princeton with the Newmans’ two children during the war.

  Turing, like von Neumann, grew up under the influence of David Hilbert, whose ambitious program of formalization set the course for mathematics between World War I and World War II. The Hilbert school believed that if a proposition could be articulated within the language of mathematics, then either its proof or its refutation could be reached, by logic alone, without any intervening leaps of faith. In 1928, Hilbert posed three questions by which to determine whether an all-encompassing mathematical universe could be defined by a finitary set of rules: Are these foundations consistent (so that a statement and its contradiction cannot ever both be proved)? Are they complete (so that all true statements can be proved within the system itself)? Does there exist a decision procedure that, given any statement expressed in the given language, will always produce either a finite proof of that statement or else a definite construction that refutes it, but never both? Gödel’s incompleteness theorems of 1931 brought Hilbert’s program to a halt. No consistent mathematical system sufficient for dealing with ordinary arithmetic can establish its own consistency, nor can it be complete.

  Hilbert’s remaining question—the Entscheidungsproblem, or “decision problem”—of whether any precisely mechanical procedure could distinguish provable from disprovable statements within a given system (defined, say, by the axioms of elementary logic or arithmetic) remained unanswered. Even asking the question required the intuitive notion of a mechanical procedure to be mathematically defined. In the spring of 1935—at the time of von Neumann’s visit to Cambridge—Turing was attending Max Newman’s lectures on the foundations of mathematics when the Entscheidungsproblem first attracted his attention. Hilbert’s challenge aroused Turing’s instinct that mathematical questions resistant to strictly mechanical procedures could be proved to exist.

  Turing’s argument was straightforward—as long as you threw out all assumptions and started fresh. “One of the facets of extreme originality is not to regard as obvious the things that lesser minds call obvious,” says I. J. (Jack) Good, who served as an assistant to Turing (then referred to as “Prof”) during World War II. Originality can be more important than intelligence, and according to Good, Turing constituted proof. “Henri Poincaré did quite badly at an intelligence test, and Prof also was only about halfway up the undergraduate scale when he took such a test.” Had Turing more closely followed the work of Alonzo Church or Emil Post, who anticipated his results, his interest might have taken a less original form. “The way in which he uses concrete objects such as exercise books and printer’s ink to illustrate and control the argument is typical of his insight and originality,” says colleague Robin Gandy. “Let us praise the uncluttered mind.”10

  A function is computable, over the domain of the natural numbers (0, 1, 2, 3…), if there exists a finite sequence of instructions (or algorithm) that prescribes exactly how to list the value f(0) and, for any natural number n, the value f(n + 1). Turing approached the question of computable functions in the opposite direction, from the point of view of the numbers produced as a result. “According to my definition,” he explained, “a number is computable if its decimal can be written down by a machine.”11
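  To make the definition concrete, here is a minimal sketch in Python of a “machine” that writes down the decimal of a computable number, here 1/7 by ordinary long division; the choice of 1/7 and the generator form are illustrative assumptions, not Turing’s own construction.

```python
# Minimal sketch: a "machine" that writes down the decimal digits of a
# computable number -- here 1/7, by ordinary long division. Any number whose
# digits can be listed by such a finite rule is computable in Turing's sense.
from itertools import islice

def decimal_digits(numerator: int, denominator: int):
    """Yield the decimal digits of numerator/denominator, one at a time."""
    remainder = numerator % denominator
    while True:
        remainder *= 10
        yield remainder // denominator
        remainder %= denominator

print("".join(str(d) for d in islice(decimal_digits(1, 7), 12)))  # 142857142857
```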

  Turing began with the informal idea of a computer—which in 1935 meant not a calculating machine but a human being, equipped with pencil, paper, and time. He then substituted unambiguous components until nothing but a formal definition of “computable” remained. Turing’s machine (which he termed an LCM, or Logical Computing Machine) thus consisted of a black box (as simple as a typewriter or as complicated as a human being) able to read and write a finite alphabet of symbols to and from a finite but unbounded length of paper tape—and capable of changing its own “m-configuration,” or “state of mind.”

  “We may compare a man in the process of computing a real number to a machine which is only capable of a finite number of conditions … which will be called ‘m-configurations,’ ” Turing wrote.

  The machine is supplied with a “tape” (the analogue of paper) running through it, and divided into sections (called “squares”) each capable of bearing a “symbol.” At any moment there is just one square … which is “in the machine.”…However, by altering its m-configuration the machine can effectively remember some of the symbols which it has “seen.”…In some of the configurations in which the scanned square is blank (i.e., bears no symbol) the machine writes down a new symbol on the scanned square; in other configurations it erases the scanned symbol. The machine may also change the square which is being scanned, but only by shifting it one place to right or left. In addition to any of these operations the m-configuration may be changed.12
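  A minimal sketch of such a machine in Python follows. The table, which prints 0 and 1 on alternate squares forever, is modeled on the first example machine in “On Computable Numbers”; the dictionary encoding of the table and the state names are illustrative choices, not Turing’s notation.

```python
# Minimal sketch of Turing's Logical Computing Machine: a finite table of
# m-configurations, a tape of squares, and one scanned square. The table
# below prints 0 1 0 1 ... on alternate squares, after the first example
# in "On Computable Numbers"; its encoding here is an illustrative choice.
from collections import defaultdict

# (m-configuration, scanned symbol) -> (symbol to write or None, move, next m-configuration)
TABLE = {
    ("b", None): ("0", "R", "c"),
    ("c", None): (None, "R", "e"),
    ("e", None): ("1", "R", "f"),
    ("f", None): (None, "R", "b"),
}

def run(table, start, steps):
    """Run the machine for a fixed number of steps and return the tape."""
    tape = defaultdict(lambda: None)  # finite but unbounded tape of squares
    position, state = 0, start
    for _ in range(steps):
        write, move, next_state = table[(state, tape[position])]
        if write is not None:
            tape[position] = write          # print a symbol on the scanned square
        position += 1 if move == "R" else -1  # shift one place right or left
        state = next_state                   # change the m-configuration
    return [tape[i] for i in range(min(tape), max(tape) + 1)]

print(run(TABLE, "b", 8))  # ['0', None, '1', None, '0', None, '1', None]
```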

 
