
Turing's Cathedral


by George Dyson


  In March he entered Walter Reed Hospital, where he spent the remaining eleven months. “He discussed his illness with the doctors in such a matter-of-fact way and with such a wealth of medical knowledge that he pushed them into telling him the entire truth—which was very grim,” Klári reported. He received a constant stream of visitors, and was placed in the same wing as the Eisenhower Suite. Air force colonel Vincent Ford was assigned, with several airmen under him, to assist full-time. Lewis Strauss would later recall “the extraordinary picture, of sitting beside the bed of this man, in his f[if]ties, who had been an immigrant, and there surrounding him, were the Secretary of Defense, the Deputy Secretary of Defense, the Secretaries of Air, Army, Navy, and the Chiefs of Staff.”24

  His mental faculties deteriorated, bit by bit. “He wanted somebody to talk with him,” says Julian Bigelow, “and Klári who I think knew me better than she knew anybody else, asked me to go see him at the Walter Reed Hospital. So I went every weekend for almost a year.” Strauss obtained a personal service contract from the AEC to pay Bigelow’s travel expenses and, at von Neumann’s request, reinstated Bigelow’s Q-level security clearance (on June 27, 1956). Bigelow visited with von Neumann, read science journals to him, and fielded his questions until the end. “It was a terrible experience to see him going downhill.”25

  Stan Ulam visited whenever he could. “He never complained about pain, but the change in his attitude, his utterances, his relations with Klári, in fact his whole mood at the end of his life were heartbreaking,” he remembers. “At one point he became a strict Catholic. A Benedictine monk visited and talked to him. Later he asked for a Jesuit. It was obvious that there was a great gap between what he would discuss verbally and logically with others, and what his inner thoughts and worries about himself were.” Von Neumann’s scientific curiosity and his memory were the last things he let go. “A few days before he died,” adds Ulam, “I was reading to him in Greek from his worn copy of Thucydides a story he liked especially about the Athenians’ attack on Melos, and also the speech of Pericles. He remembered enough to correct an occasional mistake or mispronunciation on my part.”26

  Marina von Neumann was twenty-one years old, about to get married, and at the beginning of her own career. Her father “clearly realized that the illness had gone to his brain and that he could no longer think, and he asked me to test him on really simple arithmetic problems, like seven plus four, and I did this for a few minutes, and then I couldn’t take it anymore; I left the room,” she remembers, overcome by “the mental anguish of recognizing that that by which he defined himself had slipped away.”27

  “I once asked him,” she adds, “when he knew he was dying, and was very upset, that ‘you contemplate with equanimity eliminating millions of people, yet you cannot deal with your own death.’ And he said, ‘That’s entirely different.’ ” Nicholas Vonneumann believes that his brother asked for a Catholic priest because he wanted someone he could discuss the classics with. “With our background it would have been inconceivable to turn overnight into a devout Catholic,” he says.28

  “I don’t believe that for a minute,” Marina counters. “My father told me, in so many words, once, that Catholicism was a very tough religion to live in but it was the only one to die in. And in some part of his brain he really hoped that it might guarantee some kind of personal immortality. That was at war with other parts of his brain, but I’m sure he had Pascal’s wager in mind.” The sudden conversion was unsettling to Klári, the Ulams, and Lewis Strauss. “The tragedy of Johnny continues to affect me very strongly,” Ulam wrote to Strauss on December 21, 1956. “I am also deeply perturbed about the religious angle as it developed. Klári … told me about her own and your attempts to moderate anything that might appear in writing about it.”29

  Bigelow “found things beyond communication” when he visited on December 27–28. “Before his death he lost the will or capacity to speak,” Klári explains. “To those of us who knew him well he could communicate every wish, will or worry through those marvelously expressive eyes which never lost their luster and vivacity until the very end.”30

  Von Neumann died on February 8, 1957, and was buried in Princeton on February 12. His colleagues at the Institute ordered (for “about $15”) a flat arrangement of daffodils to be laid on the grave. After a brief Catholic service, the graveside eulogy was delivered by Lewis Strauss. A detailed memorial was delivered by Stan Ulam in the Bulletin of the American Mathematical Society the following spring. Ulam was now left alone to witness the revolutions in both biology and computing that von Neumann had launched but would not see fulfilled. “He died so prematurely, seeing the promised land but hardly entering it,” Ulam wrote in 1976.31

  The remaining Electronic Computer Project staff were scattered to industry, to the national laboratories, and among a growing number of university computer science departments, where derivatives of the IAS machine were being built. Julian Bigelow was determined to stay put. Although Marston Morse had apologized, in the end, for “the conclusion of my mathematical colleagues with regard to the computer,” the mathematicians never changed their mind about engineers. “There really was a caste system,” Morris Rubinoff remembers. “You could separate out different types of members and different types of full members on the basis of their willingness to engage in conversation or even associate socially with the engineers.”32

  Bigelow received job offers from UCLA, RAND, NYU, RCA, the University of Michigan, Hughes Aircraft, the Defense Mapping Agency, and even the Albert Einstein College of Medicine—all of which he refused. “Julian was a man who would take his soldering iron in there and just do it,” says Martin Davis. “He would have been much better off if he had never got that tenure [at IAS]. He would have got a job in industry, where he really would have flourished.”33 The Institute could not force him to resign, but they refused to increase his salary.

  He survived on $9,000 a year, supplemented with occasional consulting fees, while raising three children and later taking care of his wife, Mary, who became gravely ill. Klári suggested he be appointed editor of von Neumann’s unpublished papers on computing and automata, but nothing came of this. Bigelow published little over the next forty years. Although he remained the most direct link to von Neumann’s unfinished thoughts about the future of computing, these ideas, already attenuated by von Neumann’s untimely death and refusal to publish incomplete work, were silenced further by Bigelow’s exile at the IAS.

  Bigelow’s insights into the future of computing were more than lag functions reversed to project forward in time. Turing’s one-dimensional model, however powerful, and von Neumann’s two-dimensional implementation, however practical, might be only first steps on the way to something else. “If you actually tried to build a machine the way Turing described it,” Bigelow explained, “you would be spending more time rushing back and forth to find places on a tape than you would doing actual numerical work or computation.”34 The von Neumann model might turn out to be similarly restrictive, and the solutions arrived at between 1946 and 1951 should no more be expected to persist indefinitely than any one particular interpretation of nucleotide sequences would be expected to persist for three billion years. The last thing either Bigelow or von Neumann would have expected was that long after vacuum tubes and cathode-ray tubes disappeared, digital computer architecture would persist largely unchanged from 1946.
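Bigelow's complaint about the tape can be made concrete with a toy head-movement count. This is an illustrative simulation sketch with made-up cell positions, not Turing's formalism:

```python
# Count head moves needed to visit operands far apart on a one-dimensional
# tape, versus the constant-time access a random-access memory would allow.
def tape_moves(start, targets):
    moves, pos = 0, start
    for t in targets:
        moves += abs(t - pos)      # the head must walk cell by cell
        pos = t
    return moves

# Two operands at cells 1000 and 2000, result written back at cell 0:
print(tape_moves(0, [1000, 2000, 0]))   # 4000 moves for a single addition
# A random-access memory reaches each of the 3 cells in one step apiece.
```

Almost all of the 4,000 steps are "rushing back and forth," exactly as Bigelow described; only three of the visits do useful work.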

  Once the IAS computer was completed, it was possible to look back at the compromises that were made to get it running—and Bigelow did. “The design of an electronic calculating machine … turns out to be a frustrating wrestling-match with problems of interconnectability and proximity in three dimensions of space and one dimension of time,” he wrote in 1965, in one of the few glimpses into his thinking in the post-MANIAC years.35 Why have so few of the alternatives received any serious attention for sixty-four years? If you examined the structure of a computer, “you could not possibly tell what it is doing at any moment,” Bigelow explained. “The importance of structure to how logical processes take place is beginning to diminish as the complexity of the logical process increases.” Bigelow then pointed out that the significance of Turing’s 1936 result was “to show in a very important, suggestive way how trivial structure really is.”36 Structure can always be replaced by code.

  “Serial order along the time axis is the customary method of carrying out computations today, although … in forming any model of real world processes for study in a computer, there seems no reason why this must be initiated by pairing computer-time-sequences with physical time parameters of the real-world model,” observed Bigelow, who had puzzled over how to map physics to computation ever since being given Wiener’s problem of predicting the path of an evasive airplane in 1941, and von Neumann’s problem of predicting the explosion of a bomb in 1946. “It should also be possible to trace backward or forward from results to causes through any path-representation of the process,” he noted, adding that “it would seem that the time-into-time convention ordinarily used is due to the … humans interpreting the results.”37

  “A second result of the habitual serial-time sequence mode and of the large number of candidate cells waiting to participate in the computation at the next opportunity, if it becomes their turn, is the emergence of a particularly difficult identification problem … because of the need to address an arbitrary next candidate, and to know where it is in machine-space,” Bigelow continued, explaining how the choice of serial dependence in time has led to computers “built of elements that are, to a large extent, strictly independent across space.” This, in turn, requires that communication between individual elements be conducted “by means of explicit systems of tags characterizing the basically irrelevant geometric properties of the apparatus, known as ‘addresses.’ Accomplishment of the desired time-sequential process on a given computing apparatus turns out to be largely a matter of specifying sequences of addresses of items which are to interact.”38

  The 32-by-32 matrix instituted in 1951 addressed 1,024 different memory locations, each containing a string of 40 bits. The address matrix grew explosively over the next sixty years. Today’s processors keep track of billions of local addresses from one nanosecond to the next—while the nonlocal address space is expanding faster than the protocol for assigning remote addresses has been able to keep up. A single incorrect address reference can bring everything to a halt.
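The arithmetic of the original address space is small enough to check directly. A sketch using the figures stated above (32-by-32 matrix, 40-bit words):

```python
# The IAS memory was a 32 x 32 array: 1,024 words, each 40 bits wide.
rows, cols = 32, 32
words = rows * cols                      # 1,024 addressable locations
address_bits = words.bit_length() - 1    # 10 bits suffice to select one word
word_bits = 40
total_bits = words * word_bits           # 40,960 bits of storage in all

print(words, address_bits, total_bits)   # 1024 10 40960
```

A complete address fit in 10 bits in 1951; today's 64-bit pointers span an address space sixteen orders of magnitude larger, which is the explosion the paragraph above describes.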

  Forced to focus undivided attention on getting the address references and instruction sequences exactly right, a computer, despite billions of available components, does only one thing at a time. “The modern high speed computer, impressive as its performance is from the point of view of absolute accomplishment, is from the point of view of getting the available logical equipment adequately engaged in the computation, very inefficient indeed,” Bigelow observed. The individual components, despite being capable of operating continuously at high speed, “are interconnected in such a way that on the average almost all of them are waiting for one (or a very few of their number) to act. The average duty cycle of each cell is scandalously low.”39

  To compensate for these inefficiencies, processors execute billions of instructions per second. How can programmers supply enough instructions—and addresses—to keep up? Bigelow viewed processors as organisms that digest code and produce results, consuming instructions so fast that iterative, recursive processes are the only way that humans are able to generate instructions fast enough. “Electronic computers follow instructions very rapidly, so that they ‘eat up’ instructions very rapidly, and therefore some way must be found of forming batches of instructions very efficiently, and of ‘tagging’ them efficiently, so that the computer is kept effectively busier than the programmer,” he explained. “This may seem like a highly whimsical way of characterizing a logically deep question of how to express computations to machines. However, it is believed to be not far from an important central truth, that highly recursive, conditional and repetitive routines are used because they are notationally efficient (but not necessarily unique) as descriptions of underlying processes.”40
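Bigelow's ratio of instructions executed to instructions written is visible in any loop: a repetitive routine is "notationally efficient" precisely because one written line stands for millions of consumed instructions. A trivial sketch:

```python
# One written line of loop body is "eaten" a million times by the machine:
# the repetitive routine keeps the computer busier than the programmer.
total = 0
for i in range(1_000_000):
    total += i                 # written once, executed a million times

print(total)                   # 499999500000
```

Without the loop, feeding the processor the same work would require a million literal instructions, more than any programmer could write by hand.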

  Bigelow questioned the persistence of the von Neumann architecture and challenged the central dogma of digital computing: that without programmers, computers cannot compute. He (and von Neumann) had speculated from the very beginning about “the possibility of causing various elementary pieces of information situated in the cells of a large array (say, of memory) to enter into a computation process without explicitly generating a coordinate address in ‘machine-space’ for selecting them out of the array.”41

  Biology has been doing this all along. Life relies on digitally coded instructions, translating between sequence and structure (from nucleotides to proteins), with ribosomes reading, duplicating, and interpreting the sequences on the tape. But any resemblance ends with the different method of addressing by which the instructions are carried out. In a digital computer, the instructions are in the form of COMMAND (ADDRESS) where the address is an exact (either absolute or relative) memory location, a process that translates informally into “DO THIS with what you find HERE and go THERE with the result.” Everything depends not only on precise instructions, but also on HERE, THERE, and WHEN being exactly defined.
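The COMMAND (ADDRESS) form can be sketched as a toy interpreter. The opcodes and layout below are illustrative inventions, not the actual IAS instruction set:

```python
# Toy von Neumann-style machine: every instruction names an exact address.
memory = {0: 7, 1: 4, 2: 0}          # data lives at fixed, numbered locations

def run(program, memory):
    acc, pc = 0, 0                   # accumulator and program counter
    while pc < len(program):
        op, addr = program[pc]       # COMMAND (ADDRESS)
        if op == "LOAD":             # DO THIS with what you find HERE
            acc = memory[addr]
        elif op == "ADD":
            acc += memory[addr]
        elif op == "STORE":          # go THERE with the result
            memory[addr] = acc
        pc += 1                      # WHEN: strictly one step at a time
    return memory

run([("LOAD", 0), ("ADD", 1), ("STORE", 2)], memory)
print(memory[2])                     # 11 -- the sum lands exactly THERE
```

One wrong number in any `addr` field corrupts or halts the computation, which is the fragility the preceding paragraph insists on.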

  In biology, the instructions say, “DO THIS with the next copy of THAT which comes along.” THAT is identified not by a numerical address defining a physical location, but by a molecular template that identifies a larger, complex molecule by some smaller, identifiable part. This is the reason that organisms are composed of microscopic (or near-microscopic) cells, since only by keeping all the components in close physical proximity will a stochastic, template-based addressing scheme work fast enough. There is no central address authority and no central clock. Many things can happen at once. This ability to take general, organized advantage of local, haphazard processes is the ability that (so far) has distinguished information processing in living organisms from information processing by digital computers.
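The contrast can be caricatured in a few lines: the instruction carries a template rather than a numeric address, and whichever matching item happens along next is consumed. This is a loose analogy, not a model of real molecular recognition, and the molecule names are placeholders:

```python
import random

# A pool of "molecules", each identified by a recognizable part (its tag),
# in no particular order -- there is no address register and no clock.
pool = [("ATP", 1), ("GLU", 2), ("ATP", 3), ("LYS", 4)]
random.shuffle(pool)

def next_copy_of(template, pool):
    """DO THIS with the next copy of THAT which comes along."""
    for i, (tag, payload) in enumerate(pool):
        if tag == template:          # matched by template, not by position
            return pool.pop(i)
    return None                      # no copy has come along yet

mol = next_copy_of("ATP", pool)
print(mol[0])                        # ATP -- but WHICH copy is stochastic
```

Which of the two ATP entries is returned depends on the shuffle; the computation tolerates that indeterminacy, where the addressed machine above tolerates none.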

  Our understanding of life has deepened with our increasing knowledge of the workings of complex molecular machines, while our understanding of technology has diminished as machines approach the complexity of living things. We are back to where Julian Bigelow and Norbert Wiener left off, at the close of their precomputer “Behavior, Purpose and Teleology,” in 1943. “A further comparison of living organisms and machines … may depend on whether or not there are one or more qualitatively distinct, unique characteristics present in one group and absent in the other,” they concluded. “Such qualitative differences have not appeared so far.”42

  As the digital universe expanded, it collided with two existing stores of information: the information stored in genetic codes and the information stored in brains. The information in our genes turned out to be more digital, more sequential, and more logical than expected, and the information in our brains turned out to be less digital, less sequential, and less logical than expected.

  Von Neumann died before he had a chance to turn his attention to the subject of genetic code, but near the end of his life he turned his attention to the question of information processing in the brain. His final, unfinished manuscript, for the upcoming Silliman Memorial Lectures at Yale University, gave “merely the barest sketches of what he planned to think about,” according to Ulam, and was edited by Klári and published posthumously as The Computer and the Brain.43 Von Neumann sought to explain the differences between the two systems, the first difference being that we understand almost everything that is going on in a digital computer and almost nothing about what is going on in a brain.

  “The message-system used in the nervous system … is of an essentially statistical character,” he explained.

  What matters are not the precise positions of definite markers, digits, but the statistical characteristics of their occurrence … a radically different system of notation from the ones we are familiar with in ordinary arithmetics and mathematics.… Clearly, other traits of the (statistical) message could also be used: indeed, the frequency referred to is a property of a single train of pulses whereas every one of the relevant nerves consists of a large number of fibers, each of which transmits numerous trains of pulses. It is, therefore, perfectly plausible that certain (statistical) relationships between such trains of pulses should also transmit information.… Whatever language the central nervous system is using, it is characterized by less logical and arithmetical depth than what we are normally used to [and] must structurally be essentially different from those languages to which our common experience refers.44

  The brain is a statistical, probabilistic system, with logic and mathematics running as higher-level processes. The computer is a logical, mathematical system, upon which higher-level statistical, probabilistic systems, such as human language and intelligence, could possibly be built. “What makes you so sure,” asked Stan Ulam, “that mathematical logic corresponds to the way we think?”45

  In the age of vacuum tubes, it was inconceivable that digital computers would operate for hundreds of billions of cycles without error, and the future of computing appeared to belong to logical architectures and systems of coding that would be tolerant of hardware failures over time. In 1952, codes were small enough to be completely debugged, but hardware could not be counted on to perform consistently from one kilocycle to the next. This situation is now reversed. How does nature, with both sloppy hardware and sloppy coding, achieve such reliable results? “There is reason to suspect that our predilection for linear codes, which have a simple, almost temporal sequence, is chiefly a literary habit, corresponding to our not particularly high level of combinatorial cleverness, and that a very efficient language would probably depart from linearity,” von Neumann suggested in 1949.46 The most successful new developments in computing—search engines and social networks—are nonlinear hybrids between digitally coded and pulse-frequency-coded systems, and are leaving linear, all-digital systems behind.

 
