Digital computers are able to answer most—but not all—questions stated in finite, unambiguous terms. They may, however, take a very long time to produce an answer (in which case you build faster computers) or it may take a very long time to ask the question (in which case you hire more programmers). Computers have been getting better and better at providing answers—but only to questions that programmers are able to ask. What about questions that computers can give useful answers to but that are difficult to define?
In the real world, most of the time, finding an answer is easier than defining the question. It is easier to draw something that looks like a cat than to define what, exactly, makes something look like a cat. A child scribbles indiscriminately, and eventually something appears that resembles a cat. An answer finds a question, not the other way around. The world starts making sense, and the meaningless scribbles (and unused neural connections) are left behind. “I agree with you about ‘thinking in analogies,’ but I do not think of the brain as ‘searching for analogies’ so much as having analogies forced upon it by its own limitations,” Turing wrote to Jack Good in 1948.55
Random search can be more efficient than nonrandom search—something that Good and Turing had discovered at Bletchley Park. A random network, whether of neurons, computers, words, or ideas, contains solutions, waiting to be discovered, to problems that need not be explicitly defined. It is easier to find explicit answers than to ask explicit questions. This turns the job of the programmer upside down. “An argument in favor of building a machine with initial randomness is that, if it is large enough, it will contain every network that will ever be required,” advised Good, speaking to IBM in 1958.56
The paradox of artificial intelligence is that any system simple enough to be understandable is not complicated enough to behave intelligently, and any system complicated enough to behave intelligently is not simple enough to understand. The path to artificial intelligence, suggested Turing, is to construct a machine with the curiosity of a child, and let intelligence evolve.
How to begin to realize what Turing imagined—a machine that would be able to answer all answerable questions that anyone could ask? The computable functions are easy. Beginning with addition (or subtraction, its binary complement), we have, subroutine by subroutine, been building the library from there. What about questions that have answers but no explicit, algorithmic map, or questions, such as determining molecular structure from X-ray diffraction patterns, that have only an asymmetric map?
One approach is to start with the questions, and search for the answers. Another approach is to start with the answers and search for the questions. Because it is easier (and more economical) to collect answers (which are already encoded) than to ask questions (which have to be encoded), the first step would be to crawl through the matrix and collect the meaningful strings. Unfortunately, in a matrix of 10²² bits, the number of meaningful strings is a number too large to search, let alone collect. It is too large a number even to write down. Fortunately, there is a key. Human beings and machines have already done much of the work, filing away meaningfully encoded strings since the beginning of the digital universe and, since the dawn of the Internet, giving them unique numerical addresses.
To collect the answers, you do not have to search through the entire matrix; you only have to crawl through the vastly smaller number of valid addresses and collect the resulting strings. The result is an indexed list (within your machine’s “state of mind,” to use Turing’s language) of a significant fraction of the meaningful answers in the digital universe. With two huge deficiencies: you don’t have any questions—you have only answers—and you have no clue where the meaning is.
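A toy sketch, in Python, of the collection step just described: crawl only the valid addresses and file away whatever strings are found there. The addresses, strings, and names here are invented for illustration, not taken from any actual system, and the whole "digital universe" is a small in-memory stand-in.

```python
# Hypothetical illustration: a tiny "digital universe" of addressed strings.
# The addresses and contents are made up for the sketch.
digital_universe = {
    "addr/001": "a string that happens to describe a cat",
    "addr/002": "an X-ray diffraction pattern, encoded",
    "addr/003": "a weather observation from 1950",
}

def crawl(addresses):
    """Visit each known address and collect the string found there.

    The result is an indexed list of answers, with no questions attached
    and no indication of where the meaning is.
    """
    index = {}
    for address in addresses:
        index[address] = digital_universe[address]
    return index

indexed_answers = crawl(digital_universe.keys())
```

The point of the sketch is only that the crawl visits the vastly smaller set of valid addresses rather than the full matrix of bits; everything it collects is an answer waiting for a question.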
Where do you go to get the questions, and how do you find where the meaning is? If, as Turing imagined, you have the mind of a child, you ask people, you guess, and you learn from your mistakes. You invite people to submit questions—keeping track of all submissions—and, starting with simple template-matching, suggest possible answers from your indexed list. People click more frequently on the results that provide more meaningful answers, and with simple bookkeeping, meaning, and the map between questions and answers, begins to accumulate over time. Are we searching the search engines, or are the search engines searching us?
Search engines are copy engines: replicating everything they find. When a search result is retrieved, the data are locally replicated: on the host computer and at various servers and caches along the way. Data that are widely replicated, or associated frequently by search requests, establish physical proximity that is manifested as proximity in time. More meaningful results appear higher on the list not only because of some mysterious, top-down, weighting algorithm, but because when microseconds count, they are closer, from the bottom up, in time. Meaning just seems to “come to mind” first.
An Internet search engine is a finite-state, deterministic machine, except at those junctures where people, individually and collectively, make a nondeterministic choice as to which results are selected as meaningful and given a click. These clicks are then immediately incorporated into the state of the deterministic machine, which grows ever so incrementally more knowledgeable with every click. This is what Turing defined as an oracle machine.
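The loop described in the last three paragraphs can be caricatured in a few lines: a deterministic machine suggests candidate answers by crude template matching, and the single nondeterministic step, the human click, is folded back into the machine's state so that meaning accumulates. This is a hedged, hypothetical sketch, not the mechanism of any real search engine; all names and the scoring rule are invented for illustration.

```python
from collections import defaultdict

# State of the machine: accumulated clicks per (question, address) pair.
click_counts = defaultdict(int)

def suggest(question, indexed_answers):
    """Deterministic step: crude template matching, ranked by click history."""
    candidates = [addr for addr, text in indexed_answers.items()
                  if any(word in text for word in question.split())]
    return sorted(candidates,
                  key=lambda addr: click_counts[(question, addr)],
                  reverse=True)

def record_click(question, address):
    """Nondeterministic step: a person judges a result meaningful."""
    click_counts[(question, address)] += 1

# Usage, with a toy index standing in for the collected answers:
toy_index = {"addr/001": "a string that happens to describe a cat",
             "addr/002": "a weather observation from 1950"}
results = suggest("what does a cat look like", toy_index)
if results:
    record_click("what does a cat look like", results[0])
```

With each recorded click the machine's ranking shifts, ever so incrementally, toward the answers people found meaningful: the bookkeeping by which a map between questions and answers grows over time.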
Instead of learning from one mind at a time, the search engine learns from the collective human mind, all at once. Every time an individual searches for something, and finds an answer, this leaves a faint, lingering trace as to where (and what) some fragment of meaning is. The fragments accumulate and, at a certain point, as Turing put it in 1948, “the machine would have ‘grown up.’ ”57
FOURTEEN
Engineer’s Dreams
If, by a miracle, a Babbage machine did run backwards, it would not be a computer, but a refrigerator.
—I. J. Good, 1962
“I REMEMBER one day walking out the back door of that little brick building, and here’s Julian lying under this little Austin, welding a hole in a gas tank,” remembers Willis Ware. “And he said ‘Nope! It won’t explode!’ And he had some perfectly reasonable explanation for why it wouldn’t explode, based on the principles of physics.”1
Julian Bigelow was a hands-on engineer, from the first batch of war-surplus 6J6 vacuum tubes and transplanted ENIAC technicians to the lead-acid battery house built when there turned out to be too many transient voltage fluctuations at the end of Olden Lane for the new computer to be connected directly to the grid. “The actual machine that will be completed soon, and which has quite exceptional characteristics, is, in its physical embodiment, much more Bigelow’s personal achievement than anyone else’s,” von Neumann reported in 1950, urging the Institute’s executive committee to break with precedent by granting an academic appointment to an engineer.2
Von Neumann pushed the exception through. “Bigelow’s career has deviated from the conventional academic norm considerably,” he argued. “This is, apart from economic reasons and the war, due to the fact that his field lies somewhere between a number of recognized scientific fields, but does not coincide with any of them.”3 Computer science, as a recognized discipline, did not yet exist. Julian Bigelow and Herman Goldstine were awarded permanent memberships in the School of Mathematics on December 1, 1950, at salaries of $8,500 per year. Their objective was not so much to build better or faster computers, but, as Bigelow put it, to pursue “the relationship between logic, computability, perhaps machine languages, and the things that you can find out scientifically, now that this tool is available.”4
As ill equipped as it was for engineering, the Institute was well equipped to accommodate visitors bringing problems to run on the new machine. The housing project was adjacent to the computer building, and there was no established research group defending its turf. Digital computing "would cleanse and solve areas of obscurity and debate that had piled up for decades," Bigelow believed. "Those who really understood what they were trying to do would be able to express their ideas as coded instructions … and find answers and demonstrate explicitly by numerical experiments. The process would advance and solidify knowledge and tend to keep men honest."5
“The reason von Neumann made Goldstine and me permanent members,” Bigelow explains, “was that he wanted to be sure that two or three people whose talent he respected would be around no matter what happened, for this effort.” Von Neumann was less interested in building computers, and more interested in what computers could do. “He wanted mathematical biology, he wanted mathematical astronomy, and he wanted earth sciences.” Thanks to the computer, the Institute could do applied science without having to build laboratories. The prevailing culture might even change. “We would have the greatest school of applied science in the world,” Bigelow hoped. “We could show the theoreticians that we could find out the answer to their number theoretic problems, their problems in physics, their problems in solid state, and their problems in mathematical economics. We would do planning, we would do things that would be known for centuries, you see.”6
Bigelow’s optimism was short-lived. When President Eisenhower appointed von Neumann to the Atomic Energy Commission in October 1954, the computer project went into decline. Not only did the Institute lose von Neumann, but it also lost much of the funding that had been provided, with few strings attached, by the AEC. With von Neumann appointed to the commission, the AEC could no longer give the Institute anything it wished. “We had nobody we could go to without all this fear of conflict of interest,” explains Goldstine. “It worked very much to our detriment to have all this influence, because we couldn’t exercise it.”7
IBM was less constrained. “IBM people kept coming almost weekly to look at the machine’s development,” remembers Thelma Estrin. IBM retained von Neumann as a consultant, and began developing their first fully electronic computer, the IBM 701, “a carbon copy of our machine,” according to Bigelow, “even down to and including the Williams memory tubes.” By 1951, IBM had become “sufficiently interested,” as Oppenheimer put it, “to want to give the Institute $20,000 a year for a period of five years with no strings attached.”8
The computer project was caught between those who welcomed this ability to attract outside funding and those who thought the Institute, now that the war was over, should abstain from government or industry support. Marston Morse believed that the Institute was not the place to build machines. Oswald Veblen welcomed digital computing but objected to hydrogen bombs. Oppenheimer tried to appear neutral, saying only that computing at the Institute should either “be endowed and expanded, and take its proper place in the academic structure,” or be shut down. “At that time, having Oppenheimer for something was exactly the way to get it stopped by all the rest of the faculty,” Bigelow observed.9
Freeman Dyson, thirty-one years old and just beginning his second year as a professor, was commissioned “to collect a few outside opinions and views on a question of long-range policy which we feel we ought to make up our minds about. Namely, what is a proper role for the Institute to play in the fields of applied mathematics and electronic computing?”10 The immediate question was whether meteorologist Jule Charney should be offered a permanent appointment. The long-term question was what to do with the Electronic Computer Project, which, in von Neumann’s absence, was being kept on life support.
Charney’s group was a victim of its own success. The numerical forecasting methods pioneered at the Institute were being adopted by weather services all over the world. Multiple copies of the IAS computer were being built, with a constant stream of visitors coming to Princeton to learn the new techniques. Internal sentiment, even among the mathematicians, sided against the computer, and the outside reviewers generally agreed the machine belonged somewhere else. “It is time that Von Neumann revolutionized some other subject; He has spent rather too long in the field of automatic computation,” recommended James Lighthill, F.R.S.11 Founding trustees Herbert Maass and Samuel Leidesdorf, who believed that a better understanding of the weather was the kind of knowledge that the Bambergers had hoped to advance, sought to preserve the meteorology project, but were overruled.
“The use of computers was a very funny subject in the early days,” recalls British mathematician and computer scientist David Wheeler, concerning mathematics in Princeton at that time. “It was slightly beneath the dignity of mathematicians. Engineers were used to doing calculations, whereas mathematicians weren’t.”12 After the dust had settled, Freeman Dyson spoke up. “When von Neumann tragically died, the snobs took their revenge and got rid of the computing project root and branch,” he said at the dedication of the university’s new Fine and Jadwin halls, equipped with multiple computers, in 1970. “The demise of our computer group was a disaster not only for Princeton but for science as a whole. It meant that there did not exist at that critical period in the 1950s an academic center where computer people of all kinds could get together at the highest intellectual level.… We had the opportunity to do it, and we threw the opportunity away.” It would be twenty-two years before the next computer—a Hewlett Packard model 9100-B programmable calculator, sequestered for the use of the astronomers in the basement of Building E—arrived at the IAS.13
Bigelow’s hopes of keeping the Institute at the leading edge of the computational revolution came to a halt. Von Neumann, and the excitement he had generated in 1946, were gone and not coming back. Klári had long wanted to leave Princeton for the West Coast; now the Institute’s ambivalence toward the computer project, and lingering divisions over the Oppenheimer security hearings, began to wear down Johnny as well. Veblen would not forgive von Neumann for joining the Atomic Energy Commission, a situation that, according to Klári, “grew into a pathetic sorrow in Johnny’s last years.”14 Even some of von Neumann’s closest friends began to question how someone who had supported Oppenheimer against his AEC accusers could now side with ringleader Strauss. Oppenheimer himself was more forgiving. “I shall always remember Robert,” says Klári, “summing up his attitude in a very simple statement: ‘There have to be good people on both sides.’ ”15
“The lines were drawn and after the first flurry of excitement it became clear that we did not belong in Princeton any more,” Klári explains. “The highly emotional atmosphere in Princeton annoyed Johnny no end. He wanted to work on improved designs for computers, or on the urgency of expanding missile programs—in other words, on anything that was a real intellectual challenge instead of debating interminably who had done what and why and how.”16 Von Neumann believed that conflicted loyalties during the development of the atomic bomb should be left behind. “We were all little children with respect to the situation which had developed, namely, that we suddenly were dealing with something with which one could blow up the world,” he had testified, in defense of Oppenheimer, in 1954. “We had to make our rationalization and our code of conduct as we went along.”17
Two weeks later, while in Los Angeles on air force strategic missile business and staying at the Miramar Bungalows in Santa Monica, von Neumann met with Paul A. Dodd, dean of letters and sciences at UCLA, who offered him a special interdisciplinary position, with no teaching responsibilities, as professor-at-large. “They would give me ‘everything’ I want,” he reported to Klári on May 16, adding that “they do not mind if I do consultations for Industry as well.” Dodd also assured von Neumann that he would be able to spend as much time at the Scripps Institution of Oceanography in La Jolla as he wished. Von Neumann agreed to refuse all other offers until further discussions with UCLA, and Dodd agreed to keep the matter confidential, since von Neumann had not informed the Institute that he was leaving, and, as he put it to Klári, “I do not want to look like a deserter or a traitor to them.”18
“Since we first decided, 1-½ years ago, that it would be better to leave Princeton, I see for the first time concrete evidence for doing it,” he wrote to Klári the next day.19 As the negotiations continued, he secured appointments at UCLA for both Jule Charney and Norman Phillips, with assurances that a state-of-the-art computing laboratory would be established, building on the resources that already existed, in Los Angeles, at the Institute for Numerical Analysis and at RAND. Von Neumann would finally be able to assemble the cross-disciplinary information systems laboratory that he and Norbert Wiener had proposed in 1946, before the push to develop the hydrogen bomb had drawn a curtain between them and their work. If the California laboratory had been established, the second half of the twentieth century might have taken a quite different course. “Someone should write a novel for the future which is in the past,” says von Neumann’s Los Alamos colleague Harris Mayer. “And that is: what would science and mathematics be if Fermi and Johnny von Neumann didn’t die young?”20
Spring of 1955 found Johnny and Klári settled into a small but comfortable house in Georgetown in Washington, D.C., Johnny having made the journey from postdoctoral immigrant to a presidential appointment in just twenty-five years. The interlude in Washington promised to lead to even more productive years ahead. “I want to become independent of the regulated academic life,” von Neumann had written to Klári from Los Alamos in 1943—a goal that was finally within reach. It was not to be. “On the 9th of July of that exceptionally hot summer, even for Washington,” Klári remembers, “Johnny collapsed while talking on the phone to Lewis Strauss.”21
On August 2 he was diagnosed with advanced, metastasizing cancer, discovered in his collarbone, and underwent emergency surgery. By November his spine was affected, and on December 12 he addressed the National Planning Association in Washington, D.C., the last speech he gave standing up. “The best we can do is to divide all processes into those things which can be better done by machines and those which can be better done by humans,” he advised, “and then invent methods by which to pursue the two.”22 He was confined to a wheelchair in January 1956. “The last scientific discussion we had was on New Year’s Eve, when I told him of a new theory that I had on the dynamics of the mature hurricane,” Jule Charney remembers. “He was in bed all that New Year’s Eve Day. The next morning he walked downstairs to see Elinor and me off to Princeton. On the way back upstairs he fell, and never walked again.”23