Turing perceived a parallel between intelligence and “the genetical or evolutionary search by which a combination of genes is looked for, the criterion being survival value. The remarkable success of this search confirms to some extent the idea that intellectual activity consists mainly of various kinds of search.”47 He saw evolutionary computation as the best approach to truly intelligent machines. “Instead of trying to produce a programme to simulate the adult mind, why not rather try to produce one which simulates the child’s?” he asked.48 “Bit by bit one would be able to allow the machine to make more and more ‘choices’ or ‘decisions.’ One would eventually find it possible to program it so as to make its behaviour the result of a comparatively small number of general principles. When these became sufficiently general, interference would no longer be necessary, and the machine would have ‘grown up.’”49
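In modern terms, the "genetical or evolutionary search" Turing describes survives as the genetic algorithm. Below is a minimal sketch of such a search in Python; the target string, mutation rate, and population size are illustrative inventions, not anything Turing specified.

```python
import random

# Illustrative "survival value": how many bits of a candidate match a target.
TARGET = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0, 1]

def fitness(genome):
    return sum(g == t for g, t in zip(genome, TARGET))

def mutate(genome, rate=0.05):
    # Small random mistakes are the engine of the search.
    return [1 - g if random.random() < rate else g for g in genome]

def evolve(pop_size=50, generations=200):
    population = [[random.randint(0, 1) for _ in TARGET] for _ in range(pop_size)]
    for gen in range(generations):
        population.sort(key=fitness, reverse=True)
        if fitness(population[0]) == len(TARGET):
            return gen, population[0]          # a perfectly "surviving" genome
        # The fitter half survives and reproduces, with mutation.
        parents = population[: pop_size // 2]
        population = parents + [mutate(random.choice(parents)) for _ in parents]
    return generations, population[0]

print(evolve())
```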
An incremental, trial-and-error path toward artificial intelligence lay ahead. It is a misconception, based on the stereotype of a Turing machine as executing a prearranged program one step at a time, to assume that Turing believed that any single, explicitly programmed serial process would ever capture human intelligence in mechanical form. Turing knew how many interconnected neurons it took to make a brain, and he knew how many brains it took to form a society that could kindle the spark of language and intelligence into flame. He himself had drawn the curtains on Leibniz’s illusion of an ideal, completely formalized logical system in 1936. And in 1939 even his own attempt to transcend Gödelian incompleteness by his “Systems of Logic Based on Ordinals” had failed. In this sequel to “On Computable Numbers,” prepared in Princeton as his doctoral thesis under Alonzo Church, Turing explored “how far it is possible to eliminate intuition, and leave only ingenuity,” noting that since ingenuity can always be replaced by patience, “we do not mind how much ingenuity is required, and therefore assume it to be available in unlimited supply.”50
Intelligence would never be clean and perfectly organized, but like the brain would remain slippery and disordered in its details. The secret of large, reliable, and flexible machines, as Turing noted, is to construct them, or let them construct themselves, from large numbers of individual parts—independently free to make mistakes, search randomly, and generally act unpredictably so that at a much higher level of the hierarchy the machine appears to be making an intelligent choice. It is an appealing model—advocated by Oliver Selfridge in his Pandemonium of 1959, I. J. Good in his Speculations Concerning the First Ultraintelligent Machine (1965), and Marvin Minsky in his Society of Mind (1986). A similar principle of distributed intelligence (enforced by need-to-know security rules) led to successful code breaking at Bletchley Park.
The Turing machine, as a universal representation of the relations between patterns in space and sequences in time, has given these intuitive models of intelligence a common language that translates freely between concrete and theoretical domains. Turing’s machine has grown progressively more universal for sixty years. From McCulloch and Pitts’s demonstration of the equivalence between Turing machines and neural nets in 1943 to John von Neumann’s statement that “as far as the machine is concerned, let the whole outside world consist of a long paper tape,”51 the Turing machine has established the measure by which all models of computation have been defined. Only in theories of quantum computation—in which quantum superposition allows multiple states to exist at the same time—has the discrete-state Turing machine been seriously challenged, and even there the challenge is to its speed, not to the scope of what it can, in principle, compute.
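A discrete-state machine of this kind can be sketched in a few lines. The toy simulator below (an illustration, not any machine Turing actually described) shows a pattern in space, the tape, transformed by a sequence of state transitions in time.

```python
# A minimal discrete-state Turing machine simulator.
def run(tape, rules, state="scan", head=0, max_steps=1000):
    tape = dict(enumerate(tape))            # sparse tape; blank cells read " "
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = tape.get(head, " ")
        write, move, state = rules[(state, symbol)]
        tape[head] = write
        head += move
    return "".join(tape[i] for i in sorted(tape))

# Transition table: (state, read) -> (write, head move, next state).
# Illustrative example (not one of Turing's): flip every bit, then halt.
FLIP = {
    ("scan", "0"): ("1", +1, "scan"),
    ("scan", "1"): ("0", +1, "scan"),
    ("scan", " "): (" ", 0, "halt"),
}

print(run("10110", FLIP))   # -> "01001 " (with the trailing blank it visited)
```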
All intelligence is collective. The truth that escaped Leibniz, but captured Turing, is that this intelligence—whether that of a billion neurons, a billion microprocessors, or a billion molecules forming a single cell—arises not from the unfolding of a predetermined master plan, but by the accumulation of random bits of wisdom through the power of small mistakes. The logicians of Bletchley Park breathed the spark of intelligence into the Colossus not by training the machine to recognize the one key that held the answer, but by training it to eliminate the billions of billions of keys that probably wouldn’t fit.
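The principle can be caricatured in a few lines of code. The sketch below uses a toy XOR cipher and a crude printability test, nothing like the statistical machinery actually used at Bletchley Park; it searches by elimination, discarding the keys that probably wouldn't fit.

```python
from itertools import product

def decrypt(ciphertext, key):
    # Toy cipher for illustration only: XOR with a short repeating key.
    return bytes(c ^ key[i % len(key)] for i, c in enumerate(ciphertext))

def plausible(text):
    # Crude elimination test: plausible plaintext is all printable ASCII.
    return all(32 <= b < 127 for b in text)

def survivors(ciphertext, key_len=2):
    # Try every possible key; keep only those the test fails to eliminate.
    for key in product(range(256), repeat=key_len):
        candidate = decrypt(ciphertext, key)
        if plausible(candidate):
            yield key, candidate

ct = decrypt(b"ATTACK AT DAWN", (17, 42))   # XOR is its own inverse
print(sum(1 for _ in survivors(ct)))        # only a few keys survive of 65,536
```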
Turing broke the mystery of intelligence into bits, but in so doing revealed a greater mystery: how to reconcile the mechanism of intelligence with the unpredictability of mind. The upheaval in logic of the 1930s was reminiscent of the revolution in physics that revealed the certainties of Newtonian mechanics to be uncertainties in disguise. The great mysteries were shifted from the very large to the very small. By means of Turing’s machine, all computable processes could be decomposed into elemental steps—just as all mechanical devices can be decomposed into smaller and smaller parts. Leibniz’s thought experiment, in which he imagined entering into a thinking machine as into a mill, was embodied by Turing in rigorous form. The mystery of intelligence was replaced by a succession of smaller mysteries—until there lingered only the mystery of mind.
As Leibniz argued that “there must be in the simple substance a plurality of conditions and relations, even though it has no parts,”52 so Turing’s analysis suggests that the powers of mind derive not only from the realm of very large numbers (by combinatorial processes alone) but from the realm of the very small (by the element of chance adhering to any observable event). Hobbes and Leibniz could both be right.
5
THE PROVING GROUND
I am thinking about something much more important than bombs. I am thinking about computers.
—JOHN VON NEUMANN1
As the human intellect grew sharper in exercising the split-second timing associated with throwing stones at advancing enemies or fleeing prey, so the development of computers was nurtured by problems in ballistics—the science of throwing things at distant targets through the air or, more recently, through space. The close relationship between mathematics and ballistics goes back to Archimedes, Leonardo da Vinci, Galileo, and Isaac Newton, whose legendary apple remains the most famous example of an insight into ballistics advancing science as a whole. It was Robert Boyle, in The Usefulnesse of Mechanical Disciplines to Natural Philosophy, who introduced the term balisticks into the English language in 1671. Boyle classified ballistics as one of the “fatal arts.” The precise use of gunpowder was regarded as a humanitarian advance over indiscriminate mayhem, greeted with the zeal for “smart” weapons that continues to this day.
Alan Turing and his colleagues at Bletchley Park were chess players at heart, pitting their combined intelligence against Hitler for the duration of the war and then returning as quickly as possible to civilian life. John von Neumann (1903–1957) was a warrior who joined the game for life. The von Neumann era saw digital computers advance from breaking ciphers to guiding missiles and building bombs. The advent of the cold war was closely associated with the origins of high-speed electronic computers, whereby the power of new weapons could be proved by calculation instead of by lighting a fuse and getting out of the way.
Von Neumann had a gift for calculating the incalculable. After bringing thermonuclear Armageddon within reach, he applied his imagination to the possibility of certain especially cold-blooded forms of life. With his theory of self-reproducing automata he endowed Turing’s Universal Machine with the power of constructing an unlimited number of copies of itself. Despite the threatening nature of these accomplishments, von Neumann was not an evil genius at heart. He was a mathematician who could not resist pushing the concepts of destruction and construction to their logical extremes, seeking to assign probabilities rather than moral judgment to the results.
Von Neumann played an enthusiastic role in the development of thermonuclear weapons, ballistic missiles, the application of game theory to nuclear deterrence, and other known and unknown black arts. He was one of the few Manhattan Project scientists who was not sequestered at Los Alamos, appearing periodically, like a comet, in the course of his transcontinental rounds. He advocated a hard line against the Soviet Union and publicly favored a preventive nuclear attack; his views on nuclear war were encapsulated in his 1950 motto “Not whether but when.” Nonetheless, he helped construct a policy of peace through the power of assured destruction that has avoided nuclear war for fifty years. Von Neumann’s statements must be viewed not only in historical perspective, but also in the context of his pioneering work in game theory, which demonstrated the possibility of stabilizing a dangerously unstable situation by a convincing bluff—if and only if there appears to be the determination to back it up.
“Von Neumann seemed to admire generals and admirals and got along well with them,” recalled Stan Ulam (1909–1984), a friend and colleague who shared in the full spectrum of von Neumann’s work.2 When von Neumann was invited to join the club and become a professional cold warrior, he did. He compressed an entire career as a military strategist into the final decade of his life. For the last nine months of von Neumann’s battle with cancer, President Eisenhower arranged for a private suite at Walter Reed Hospital in Washington, D.C., assigning air force colonel Vincent Ford and eight airmen with top-secret clearance to provide twenty-four-hour protection and support.
When his end neared, von Neumann spoke not in the language of military secrets but in the Hungarian of his youth. Janos von Neumann was born in Budapest in 1903, the son of Max Neumann, a successful banker and economist elevated to the nobility by Emperor Franz Joseph in 1913. Life in the von Neumann household exposed young Johnny not only to economic theory but to administrative and political skills. Max brought his children into as much contact as possible with his world. “Managing a bank became a matter for family discussions, no less than our school subjects,” recalled von Neumann’s younger brother Nicholas. “All of us, but particularly John, observed and eventually used father’s business techniques.”3 Nicholas remembers that his father, after making an investment in a textile works, brought home a card-controlled Jacquard automatic loom and believes that his brother’s fascination with this mechanism resurfaced later in electronic form.
Von Neumann authored his first mathematical paper (with Michael Fekete, tutor turned collaborator) at the age of seventeen, launching a streak of productivity that continued without interruption until his death at age fifty-four—and for a period thereafter, if one includes his Theory of Self-Reproducing Automata (1966), reconstructed by logician Arthur Burks from von Neumann’s unfinished manuscripts and notes. Von Neumann’s Mathematical Foundations of Quantum Mechanics (1932) and Theory of Games and Economic Behavior (1944) remain classics in their fields. Eugene Wigner, a colleague since his school days in Budapest, commented that “nobody knows all science, not even von Neumann did. But as for mathematics, he contributed to every part of it except number theory and topology. That is, I think, something unique.”4 Von Neumann’s mental acrobatics were legendary. “If you enjoy thinking, your brain develops,” said Edward Teller. “And that is what von Neumann did. He enjoyed the functioning of his brain. And that is why he outdid anyone I know.”5

Hungary produced an exceptional crop of scientific talent between World War I and World War II. Von Neumann, Teller, Wigner, and Leo Szilard left their generation of mathematical physicists wondering how one small country had spawned four such minds at once. According to Ulam, von Neumann credited “the necessity of producing the unusual or facing extinction”6—a response that von Neumann pushed to its extreme. “Perhaps the consciousness of animals is more shadowy than ours and perhaps their perceptions are always dreamlike,” wrote Eugene Wigner in 1964. “On the opposite side, whenever I talked with the sharpest intellect whom I have known—with von Neumann—I always had the impression that only he was fully awake, that I was halfway in a dream.”7
Von Neumann saw his homeland disfigured by two world wars and a succession of upheavals in between. “I am violently anti-communist,” he declared on his nomination to membership in the Atomic Energy Commission in 1955, “in particular since I had about a three-months taste of it in Hungary in 1919.”8 During the communist takeover the family retreated to the Italian Adriatic and was never personally at risk. Von Neumann spent the years 1921 to 1926 as a student shuttling between the University of Budapest, the University of Berlin, and the Eidgenössische Technische Hochschule (Federal Institute of Technology, or ETH) in Zurich, receiving both a degree in chemical engineering (assuring a livelihood) and a Ph.D. in mathematics (a field in which European positions were scarce). For the 1926–1927 academic year he received a Rockefeller Fellowship to work with David Hilbert at Göttingen, developing an axiomatization of set theory in support of Hilbert’s program to formalize all mathematics from the ground up. Later, von Neumann admitted having had doubts that might have led him to anticipate Gödel’s incompleteness results. Also with Hilbert, in 1927, he began his mathematical treatment of quantum mechanics, erecting a landmark in a field in which both geniuses and artisans were at work. He published twenty-five papers between 1926 and 1929. After accepting a visiting position at Princeton University in 1930, he was appointed to a full professorship there in 1931.
His escape from war-torn Europe left von Neumann determined to ensure that the most powerful weapons imaginable were placed in the hands of his adopted side. Along with fellow hydrogen-bomb designer Edward Teller, he perceived the Soviet threat to be only a stone’s throw away—and the two Hungarians had seen what happened to the defenseless villages of their youth. “I don’t think any weapon can be too large,”9 he counseled Oppenheimer. Oppenheimer suffered second thoughts about atomic weapons; von Neumann never flinched.
When von Neumann announced in 1946 that he considered bombs less important than computers, this did not mean that his interest in bombs had been eclipsed. He was thinking about both. The first job performed by the ENIAC, arranged under the auspices of von Neumann, was a feasibility study for the hydrogen, or super, bomb. To define the boundary conditions for the job, half a million IBM cards were shipped from Los Alamos to Philadelphia, where the calculation consumed six weeks and many more cards between November 1945 and January 1946. “This exposure to such a marvelous machine,” recalled Nicholas Metropolis, who supervised the calculation, “coupled in short order to the Alamogordo [bomb test] experience was so singular that it was difficult to attribute any reality to either.”10 The shakedown run of the Institute for Advanced Study (IAS) computer, in the summer of 1951, was also a thermonuclear calculation for Los Alamos, running continuously for sixty days, well in advance of the machine’s public dedication in 1952. “When the hydrogen bomb was developed,” von Neumann testified at the Oppenheimer hearings in 1954, “heavy use of computers was made [but] they were not yet generally available . . . it was necessary to scrounge around and find a computer here and find a computer there which was running half the time and try to use it.”11 Ralph Slutz, who worked on the early stages of the IAS computer and then went on to construct the Bureau of Standards’ SEAC, remembers “a couple people from Los Alamos” showing up as soon as the computer began operating, “with a program which they were terribly eager to run on the machine . . . starting at midnight, if we would let them have the time.”12
Von Neumann believed that all fields of science, including pure mathematics, derive their sustenance through contact with real problems in the physical world. The military often arrives at these intersections first. Whether the application is for good or evil has little to do with the beauty of the science underneath. “If science is not one iota more divine for helping society, maybe she isn’t one iota less divine for harming society,” he wrote in 1954. “The principle of laissez faire has led to strange and wonderful results.”13
In the United States, ballistics research fell under the domain of the U.S. Army’s proving ground at Aberdeen, Maryland, founded in 1918, when field artillery was still being dragged around by horses but was beginning to fire high-velocity, long-range shells. Increasingly distant and mobile targets, especially aircraft, were difficult to hit by trial-and-error adjustment of the range. The converse presented an equivalent problem: how to hit a fixed target by dropping a bomb from a moving plane. Firing tables, tabulating target distance as a function of muzzle velocity, elevation, atmospheric conditions, temperature, and a host of other entangled variables, became an essential adjunct to every gun. But their preparation required enormous numbers of complex calculations, largely performed by hand. The task resembled preparing the annual nautical almanac, except that it was necessary to prepare a separate almanac for each gun.
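Each firing-table entry amounted to numerically integrating one trajectory. A minimal sketch of such a calculation follows; the drag model and every constant in it are illustrative, not drawn from any actual gun or table.

```python
import math

def range_of_shot(elevation_deg, muzzle_velocity=600.0, drag_coeff=0.0001, dt=0.01):
    """Integrate one shell trajectory; returns horizontal range in meters.

    Simple quadratic air drag with illustrative constants. Each entry in a
    firing table required one such integration, once done step by step by hand.
    """
    g = 9.81
    theta = math.radians(elevation_deg)
    x, y = 0.0, 0.0
    vx = muzzle_velocity * math.cos(theta)
    vy = muzzle_velocity * math.sin(theta)
    while y >= 0.0:
        v = math.hypot(vx, vy)
        ax = -drag_coeff * v * vx          # drag opposes motion
        ay = -g - drag_coeff * v * vy
        vx += ax * dt
        vy += ay * dt
        x += vx * dt
        y += vy * dt
    return x

# One small slice of a firing table: range versus elevation.
for elev in range(15, 60, 5):
    print(f"{elev:2d} deg  {range_of_shot(elev):8.0f} m")
```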
Mathematician Oswald Veblen (1880–1960), the proving ground’s first director, assembled a notable constellation of mathematicians, including Norbert Wiener, at Aberdeen during World War I. The group dispersed its talent widely, contributing to every facet of computational mathematics and computer technology between World War I and World War II. Veblen became department head at Princeton University, soon making Princeton the rival of Göttingen in mathematics. In 1924, Veblen wrote a proposal for a Princeton mathematics institute that served, six years later, as a model for the creation of the Institute for Advanced Study, where both von Neumann and Veblen were appointed to professorships for life. During World War II, Veblen returned as chief scientist to the proving ground at Aberdeen, and, after von Neumann was naturalized as a U.S. citizen in 1937, Veblen recruited him to the Ballistic Research Laboratory’s scientific advisory board. Advances in weaponry had left the principles of war unchanged, including the long-standing tradition of calling in the mathematicians to help aim catapults or guns.