
The Man Who Knew Too Much: Alan Turing and the Invention of the Computer (Great Discoveries)


by David Leavitt


  Virtually all computers today, from $10 million supercomputers to the tiny chips that power cell phones and Furbies, have one thing in common: they are all “von Neumann machines,” variations on the basic computing architecture that John von Neumann, building on the work of Alan Turing, laid out in the 1940s.

  5.

  For Turing the 1940s was an era defined more by beginnings than by culminations. Ideas would come to him, he would throw himself into them, and then, before he had brought them to fruition, he would drift away, either because circumstances compelled him to or because some other idea had seized his attention. Thus by the time Don Bayley presented an operational version of the Delilah before the Cipher Policy Board in 1945, Turing had already left the project, moving on to Teddington, and the ACE. Likewise, he had left Teddington by the time the Pilot ACE was tested. At Bletchley, when his colleagues had talked about what they planned to do after the war, he had always said that he intended to resume his fellowship at King’s. On September 30, 1947, he did just that. Officially he was taking a sabbatical—the idea was that at Cambridge he would do theoretical work that he could later apply to the building of the ACE—but in fact both he and Darwin probably knew that he would never return to the NPL.

  According to Mrs. Turing, her son decamped to Cambridge because he was “disappointed with what appeared to him the slow progress made with the construction of the ACE, and convinced that he was wasting time since he was not permitted to go on the engineering side.” It was a relief to find himself back in bookish and tolerant Cambridge, where he could once again work as he liked. Another plus was that Robin Gandy was now at Cambridge, where he had become a member of the Apostles. Once again, Turing was not elected to this society, which had mattered so much to Forster, and of which Forster had written in The Longest Journey. However, he did join the play-reading Ten Club, the Moral Science Club, and the Hare and Hounds Club, under the aegis of which he was able to continue running. He also began a relationship with Neville Johnson, a third-year mathematics student, that would last for several years—again, not a love affair so much as a “friendship with benefits.”

  To some degree, at Cambridge Turing really was able to take up his old life as if the war years had never happened, and in 1948 he published two papers in mathematics journals: “Rounding-off Errors in Matrix Processes” in the Quarterly Journal of Mechanical and Applied Mathematics and “Practical Forms of Type Theory” in Church’s Journal of Symbolic Logic. He also played chess with the economist Arthur Pigou, who recalled that his opponent “was not a particularly good player over the board, but he had good visualizing powers, and on walks together he and an Oxford friend used to play games by simply naming the moves. This, from the point of view of a chess master, is very small beer . . . but for us humble wood-pushers it was impressive.” According to Pigou, Turing “was interested in many other things” besides mathematics “and would gallantly attend lectures on psychology and physiology at an age when most of us were no longer capable of sitting on a hard bench listening to someone else talking.”

  Another important friendship formed during the sabbatical was with Peter Matthews, then in his second year of the natural sciences tripos, with whom Turing discussed the relationship between physiology and mathematics. Turing introduced Matthews “to the similarities between computing engines and brains,” a comparison that Matthews found “very useful.” Appositely, on January 22, 1948, Turing gave a talk to the Moral Science Club on “Problems of Robots.”

  Most of his year at Cambridge, however, Turing devoted to trying to decide what to do with himself once the year was over. One option was to remain at King’s, resume the career of a pure mathematician that the war had interrupted, and hope for a lectureship. Another was to return to the NPL—as he was officially supposed to—and continue working on the ACE. A third (and this was, to him, perhaps the most attractive of the alternatives) was to take a position at Manchester University, where since 1946 Max Newman had been in residence as Fielden Professor of Pure Mathematics. Building on the work he had done on the Colossus, Newman was collaborating with the electrical engineer F. C. Williams to develop a computer to rival the EDSAC. Working with Tom Kilburn, Williams had developed a storage system based on the cathode-ray tube that was proving to be much more efficient, flexible, and reliable than the EDSAC’s mercury delay lines. The Williams-Kilburn tube, as it came to be called, displayed information as dot patterns and also allowed for the first true use of random-access memory in the history of computer design. Williams later recalled (a bit inaccurately, since Turing had at this point not yet joined the project):

  With this store available, the next step was to build a computer around it. Tom Kilburn and I knew nothing about computers, but a lot about circuits. Professor Newman and Mr. A. M. Turing . . . knew a lot about computers and substantially nothing about electronics. They took us by the hand and explained how numbers could live in houses with addresses and how if they did they could be kept track of during a calculation.

  Newman’s plan, as he laid it out in a letter to von Neumann, was for a machine that could take on “mathematical problems of an entirely different kind from those so far tackled by machines . . . , e.g. testing out (say) the 4-colour theorem* or various theorems on lattices, groups, etc. . . .” On a philosophical level the kind of inquiry he had in mind was much more in line with Turing’s interests than the speed-for-speed’s-sake ethos that governed the ENIAC. Nor was money a problem: Newman had a Royal Society grant worth £20,000 to cover the cost of construction, plus £3,000 a year for five years.

  It was becoming increasingly obvious at Manchester that the NPL strategy of maintaining a rigid division between the engineering and the mathematical arms of computer development was destined to prove totally counterproductive; intellectual synergy depended not just on allowing ideas to be shared but on recognizing that the barrier erected by the NPL was completely arbitrary. Nor was this view held only at Manchester. At Cambridge, Wilkes was moving ahead with the EDSAC in an environment likewise marked by collaboration between engineering and mathematics. He also had his own money.

  It seems likely that Wilkes and Turing distrusted each other. Although Wilkes’s laboratory was only minutes from King’s, for months Turing avoided going to visit him there. When he finally did go, all he could say was that Wilkes looked like a beetle. Yet if Turing felt envious of Wilkes, he had every right. Not only did Wilkes have a more secure position—as well as the support of the university—he had the ear of the NPL, where Womersley was becoming increasingly disenchanted with Turing’s maverick and minimalist design. Fearing rightly that the NPL might once again end up being left behind, Womersley now made enquiries as to the progress of the Manchester computer while simultaneously proposing to Darwin that the NPL team use “as much of Wilkes’ development work as is consistent with our own programming system” in overhauling the ACE. Soon enough nearly everything that made the machine unique, and uniquely Turing’s, would be erased from its design, as the ACE was normalized, brought into line with industry standards.

  Not surprisingly, Turing decided to go to Manchester. Newman wanted him there and promised him the chance to do the sort of original research that the culture of the NPL subtly discouraged. More importantly, he would be in on the ground floor of the development of a machine that was actually going to be built—and built in an atmosphere decidedly more sympathetic than the one in Teddington. In May 1948, therefore, Turing resigned from the NPL, irritating Darwin, who felt that Newman had stolen his boy wonder away from him. (It appears not to have occurred to Darwin that he might not have done much to make the boy wonder want to stay.) Before he took up his new post, however, Turing wrote a last report for the NPL. It was entitled “Intelligent Machinery,” and it would prove to be one of the most startling, even subversive, documents in the history of computer science.

  6.

  Like many of Turing’s later papers, “Intelligent Machinery” mixes hard-core technical analysis with passages of philosophical, sometimes whimsical speculation. At the heart of the paper is a discussion of the possibility that “machinery might be made to show intelligent behavior.” Before delving into this discussion, however, Turing gives a list of what he sees as the five most likely objections to it: “an unwillingness to admit the possibility that mankind can have any rivals in intellectual power”; “a religious belief that any attempt to construct such machines is a sort of Promethean irreverence”; “the very limited character of the machinery which has been used until recent times (e.g. up to 1940),” which has “encouraged the belief that machinery was necessarily limited to extremely straightforward, possibly even to repetitive, jobs”; the discovery, by Gödel and Turing, that “any given machine will in some cases be unable to give an answer at all,” while “the human intelligence seems to be able to find methods of ever-increasing power for dealing with such problems ‘transcending’ the methods available to machines”; and, lastly, the idea that “in so far as a machine can show intelligence this is to be regarded as nothing but a reflection of the intelligence of its creator.”

  Turing’s strategy of opening with a summary of the claims of the naysayers foreshadows the gay rights manifestos of the 1950s and 1960s, which often used a rebuttal of traditional arguments against homosexuality as a frame for its defense. He acknowledges from the outset the futility of trying to talk a zealot out of his zealotry, noting that the first two objections, “being purely emotional, do not really need to be refuted. If one feels it necessary to refute them there is little to be said that could hope to prevail, though the actual production of the machines would probably have some effect.” The third objection he dispatches by pointing out that existing machines such as the ENIAC or ACE “can go on through immense numbers (e.g. 10^60,000 about for ACE) of operations without repetition, assuming no breakdown,” while he dispenses with the fourth by reiterating a point made in his lecture before the London Mathematical Society, that infallibility is not necessarily “a requirement for intelligence.” This idea he underscores by means of an anecdote from the life of Gauss:

  It is related that the infant Gauss was asked at school to do the addition 15 + 18 + 21 + . . . + 54 (or something of the kind) and that he immediately wrote down 483, presumably having calculated it as (15 + 54)(54 − 12)/(2 · 3).* One can imagine circumstances where a foolish master told the child that he ought instead to have added 18 to 15 obtaining 33, then added 21, etc. From some points of view this would be a “mistake,” in spite of the obvious intelligence involved. One can also imagine a situation where the children were given a number of additions to do, of which the first 5 were all arithmetic progressions, but the 6th was say 23 + 34 + 45 + . . . + 100 + 112 + 122 + . . . + 199. Gauss might have given the answer to this as if it were an arithmetic progression, not having noticed that the 9th term was 112 instead of 111. This would be a definite mistake, which the less intelligent children would not have been likely to make.
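
  The arithmetic is easy to verify. Below is a minimal check in Python (my own illustration, not anything in the original), comparing Gauss’s shortcut with the term-by-term addition the “foolish master” would have demanded:

```python
# A quick check of the arithmetic in the Gauss anecdote (illustrative only,
# not from Turing or Leavitt). The series 15 + 18 + 21 + ... + 54 is an
# arithmetic progression: first term 15, last term 54, common difference 3.

first, last, step = 15, 54, 3

# Number of terms: (54 - 15) / 3 + 1 = 14
n_terms = (last - first) // step + 1

# Gauss's shortcut: (first + last) * n_terms / 2, which is the same quantity
# as the "(15 + 54)(54 - 12)/(2 * 3)" of the quoted passage.
shortcut = (first + last) * n_terms // 2

# The "foolish master's" way: add the terms one at a time.
plodding = sum(range(first, last + 1, step))

assert shortcut == plodding == 483
print(shortcut)  # 483
```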

  Educability, then, is the principal ingredient of intelligence—which means that in order to be called intelligent, machines must show that they are capable of learning. The fifth objection—“that intelligence in machinery is merely a reflection of that of its creator”—can thus be countered by recognizing its equivalence to “the view that the credit for the discoveries of a pupil should be given to his teacher. In such a case the teacher would be pleased with the success of his methods of education, but would not claim the results themselves unless he had actually communicated them to his pupil.” The student, on the other hand, can be said to be showing intelligence only once he has leaped beyond mere imitation of the teacher and done something that is at once surprising and original, as the infant Gauss did. But what kind of machine would be able to learn in this sense?

  By way of answering that question, Turing first divides machines into categories. A “discrete” machine, by his definition, is one whose states can be described as a discrete set; such a machine works by moving from one state to another. In “continuous” machinery, on the other hand, the states “form a continuous manifold, and the behaviour of the machine is described by a curve on this manifold.” A “controlling” machine “only deals with information,” while an “active” machine is “intended to produce some very definite continuous effect.” A bulldozer is a “continuous active” machine, just as a telephone is a “continuous controlling” one. The ENIAC and the ACE, by contrast, are “discrete controlling,” while a brain is “continuous controlling, but . . . very similar to much discrete machinery.” Though “discrete controlling” machines, moreover, are the most likely to show intelligence, “brains very nearly fall into this class, and there seems every reason to believe that they could have been made to fall genuinely into it without any change in their essential properties.” Such a classification of the brain as a neural machine neatly reverses the popular conception of the computer as an electronic brain, just as Turing’s subtle use of the passive “could have been made” furthers the report’s quiet anti-Christian agenda by recasting God as an inventor or programmer whose failure to make brains “discrete controlling” was more or less accidental. Had God been a little smarter, Turing implies, he would have designed the brain better.*
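
  Turing’s taxonomy amounts to a small two-axis classification, which a brief sketch can make explicit (an illustrative paraphrase in Python, not anything that appears in the report itself):

```python
from enum import Enum

# The two axes of Turing's classification in "Intelligent Machinery":
# how the machine's states vary, and whether it merely handles
# information or produces a physical effect.
class States(Enum):
    DISCRETE = "moves between a discrete set of states"
    CONTINUOUS = "states form a continuous manifold"

class Role(Enum):
    CONTROLLING = "only deals with information"
    ACTIVE = "intended to produce some definite continuous effect"

# His examples, as reported above (the brain is continuous controlling,
# but, he argues, very close to discrete machinery).
examples = {
    "bulldozer": (States.CONTINUOUS, Role.ACTIVE),
    "telephone": (States.CONTINUOUS, Role.CONTROLLING),
    "ENIAC / ACE": (States.DISCRETE, Role.CONTROLLING),
    "brain": (States.CONTINUOUS, Role.CONTROLLING),
}

for machine, (states, role) in examples.items():
    print(f"{machine:12} -> {states.name.lower()} {role.name.lower()}")
```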

  Indeed, at this point in the report, one begins to get the sense that Turing’s ambition is as much to knock mankind off its pedestal as to argue for the intelligence of machines. What seems to irk him, here and elsewhere, is the automatic tendency of the intellectual to grant to the human mind, merely by virtue of its humanness, a kind of supremacy. Even the science of robotics, on which he spoke at Cambridge before the Moral Science Club, comes in for some mockery, thanks to its emphasis on modeling machines on human beings:

  A great positive reason for believing in the possibility of making thinking machinery is the fact that it is possible to make machinery to imitate any small part of a man. That the microphone does this for the ear, and the television camera for the eye are commonplaces. One can also produce remote-controlled robots whose limbs balance the body with the aid of servo-mechanisms. . . . We could produce fairly accurate electrical models to copy the behaviour of nerves, but there seems very little point in doing so. It would be rather like putting a lot of work into cars which walked on legs instead of continuing to use wheels.

  And yet if one were to “take a man as a whole and try to replace all the parts of him by machinery,” what would the result look like? A latter-day Frankenstein monster, to judge from the description and scenario that follows:

  He would include television cameras, microphones, loudspeakers, wheels and “handling servo-mechanisms” as well as some sort of “electronic brain.” . . . The object, if produced by present techniques, would be of immense size, even if the “brain” part were stationary and controlled the body from a distance. In order that the machine should have a chance of finding things out for itself it should be allowed to roam the countryside, and the danger to the ordinary citizen would be serious. Moreover even when the facilities mentioned above were provided, the creature would still have no contact with food, sex, sport and many other things of interest to the human being. Thus although this method is probably the “sure” way of producing a thinking machine it seems to be altogether too slow and impracticable.

  Better, perhaps, to design the sort of machine that would please another machine: a brain without a body, possessed at most of organs allowing it to see, speak, and hear. But what could such a machine do? Turing lists five possible applications. It could play games (chess, bridge, poker, etc.), it could learn languages, it could translate languages, it could encipher and decipher, and it could do mathematics.

  In fact, over the years computers have been shown to be notoriously resistant to learning languages. On the other hand, they can be very good at games, cryptography, and mathematics—the poetry, as it were, of their language. If they are to undertake these efforts of their own volition, however—if they are to play (and win) at tic-tac-toe, generate an unbreakable cipher, or calculate the zeros of the zeta function—they have to be taught. And who is to teach them? What will be the methods by which the “masters” program into them the ability to learn? Turing’s answer to this question (which is really the central question of “Intelligent Machinery”) says as much about his own education as about his tendency to think of the ACE as a child—and a British child, at that:

  The training of the human child depends largely on a system of rewards and punishments, and this suggests that it ought to be possible to carry through the organizing with only two interfering inputs, one for “pleasure” or “reward” (R) and the other for “pain” or “punishment” (P). One can devise a large number of such “pleasure-pain” systems. . . . Pleasure interference has a tendency to fix the character, i.e., towards preventing it changing, whereas pain stimuli tend to disrupt the character, causing features which had become fixed to change, or to become again subject to random variation.

  This rather draconian theory of child rearing suggests the degree to which Turing had internalized the very ethos of “spare the rod, spoil the child” so dominant in England at the time, to a version of which he would in just a few years fall victim. Perhaps he’d picked up its rudiments in the psychology classes he’d sat in on during his sabbatical year at King’s. Or perhaps he was simply recapitulating the educational principles of Sherborne and other English public schools.
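
  The mechanism Turing describes lends itself to a few lines of modern code. The sketch below is a loose, illustrative paraphrase in Python (not Turing’s actual “unorganised machine” design): reward fixes whatever the machine happened to do, while punishment unfixes it and leaves the behaviour open to random variation again.

```python
import random

# Illustrative paraphrase of the "pleasure-pain" training described above
# (not Turing's actual design): pleasure interference tends to fix the
# character; pain stimuli disrupt it, so that fixed features become
# subject to random variation once more.

ACTIONS = ["a", "b", "c"]

class PleasurePainMachine:
    def __init__(self, situations):
        # Each situation starts with a tentative, unfixed choice of action.
        self.choice = {s: random.choice(ACTIONS) for s in situations}
        self.fixed = {s: False for s in situations}

    def act(self, situation):
        # Unfixed behaviour wanders; fixed behaviour is stable.
        if not self.fixed[situation]:
            self.choice[situation] = random.choice(ACTIONS)
        return self.choice[situation]

    def reward(self, situation):
        # Pleasure input (R): fix the behaviour that was just shown.
        self.fixed[situation] = True

    def punish(self, situation):
        # Pain input (P): unfix it and re-randomize.
        self.fixed[situation] = False
        self.choice[situation] = random.choice(ACTIONS)

# A "master" who wants the machine to answer "b" in one situation:
machine = PleasurePainMachine(["greeting"])
for _ in range(50):
    if machine.act("greeting") == "b":
        machine.reward("greeting")
    else:
        machine.punish("greeting")

print(machine.choice["greeting"])  # almost certainly "b" after training
```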

  If the untrained infant’s mind is to become an intelligent one, it must acquire both discipline and initiative. So far we have been considering only discipline. . . . But discipline is certainly not enough in itself to produce intelligence. That which is required in addition we call initiative. This statement will have to serve as a definition. Our task is to discover the nature of this residue as it occurs in man, and to try and copy it in machines.

 
