
The Man Who Knew Too Much: Alan Turing and the Invention of the Computer (Great Discoveries)


by David Leavitt


  It was also taking him away from Bletchley. Not terribly far away—only about ten miles north—was another country estate, Hanslope Park. The house dated from the late eighteenth century and since 1941 had served as the base for the secret service’s “Special Communications Unit No. 3.” Bletchley was becoming overcrowded, and given its proximity, it seemed natural that Turing should set up shop at Hanslope, where, along with his young assistants Robin Gandy (who later wrote extensively about Turing) and Donald Bayley, he went to work on the speech encipherment project, offering, in typical Bletchley fashion, a prize to whoever among his colleagues could come up with the best name for it. Gandy won with Delilah, a reference to the biblical temptress who deceived Samson.

  The atmosphere at Hanslope was much more formal than that at Bletchley. For one thing, there was a much more visible military presence. Then too, the powers that be did not accord Turing any special privileges. Instead, he was given space in a large hut in which a wide variety of research programs was being carried out. Once again, his own contribution to Delilah put less emphasis on the hardware than on the establishment of a sound theoretical foundation for the system; it was as if, at every opportunity, he was determined to prove Wittgenstein wrong about the falling bridge, by making sure each step was shored up by logic. Most of the nuts-and-bolts work—literally—he left to Bayley, a youth from the Midlands who had recently graduated from Birmingham University with a degree in electrical engineering; what interested Turing was the theory behind the machine, which he was determined to make as impressive for its simplicity as for its invulnerability. Not for the first time he saw the advantages of applying the aesthetic standards that Hardy had set forth for mathematical proof to the much less rarefied business of building things.

  At last Turing moved out of the Crown Inn, taking lodgings first at the officers’ mess at Hanslope and then at a cottage in the kitchen garden of the estate, which he shared with Gandy and a cat called Timothy. He took to running long distances, read a lot of Trollope and Jane Austen, and went to parties at the mess—the first time he had had anything like a normal social life since his stay in Princeton. With the end of the war in sight, it was no longer necessary to practice the sort of social austerity to which Turing had become acclimated at Bletchley. Once again, people were allowed to have personal needs. He had never been reticent about his homosexuality. Indeed, even at a time when thousands of Englishmen led outwardly heterosexual lives while secretly engaging in “acts of gross indecency,” Turing had displayed a remarkable degree of self-confidence and comfort in his own sexual identity. That he saw his sexuality as part of his identity in the first place put him at odds with the prevalent thinking of his age and reflected, no doubt, the years that he had spent in the privileged corridors of King’s College.

  Not that he was in any way a zealot. Indeed, in contrast to that of Edward Carpenter, the mathematician turned philosopher whose ideal of male comradeship had inspired Forster to write Maurice, Turing’s openness owed less to any conscious decision process than to an allergy to dishonesty that was an outgrowth of his notorious literal-mindedness. Put simply, Turing could keep a secret only when he thought there was a good reason to do so. In the case of Enigma, there was obviously a host of good reasons not to tell anyone about the job he’d performed. So far as his homosexuality was concerned, he saw no point in dissembling. (Lyn Newman recalled that Turing “found the idea of deceiving others so distasteful that he supposed it equally so to almost everyone.”)

  So he told people. He told Joan Clarke. He even told Don Bayley, his assistant at Hanslope. As Hodges describes it, there was nothing somber or earnest about the conversation. He did not sit Bayley down and announce that he had something grave or even consequential to impart. Rather, he just let the news slip casually, while they were working. Bayley’s reaction—a frank revulsion perfectly in keeping with his Midlands upbringing—took Turing completely aback.

  According to Hodges’ account of the incident, what appalled Don Bayley was not merely the fact of Turing’s homosexuality, which could be seen as part and parcel of his general eccentricity; it was that Turing “seemed to think it perfectly natural and almost to be proud of it.” That he refused to demonize himself, however, did not mean that other people wouldn’t demonize him: this was what he appears to have failed, or perhaps even refused, to understand. Forster, less credulous and, generally speaking, more pessimistic, feared that Maurice might provoke a similar backlash, and therefore chose not to publish the novel during his lifetime. “If it had ended unhappily, with a lad dangling from a noose or with a suicide pact, all would be well,” he wrote in a 1960 terminal note to the novel. Carpenter, he added later in the same note, “had hoped for the generous recognition of an emotion and for the reintegration of something primitive into the common stock. And I, though less optimistic, had supposed that knowledge would bring understanding. We had not realized that what the public really loathes in homosexuality is not the thing itself but having to think about it.” It was presumably “having to think about it” that was so upsetting for Bayley.

  The problem, in part, was loneliness. Despite his ease with his homosexuality, which did indeed verge on pride, Turing had never had a really fulfilling relationship with another man. Instead, his erotic life so far had consisted of bouts of unrequited longing, usually for heterosexual men who had no interest in him, alternating with occasional “friendships with benefits” with other gay men in whom he had a minimal sexual interest, and with whom he was far from in love. These friendships, by their very nature, were compromises. Better than nothing they might be, but they paled in comparison with the unfulfilled ideal that was Christopher Morcom. It seems not improbable that when Turing let slip the fact of his homosexuality “accidentally,” especially to a young man like Bayley, he was hoping against hope that the admission might provoke an expression of reciprocal desire. That rarely happened. Later he told Robin Gandy, “Sometimes you’re sitting talking to someone and you know that in three quarters of an hour you will either be having a marvellous night or you will be kicked out of the room.” With Bayley, he was very firmly kicked out of the room; indeed, he considered himself lucky that Bayley agreed to continue working with him at all.

  By the spring of 1945 the Delilah was operational. It came too late to be of any practical use in the war—which was perhaps part of the reason why the Post Office showed so little enthusiasm for it. (Another reason was that the output was crackly.) Soon enough the bugs had been dealt with, but by then, as was typical for Turing, he had lost interest in the project. For another idea had seized him—or perhaps it would be more accurate to say that it had reseized him. Delilah was a single-purpose machine; the bombe was something even less general, a machine built for the specific purpose of defeating another machine. Both of them were rather like the friendships that in Turing’s life had served as stand-ins for the love affairs he had never known. Now he wanted the real thing: to build a machine that was not just universal but to which, as Claude Shannon had speculated, one might read a poem or play a piece of music: a machine that could actually be said to think.

  2.

  In June 1945 Turing accepted a post as temporary senior scientific officer at the National Physical Laboratory in Teddington, a suburb southwest of London abutting Bushy Park, at a salary of £800 per annum. Since 1938 the laboratory had been under the directorship of Sir Charles Galton Darwin (1887–1962), the grandson of the father of evolutionary theory and himself an applied mathematician from Cambridge whose field of expertise was x-ray crystallography. Darwin’s initiatives at the laboratory included the institution of a new mathematics division, the superintendent of which, J. R. Womersley, had been given the mandate to start a program of research into “the possible adaptation of automatic telephone equipment to scientific computing” as well as the development of an “electronic counting device suitable for rapid computing.”

  Part of what motivated Darwin and Womersley was a fear that the United States had pulled ahead of Britain in computer research. That very year, at the University of Pennsylvania’s Moore School of Engineering, a computer called the ENIAC (short for electronic numerical integrator and calculator) was being put into operation. The brainchild of John Mauchly and J. Presper Eckert, whose later lives would be marred by legal battles over its patent, the ENIAC employed 17,468 vacuum tubes (as opposed to the Colossus’ 1,500) as well as 70,000 resistors, 10,000 capacitors, and 5 million soldered joints. Speed was its principal objective, as the patent application made clear:

  With the advent of everyday use of elaborate calculations, speed has become paramount to such a high degree that there is no machine on the market today capable of satisfying the full demand of modern computational methods. The most advanced machines have greatly reduced the time required for arriving at solutions to problems which might have required months or days by older procedures. This advance, however, is not adequate for many problems encountered in modern scientific work and the present invention is intended to reduce to seconds such lengthy computations. . . .

  In other words, the ENIAC was intended to be mainly a very fast number cruncher—not the sort of machine, one assumes, that would respond with much enthusiasm to a sonnet or a sonata. Moreover, it departed radically from Turing’s ideal of a universal machine in that it was pretty much all hardware, which meant that in order to change the programming one had literally to open the machine and reattach its thousands of switches and cable connections. To borrow a phrase of Turing’s, making alterations to the ENIAC necessitated “screwdriver interference” rather than “paper interference.” Turing, on the other hand, envisioned a machine whose hardware would be as streamlined as possible and which one could adapt to different purposes simply by changing its instruction tables.
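
  The distinction is easier to see with a toy sketch in modern terms (this is purely illustrative and not drawn from Turing’s or von Neumann’s actual designs; the operation names are invented). A stored-program machine keeps one fixed, very simple piece of “hardware,” an interpreter, and changing its task means swapping the table of instructions fed to it, Turing’s “paper interference,” rather than rewiring anything:

# Illustrative sketch only: a toy interpreter whose behaviour is fixed,
# while the instruction table it executes is ordinary, replaceable data.
# The operation names (LOAD, ADD, PRINT, HALT) are invented for this example.
def run(table, data):
    """Carry out a list of (operation, argument) pairs against a single register."""
    register = 0
    for op, arg in table:
        if op == "LOAD":
            register = data[arg]       # copy an input value into the register
        elif op == "ADD":
            register += data[arg]      # add an input value to the register
        elif op == "PRINT":
            print(register)            # report the current result
        elif op == "HALT":
            break                      # stop working through the table
    return register

# "Paper interference": to change the machine's job, change only the table it reads.
sum_table = [("LOAD", 0), ("ADD", 1), ("PRINT", None), ("HALT", None)]
run(sum_table, [2, 3])                 # prints 5

  Retargeting the ENIAC, by contrast, was roughly the equivalent of rewriting the interpreter itself, by hand, with a screwdriver.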

  It was at this point that John von Neumann reentered the picture. Like Turing, von Neumann (or Johnny, as his friends called him) had spent the war years as a consultant to the military, particularly on the development of the atomic bomb. He had also served as a member of the Scientific Advisory Committee to the Ballistic Research Laboratories at the Aberdeen Proving Ground in Maryland. All this was work that required large-scale computation of just the sort for which the ENIAC (on which von Neumann consulted) was designed. At the same time, von Neumann recognized the limitations of the ENIAC; with his background in logic, he envisioned a machine less dependent on engineering, more flexible in terms of programming, and—perhaps most crucially—possessing an enormous memory. The machine was to be called the EDVAC (electronic discrete variable automatic computer), and on June 30, 1945, a proposal for its design was delivered to the U.S. Army Ordnance Department under von Neumann’s name.

  It is not too much of a stretch to say that Turing’s fingerprints are all over the report. For instance, of the memory, von Neumann writes,

  While it appeared that various parts of the memory have to perform functions that differ somewhat in their nature and considerably in their purpose, it is nevertheless tempting to treat the entire memory as one organ, and to have its parts even as interchangeable as possible for the various functions enumerated above.

  As Hodges observes, von Neumann’s “one organ” is pretty much equivalent to Turing’s “tape”; indeed, this idea that he finds “tempting” lies at the very heart of “Computable Numbers.” Likewise, the EDVAC had a stored program, as opposed to the ENIAC’s cable-based programming system. Yet the EDVAC report contains not a single mention of Turing’s name. If, as von Neumann claimed, he had indeed never read another paper in logic after his unhappy encounter with Gödel in Königsberg, then his reiteration of many of Turing’s key points not only in the report on the EDVAC but in several articles from the period was yet another remarkable example of two mathematicians making the same discovery years apart.

  Between von Neumann and Alonzo Church, it appeared that Turing had made so little of an impression at Princeton that he might as well never have been there. Yet if Turing resented von Neumann’s apparent wholesale appropriation of his ideas, he said nothing about it, at least publicly. Instead, he focused on the differences and, having settled in at Teddington, went to work on a proposal of his own. This was for a computer that would be called the ACE, short for “automatic computing engine.” (The use of the word “engine” might have been meant as an allusion to Babbage’s analytical engine.) Turing’s report on the ACE, completed in 1945 and meant to be read, according to Turing, in conjunction with von Neumann’s on the EDVAC, is much more densely detailed than von Neumann’s, with logical circuit diagrams and a cost estimate (£11,200). It also posited a machine that was in many ways more radical, and certainly more minimalist, than the EDVAC—not to mention most of the computers in operation today.

  What made the ACE unique, in Turing’s words, was its capacity to “tackle whole problems. Instead of repeatedly using human labour for taking material out of the machine and putting it back in at the appropriate moment all this will be looked after by the machine itself.” Self-sufficiency, however, was only one of several facets of the machine’s character that distinguished it from predecessors such as the ENIAC.* It was also to be much less dependent on hardware:

  There will positively be no internal alterations to be made even if we wish suddenly to switch from calculating the energy levels of the neon atom to the enumeration of groups of order 720. It may appear somewhat puzzling that this can be done. How can one expect a machine to do all this multitudinous variety of things? The answer is that we should consider the machine as doing something quite simple, namely carrying out orders given to it in a standard form which it is able to understand.

  The machine, in other words, would be capable not only of “looking after” itself but of “understanding” instructions. Already Turing’s language bestowed personhood upon it. This personhood was not meant to be taken merely as a metaphor or even, as Keynes might have put it, a “state of mind”: rather, it owed entirely to the ACE’s independence from “screwdriver interference,” from the fact that instructions fed in from outside were what gave it identity. Indeed, in a lecture that he delivered on the ACE to the London Mathematical Society on February 20, 1947, Turing went so far as to suggest that, much as a child matures in response to social stimuli and education, such a machine might be capable of growth: “Possibly it might still be getting results of the type desired when the machine was first set up, but in a much more efficient manner. . . . It would be like a pupil who had learnt much from his master, but had added much more by his own work. When this happens I feel that one is obliged to regard the machine as showing intelligence.”

  The ACE could in theory “learn by experience”—but only if certain technical requirements were met. First, its memory, if not “infinite,” would need “to be very large.” Another “desirable feature” would be that “it should be possible to record into the memory from within the computing machine, and this should be possible whether or not the storage already contains something, i.e. the storage should be erasible [sic].” But what form should such storage take? Alluding to “Computable Numbers,” Turing rejected his old idea of an “infinite tape” on the grounds that too much time would have to be spent “in shifting up and down the tape to reach the point at which a particular piece of information required at the moment is stored. Thus a problem might easily need a storage of three million entries, and if each entry was equally likely to be the next required the average journey up the tape would be through a million entries, and this would be intolerable.” What was needed was a “form of memory with which any required entry can be reached at short notice.” To have a “really fast machine,” Turing concluded, “we must have our information, or at any rate a part of it, in a more accessible form than can be obtained with books. It seems that this can only be done at the expense of compactness and economy, e.g. by cutting the pages out of the books, and putting each one into a separate reading mechanism. Some of the methods of storage which are being developed at the present time are not unlike this.” Wittgenstein, of course, had begun his first lecture on the philosophy of mathematics by imagining a scenario in which Turing, asked to point out a Greek sigma in a book, “cuts out the sign [Wittgenstein] showed him and puts it in the book.” To be sure, the philosopher added, such “misunderstandings only immensely rarely arise—although my words might have been taken either way.” Now Turing was imagining pages cut out of books and then inserted, in a similar fashion, into “separate reading mechanisms.” It was as if he were determined to challenge Wittgenstein, once again, by envisioning a situation in which logic demanded the very kind of literal-mindedness Wittgenstein had mocked.
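
  Turing’s figure of “a million entries” follows from a simple expected-distance estimate (a rough reconstruction of the arithmetic, which the quoted passage does not spell out): if the head’s current position and the next entry wanted are each about equally likely to fall anywhere among the N = 3,000,000 stored entries, the average distance between two such positions comes to roughly

\[ \mathbb{E}\,|X - Y| \approx \frac{N}{3} = \frac{3{,}000{,}000}{3} = 1{,}000{,}000 \text{ entries}, \]

which is the million-entry “journey” Turing called intolerable, and which a memory offering access “at short notice” was meant to eliminate.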

  For Turing was, as ever, literal-minded—so much so that he built a certain literal-mindedness into his design for the ACE: “The machine interprets whatever it is told in a quite definite manner without any sense of humour or sense of proportion. Unless in communicating with it one says exactly what one means, trouble is bound to result.” He might have been writing about some of his own more tortured efforts to determine whether another man would be amenable to a caress or a kiss, whether a conversation was going to lead to a “marvellous night” or to being “kicked out of the room.” Trouble resulted when channels got crossed—an idea Turing would explore in a later essay. For now his main goal was to ask that his machines be given a fair chance, that they not be faulted simply because they were machines. To illustrate this point, he referred to his solution to the Entscheidungsproblem, noting that a machine developed to distinguish provable formulae from unprovable ones would sometimes, by necessity, fail to provide an answer. By contrast, a mathematician, upon being given such a problem to solve, “would search around and find new methods of proof, so that he ought eventually to be able to reach a decision about any given formula.” Against such an argument, Turing wrote,

 
