The Man Who Knew Too Much: Alan Turing and the Invention of the Computer (Great Discoveries)

by David Leavitt


  And the paper really was something new. When they were first performed, Chopin’s compositions perplexed and even outraged his listeners. Monet’s paintings horrified the Parisians of his epoch. “Computable Numbers” elicited no reactions so violent, but it was ahead of its time, and in a sense the silence that greeted it reflected the inability of the age in which Turing was working to register the implications of what he had done. In 1936, after all, Post’s factory analogy represented a far more recognizable symbolic language than Turing’s a-machine. As for Church, though his lambda calculus boasted a certain elegance, as well as the virtue of self-containment, it displayed none of the boldness, the imaginative vigor, that made Turing’s formulation so compelling and so memorable.

  For his part, Church was more than generous to his putative student. Reviewing “Computable Numbers” in the Journal of Symbolic Logic, he wrote,

  As a matter of fact, there is involved here the equivalence of three different notions: computability by a Turing machine, general recursiveness in the sense of Herbrand-Gödel-Kleene, and λ-definability in the sense of Kleene and the present reviewer. Of these, the first has the advantage of making the identification with effectiveness in the ordinary (not explicitly defined) sense evident immediately—i.e. without the necessity of proving preliminary theorems. The second and third have the advantage of suitability for embodiment in a system of symbolic logic.

  Notably, Church’s review also marked the first use of the term “Turing machine”—evidence even this early of the degree to which the machine, above and beyond the problem it had been invented to solve, was taking on a life of its own. Such a machine was appealing to visualize, tirelessly shifting the tape back and forth between its jaws. To put the matter in theatrical terms, the machine was starting to steal the show from all the other characters—from Hilbert, from Church, even from its inventor.

  In the meantime, Turing worked under Church on a paper intended to lay out clearly the equivalence of Turing computability and λ-definability. It was published in the fall of 1937 in the Journal of Symbolic Logic. In the circle of those who cared about such things, some discussion went on as to which system would in the end prove to be the most useful. Kleene, for one, felt that it was “actually easier to work with general recursive functions than with complicated Turing-machine tables,” while Post criticized some of the details of Turing’s paper. None of his colleagues went so far as to deny the importance of Turing’s work. And yet none of them embraced it, either.

  It was clear that at least for the time being—and at least at Princeton—the Turing machine would have to take a back seat to the other approaches. The kingdom might be tiny, but Church ruled it, and Turing knew enough not to contemplate regicide. Instead, he worked away on his Ph.D. thesis, which Church directed, and wrote two papers on group theory, one of them in response to a problem passed on by von Neumann. (It seemed that von Neumann was perfectly willing to give Turing encouragement so long as the work in question had nothing to do with logic.) He also gave a lecture on computability before the Mathematics Club, but the attendance was small. “One should have a reputation if one hopes to be listened to,” he wrote to his mother. “The week following my lecture G. D. Birkhoff* came down. He has a very good reputation and the room was packed. But his lecture wasn’t up to the standard at all. In fact everyone was just laughing about it afterwards.” It seemed that Turing was rapidly learning a painful lesson: too often, reputation trumps talent. What remained to be seen was whether he would develop the schmoozing skills already practiced by his friend Maurice Pryce.

  5.

  Social awkwardness aside, Turing’s abilities did not go unnoticed at Princeton. In particular he received encouragement from Luther Eisenhart, dean of the Department of Mathematics, and his wife, Katherine. As he wrote to his mother in February 1937,

  I went to the Eisenharts regular Sunday tea yesterday and they took me in relays to try and persuade me to stay another year. Mrs. Eisenhart mostly put forward social or semi-moral semi-sociological reasons why it would be a good thing to have a second year. The Dean weighed in with hints that the Procter Fellowship was mine for the asking (this is worth $2000 p.a.) I said I thought King’s would probably prefer that I return, but gave some vague promise that I would sound them on the matter.

  He did sound King’s out on the matter—to no avail. The college failed to come through with a lectureship (Maurice Pryce won one instead), and in the end—rather reluctantly, since he was becoming homesick—Turing agreed to stay the extra year. Von Neumann wrote him a letter of recommendation for the Procter—notably, he emphasized Turing’s work in the theories of almost periodic functions and continuous groups, but said nothing about “Computable Numbers”—and this cinched the deal. As Turing wrote to his mother, he would now be “a rich man.” He nonetheless decided to return to England at least for the summer, and on June 23—his twenty-fifth birthday—he sailed out of New York.

  Back in Cambridge, Turing kept himself busy correcting some errors in “Computable Numbers,” working on his Ph.D. thesis, and completing the paper he was writing under Church establishing the equivalency of his own notion of computability, λ-definability, and Gödel’s concept (taken up by Kleene) of “general recursiveness.” Perhaps the most interesting work he undertook, however, involved a mathematical problem that was distinctly remote from the rarefied world of logic. This was the Riemann hypothesis, at that time an issue of pressing concern among number theorists. As of this writing, the Riemann hypothesis remains unsolved; indeed, it is considered the most important unsolved problem in mathematics.

  The Riemann hypothesis concerns the distribution of the prime numbers. Thanks to Euclid, we know that there are infinitely many primes. But is there a pattern to the sequence in which they crop up? At first the distribution appears to be arbitrary:

  1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25 (of these first twenty-five integers, the primes are 2, 3, 5, 7, 11, 13, 17, 19, and 23, and they fall at irregular intervals)
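
  A reader who wants to see the irregularity for themselves can check it with a few lines of Python; the sketch below is purely illustrative and uses nothing more than trial division.

  def is_prime(n):
      # trial division is plenty for such a small range
      return n > 1 and all(n % d for d in range(2, int(n ** 0.5) + 1))

  primes = [n for n in range(1, 26) if is_prime(n)]
  gaps = [b - a for a, b in zip(primes, primes[1:])]
  print(primes)  # [2, 3, 5, 7, 11, 13, 17, 19, 23]
  print(gaps)    # [1, 2, 2, 4, 2, 4, 2, 4] -- no obvious rule to the spacing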

  Moreover, as you climb up the number line, there seem to be fewer and fewer primes.

  In fact, though, there is some pattern to the primes, as Carl Friedrich Gauss (1777–1855) discovered around 1793, when he was fifteen. Gauss recognized a correlation between the number of primes up to a given number n and the natural logarithm of that number. As a result he was able to come up with what would later be called the prime number theorem, a formula for estimating the number of primes up to a given number—or something close to it: inevitably the formula overestimated, by a small margin, the true number of primes. The next step was to find a way of eliminating the “error term” in the prime number theorem—that is to say, to find a formula by means of which the exact number of primes up to even some inconceivably huge n could be calculated.
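
  Gauss’s observation is easy to reproduce today. The Python sketch below is a rough modern illustration, not anything from the period: it counts the primes up to n with a simple sieve and compares the count with a crude numerical value of Gauss’s logarithmic estimate, which lands consistently a little above the true count.

  from math import log

  def sieve_upto(limit):
      # sieve of Eratosthenes: entry n is 1 if n is prime, else 0
      flags = bytearray([1]) * (limit + 1)
      flags[0] = flags[1] = 0
      for p in range(2, int(limit ** 0.5) + 1):
          if flags[p]:
              flags[p * p :: p] = bytearray(len(range(p * p, limit + 1, p)))
      return flags

  def gauss_estimate(x, steps=200_000):
      # rough numerical integral of dt / ln t from 2 to x
      h = (x - 2) / steps
      return sum(h / log(2 + (i + 0.5) * h) for i in range(steps))

  flags = sieve_upto(1_000_000)
  for n in (1_000, 10_000, 100_000, 1_000_000):
      print(n, sum(flags[: n + 1]), round(gauss_estimate(n), 1))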

  This was where the German mathematician Bernhard Riemann (1826–1866) came in. Riemann was working with the so-called zeta function, denoted by the Greek letter ζ and calculated according to the formula

  ζ(s) = 1/1^s + 1/2^s + 1/3^s + 1/4^s + · · ·

  By feeding complex numbers* into the zeta function, Riemann discovered, he could eliminate the error term in the prime number theorem. Through some pretty difficult mathematics, he was able to hypothesize that whenever the zeta function, upon being fed with a complex x, took the value of 0, the real part of the complex x would be ½. This meant that on a graph of the function, all the zeros would be positioned along the so-called critical line of ½. The elaborate mathematical link that Riemann mapped between the zeros of the zeta function and the prime numbers meant that if his hypothesis was true, the error term in the prime number theorem could be eliminated.
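
  Those zeros can be inspected numerically. The sketch below assumes the third-party mpmath library, whose zeta function handles the continuation of the series to complex arguments and whose zetazero routine returns the nth zero above the real axis; every zero it returns sits on the critical line, with real part ½, just as the hypothesis predicts.

  from mpmath import zeta, zetazero  # assumes mpmath is installed

  for n in range(1, 6):
      rho = zetazero(n)                   # nth zero, e.g. 0.5 + 14.1347...i
      print(n, rho.real, abs(zeta(rho)))  # real part 1/2, and zeta(rho) is ~0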

  But was the Riemann hypothesis true? Could it be proven? Disproof would be simple, a matter of finding just one zero off the critical line. Unfortunately, testing the zeros was a technically challenging business, requiring the use of highly complex mathematics. A formula developed by G. H. Hardy and J. E. Littlewood, for instance, allowed them to confirm by the late 1920s that the first 138 zeros lay on the critical line, but turned out to be infeasible for the calculation of zeros beyond that number. Likewise a project undertaken in 1935 by the Oxford mathematician E. C. “Ted” Titchmarsh (1899–1963) to map the zeros using punch-card machines intended for the calculation of celestial motions established only that the first 1,041 zeros lay along the critical line. Only one contrary example would have been needed to disprove the hypothesis. A proof, on the other hand, would have to be theoretical, eliminating entirely the possibility of there being even a single zero off the line, up to infinity. (Of course, a third possibility was that the hypothesis would turn out to be true yet unprovable in a Gödelian sense. So far, however, no one had been able to prove its unprovability, either.) That Riemann himself was rumored to have had a proof of the hypothesis—burned, along with other important papers, by an overzealous housekeeper after his death—only added to the sense of urgency and frustration surrounding the problem. Opinions on the validity of the hypothesis varied wildly; even Hardy, who devoted much of his career to wrestling with it, fluctuated in his views. In 1937, the year he was living in Princeton, he was apparently feeling some pessimism as to the truth of the hypothesis—a pessimism he may well have passed on to Turing.

  Turing’s work that summer involved a particularly arcane aspect of the Riemann hypothesis. Although application of Gauss’s prime number theorem suggested that the formula would always overestimate the number of primes up to n, in 1933 the Cambridge mathematician Stanley Skewes had shown that at some point before 10^(10^(10^34)) a crossover occurred, and the formula began to underestimate the number of primes. Hardy made the observation that this bound, soon known as the Skewes number, was probably the largest number ever to serve any “definite purpose” in mathematics, and tried to indicate its enormousness by noting,

  The number of protons in the universe is about 10^80. The number of possible games of chess is much larger, perhaps 10^(10^50). If the universe were the chessboard, the protons the chessmen, and any interchange in the position of two protons a move, then the number of possible games would be something like the Skewes number.
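
  The crossover itself lies far beyond anything that can be computed directly, but the stubbornness of the overestimate is easy to observe. The sketch below, which assumes the third-party sympy and mpmath libraries, shows the logarithmic-integral estimate running above the true prime count at every power of ten it can reach; Skewes’s result says the sign must nevertheless flip somewhere below his colossal bound, which is why no amount of routine searching could settle the matter.

  from sympy import primepi   # exact prime-counting function (assumes sympy)
  from mpmath import li       # logarithmic integral (assumes mpmath)

  for k in range(2, 8):
      x = 10 ** k
      overshoot = li(x) - li(2) - int(primepi(x))  # Gauss's estimate minus the true count
      print(x, int(overshoot))                     # positive at every reachable x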

  Turing’s ambition was either to lower the bound established by Skewes or to disprove the Riemann hypothesis altogether. But though he wrote up a draft of a paper, he never published his results. After his death the paper circulated, and a number of errors were discovered, all of which were corrected by A. M. Cohen and M. J. E. Mayhew in 1968. Using Turing’s methods, they were able to lower the bound established by Skewes to 10^(10^529.7). However, it then turned out that in 1966 R. S. Lehman had reduced the bound to 1.65 × 10^1165 by another method. Even in death, it seemed, Turing was fated to be beaten to the punch.

  6.

  When Turing returned to Princeton in the fall of 1937, Hardy was no longer in residence; he had gone back to Cambridge, where he was beginning work on his memoir, A Mathematician’s Apology, which would be published in 1940. All friendliness aside, he and Turing had never really gotten on, as in Hodges’ words, “a generation and multiple layers of reserve” divided them. Hardy was also a far more orthodox mathematician than Turing. His work fell very much within the purview of pure mathematics, which was why he could argue, in A Mathematician’s Apology, that

  the “real” mathematics of the “real” mathematicians, the mathematics of Fermat and Euler and Gauss and Abel and Riemann, is almost wholly “useless” (and this is as true of “applied” as of “pure” mathematics). It is not possible to justify the life of any genuine professional mathematician on the ground of the “utility” of his work.

  Hardy saw mathematics as fundamentally innocent, even neutral—a sort of Switzerland of the sciences—and this meant that as late as 1940 he was still taking “comfort” in the knowledge that “no one has yet discovered any warlike purpose to be served by the theory of numbers or relativity, and it seems very unlikely that anyone will do so for many years.” As a pure mathematician, he wanted to affirm his faith that his discipline would remain forever “gentle and clean,” even as in Germany the Third Reich was adapting a typewriter-sized encipherment machine called the Enigma—in commercial production since the early 1920s—for military use, and in America the theory of relativity was being applied to the production of an atomic bomb. The simple construction of an apparently unbreakable code, and a machine to transmit it, would be the first of many tasks for which mathematicians would find themselves drafted into the war effort; far from being allowed to pursue their “gentle and clean” science, they would now have to put their skills at the disposal of rival military machines, the one bent on world domination and the other on stopping it. Indeed, Hardy was so utterly wrong in his affirmation that one cannot help wondering how much of what he wrote was wishful thinking. For soon the fate of Europe would be resting, to a great extent, on the shoulders of a small group of scientists. Their decisions would save lives and cost lives. Alan Turing was to be one of them.

  For most of his twenty-five years, he had been trying to reconcile his passion for mathematical logic with his impulse to build things. As a homosexual, he was used to leading a double life; now the closeted engineer in him began to clamor for attention, and not long after arriving at Princeton, he wrote to his mother,

  You have often asked me about possible applications of various branches of mathematics. I have just discovered a possible application of the kind of thing I am working on at present. It answers the question “What is the most general kind of code or cipher possible,” and at the same time (rather naturally) enables me to construct a lot of particular and interesting codes. One of them is pretty well impossible to decode without the key, and very quick to encode. I expect I could sell them to H. M. Government for quite a substantial sum, but I am rather doubtful about the morality of such things. What do you think?

  Questions of morality aside, the project intrigued Turing so much that once he was settled again in Princeton, he undertook a plan to build, at the machine shop overseen by the physics department, an electronic multiplier employing relay-operated switches specifically for the purpose of encipherment. The use of such switches, in 1937, was in and of itself something of a novelty; so was the use of binary numbers, which in turn allowed for the employment of a two-valued Boolean algebra, with TRUE and FALSE represented by 1 and 0, and those two digits associated, in turn, with the on and off positions of the switches. In this regard Turing was replicating the research of a contemporary, Claude Shannon (1916–2001), whose 1937 MIT master’s thesis, A Symbolic Analysis of Relay and Switching Circuits, would be the first published work to describe the implementation of Boolean algebra in a switch-based machine.
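
  The observation at the heart of Shannon’s thesis is simple enough to sketch in a few lines of modern Python; the code below illustrates the idea rather than either man’s actual circuitry. Two switches wired in series conduct only when both are closed, which behaves like Boolean AND, while two switches wired in parallel conduct when either is closed, which behaves like Boolean OR.

  ON, OFF = 1, 0   # closed and open switch positions

  def series(a, b):
      # contacts in series: current flows only if both are closed (AND)
      return a & b

  def parallel(a, b):
      # contacts in parallel: current flows if either is closed (OR)
      return a | b

  def normally_closed(a):
      # a relay contact that opens when energized behaves as NOT
      return 1 - a

  for a in (OFF, ON):
      for b in (OFF, ON):
          print(a, b, "series:", series(a, b), "parallel:", parallel(a, b))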

  As Turing explained to the physicist Malcolm MacPhail, the idea behind the encipherment machine was to “multiply the number corresponding to a specific message by a horrendously long but secret number and transmit the product. The length of the secret number was to be determined by the requirement that it should take 100 Germans working eight hours a day on desk calculators 100 years to discover the secret factor by routine search!” According to MacPhail, Turing actually built the first three or four stages of this project. He didn’t know, of course, that in just a few years, the Second World War would erupt, and he would be sent by his government to Bletchley Park, where he would undertake the design of a much more sophisticated decipherment machine. Already, however, he was on the road to what would become the preoccupying focus of the next dozen years for him: the application of mathematical logic to the building of machines.
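
  MacPhail’s description amounts to the scheme sketched below, written in modern Python with invented names. It is only a toy illustration of “multiply the message by a long secret number and transmit the product,” not a reconstruction of Turing’s design, and such a cipher would not survive modern cryptanalysis; but it shows why, without the secret factor, an interceptor is left with nothing but a brute search.

  import secrets

  SECRET = secrets.randbits(512) | 1          # a long secret multiplier (hypothetical key)

  def encode(message: str) -> int:
      m = int.from_bytes(message.encode(), "big")   # treat the text as one large number
      return m * SECRET                             # transmit the product

  def decode(product: int) -> str:
      m, remainder = divmod(product, SECRET)        # divide out the secret factor
      assert remainder == 0
      return m.to_bytes((m.bit_length() + 7) // 8, "big").decode()

  ciphertext = encode("HELLO FROM PRINCETON")
  print(decode(ciphertext))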

  Although the electronic multiplier, which Turing never finished, was his principal spare-time project for the academic year 1937–38, he had not forgotten the Riemann hypothesis. Titchmarsh’s punch-card machine had succeeded in establishing only that the first 1,041 zeros of the zeta function were on the critical line of ½. Now Turing proposed building a machine based on one in Liverpool that carried out the calculations necessary to predict tidal motions. He had recently learned about a powerful method for calculating the zeros that Riemann himself had come up with, and that the mathematician Carl Ludwig Siegel (1896–1981) had rediscovered among the papers that Riemann’s wife, Elise, had managed to rescue from his pyromaniacal housekeeper. Like Titchmarsh before him, Turing had noticed a parallel between Riemann’s formula and those that were applied to the prediction of tides, planetary orbits, and other such physical phenomena. The Liverpool machine was an analog machine—it functioned by replicating the tidal motions it was intended to calculate—and in principle a similar machine for the calculation of the zeros of the zeta function could likewise run on . . . either forever or until it came up with a zero off the critical line. Despite Titchmarsh’s written encouragement, however, Turing appears not to have had the chance to put his ideas to the test—at least while he was at Princeton.

  Officially, he was devoting himself to the thesis he was writing for Church. This was to be a treatise on a problem relating to Gödel’s incompleteness theorem. In the wake of Gödel’s result, much effort was being expended in logic circles on finding ways to minimize its impact, so that it should interfere as little as possible with the practice of mathematics. Such a project, of course, was very much within the purview of Alonzo Church, and Turing undertook it very much at Church’s suggestion. Church was at first not entirely convinced by the incompleteness theorem, and spent several years in pursuit of the mathematical equivalent of loopholes. What really galled him about Gödel, however, was that Gödel “regarded as thoroughly unsatisfactory” Church’s “proposal to use λ-definability as a definition of effective calculability.” By way of reply, Church suggested that if Gödel could “present any definition of calculability which seemed even partially satisfactory,” then he (Church) “would undertake to prove that it was included in the lambda-calculus.” Which, no doubt, he could have: the point Church was missing was that Gödel’s objections to the lambda calculus were philosophical, even aesthetic in nature, and crystallized as soon as he read “Computable Numbers.”

 
