A History of Pi


by Petr Beckmann


  Before we take a look at these lately arrived intelligent computers, we return to their older, moronic brother, who is not capable of anything but slavishly following comparatively simple commands of its programmer, albeit with enormous speed and the use of its vast memory. It was this kind of computer that was used to rattle off the decimal digits of π. The details of the story have been told by Wrench,93 and only a brief survey will be given here.

  The first computer calculation of π was apparently made in September 1949 on ENIAC (Electronic Numerical Integrator and Computer) at the Ballistic Research Labs; it calculated π to 2,037 places in 70 hours, a pitifully long time by today’s standards. Like many other computer evaluations, this one was programmed in accordance with Machin’s formula (14), here, in the form

  π/4 = 4 arctan(1/5) - arctan(1/239)

  In November 1954 and January 1955, NORC (Naval Ordnance Research Calculator) at Dahlgren, Virginia, was programmed to compute π to 3,089 significant places; the run took only 13 minutes.

  This record was broken at the Ferranti Computer Centre, London, in March 1957, when a Pegasus computer computed 10,021 decimal places in 33 hours. The program was based on an arctangent formula similar to, but not identical with, the one used by Strassnitzky (here). However, a subsequent check revealed that a machine error had occurred, so that “only” 7,480 decimal places were correct. The run was therefore repeated in March 1958, but the correction was not published.

  Then, in July 1958, an IBM 704 at the Paris Data Processing Center was programmed according to a combination of Machin’s formula and the Gregory series, corresponding to (15) here; it yielded 10,000 decimal places in 1 hour and 40 minutes.

  A year later, in July 1959, the same program was used on an IBM 704 at the Commissariat à l’Energie Atomique in Paris, and 16,167 places were obtained in 4.3 hours.

  Machin’s formula was also the basis of a program run on an IBM 7090 at the London Data Centre in July 1961, which resulted in 20,000 decimal places and required only 39 minutes running time.

  By this time the limit of the then available computer memories had almost been reached. Further substantial increases in the number of decimal places could have been obtained only by modifying the programs to use more machine time and therefore to run into unreasonable costs.

  But in July 1961, Shanks and Wrench94 increased the speed of the computation by a factor of about 20. (Daniel Shanks, incidentally, is not related to William Shanks, who calculated 707 places just 100 years ago, see here). In part, this was due to a faster computer (an IBM 7090 at the IBM Data Processing Center, New York), but they also used several tricks in programming it; in particular, they abandoned Machin’s formula in favor of the formula

  π/4 = 6 arctan(1/8) + 2 arctan(1/57) + arctan(1/239)   (2)

  which was found by Störmer in 1896. The run resulted in 100,265 decimal places, of which the first 100,000 were published94 by photographically reproducing the print-out with 5,000 decimals per page. The first 10,000 places of the print-out are reproduced on the end sheets of this book. The time required for computing the first term in (2) was 2 hours and 7 minutes, for the second term 3 hours and 7 minutes, and for the third term 2 hours and 20 minutes. To this must be added 42 minutes for converting the final result from binary to decimal digits, so that the total time required was 8 hours and 43 minutes.

  A computation of this kind involves billions of individual arithmetic operations, and if a single one of these is mistaken, the entire subsequent operation may yield an erroneous result. It is therefore necessary to check the result. For this, Shanks and Wrench used a special method which calculates π by a different formula (another arctangent formula, due to Gauss), but uses the partial results of the original run in such a way that the check takes less time than the original computation.
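  The check formula in question is presumably the arctangent identity usually attributed to Gauss, π = 48 arctan(1/18) + 32 arctan(1/57) - 20 arctan(1/239); it shares the arctan(1/57) and arctan(1/239) series with Störmer’s formula, which is why the partial results of the main run could be reused. A quick floating-point sanity check of both identities (a modern sketch, not the Shanks-Wrench program):

```python
import math

# Stormer's formula (the main run) and Gauss's formula (the check) share
# the arctan(1/57) and arctan(1/239) terms; only arctan(1/8) versus
# arctan(1/18) differs, so those two expensive series need be summed once.
stormer = 24 * math.atan(1 / 8) + 8 * math.atan(1 / 57) + 4 * math.atan(1 / 239)
gauss = 48 * math.atan(1 / 18) + 32 * math.atan(1 / 57) - 20 * math.atan(1 / 239)

print(abs(stormer - math.pi) < 1e-13)  # True
print(abs(gauss - math.pi) < 1e-13)    # True
```

  Since two of the three arctangents are common to both formulas, the check costs only the series for arctan(1/18) plus some recombination, which is why it ran faster than the original computation.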

  Subsequently, π was computed to 250,000 decimal places on an IBM 7030 at the Commissariat à l’Energie Atomique in Paris in February 1966, and a year later, in February 1967, a CDC 6600 was programmed by J. Guilloud and J. Filliatre, at the same institution, to yield 500,000 decimal places. The program was again based on Störmer’s formula (2) and the Shanks-Wrench method for checking the digits; the running time was 28 hours and 10 minutes (of which 1 hour and 35 minutes were used for conversion), and an additional 16 hours and 35 minutes were needed for the check. These quarter- and half-million digit values of π were published in reports of the Commissariat à l’Energie Atomique in Paris.

  This, as far as I know, is the present record. I may be mistaken, and even if I am not, this record will, no doubt, eventually be broken.

  The driving force behind these computations seems to be, at least in part, the same as the one that drove Ludolf van Ceulen to find the first 20 decimal places in 1596. Yet these hundreds of thousands of digits are not quite as useless as the results of the earlier digit hunters. There are two reasons for this. The first, admittedly, is not very convincing. It concerns the statistical distribution of the digits, which is expected to be uniform, that is, the frequency with which the digits (0 to 9) appear in the result will tend to the same limit (1/10) as the number of decimal places increases beyond all bounds. An analysis of the first 16,000 decimal digits bears this out within the usual statistical tests,93 but this does not constitute a rigorous proof for a finite number of digits, no matter how large; on the other hand, a rigorous theoretical proof (which has not yet been given) has no need of the actual arithmetical computation. And as for the generation of digits with equal probabilities, this can be done in much simpler ways.
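  A frequency count of this kind is easy to reproduce today. The sketch below (modern Python; the 50 decimals are hard-coded, and so short a sample proves nothing statistically) tallies how often each digit occurs and prints its share alongside the expected 1/10:

```python
from collections import Counter

# First 50 decimals of pi (hard-coded); count each digit's frequency
# and compare its share with the uniform expectation of 1/10.
decimals = "14159265358979323846264338327950288419716939937510"
counts = Counter(decimals)
for digit in "0123456789":
    share = counts[digit] / len(decimals)
    print(digit, counts[digit], f"{share:.2f}")
```

  With hundreds of thousands of digits in place of fifty, this is essentially the analysis referred to above.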

  The other reason for such computations is more convincing. Before it goes into operation, a computer, like any other machine, is tested to see whether it can do its job reliably. One such method is to let it churn out a few tens of thousands of decimal digits of π and to check the result against the known figures; if they agree, the computer has performed millions of arithmetical operations faultlessly. (There are, of course, other functions that must also be tested.)
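  A toy version of such a self-test can be sketched in modern Python: compute π by Machin’s formula in exact integer arithmetic and compare the output against the known digits. The function names and the ten guard digits below are my own choices, not anything taken from the programs described above:

```python
def arctan_inv(x: int, scale_digits: int) -> int:
    """arctan(1/x) scaled by 10**scale_digits, summed by the Gregory series."""
    scale = 10 ** scale_digits
    term = scale // x            # 1/x, scaled
    total = term
    n = 1
    while term:
        term //= x * x           # next odd power of 1/x
        n += 2
        if n % 4 == 1:
            total += term // n   # alternating signs of the series
        else:
            total -= term // n
    return total

def machin_pi(digits: int) -> str:
    """pi to `digits` decimals via pi/4 = 4*arctan(1/5) - arctan(1/239)."""
    guard = 10                   # guard digits absorb truncation error
    scale = digits + guard
    pi_scaled = 4 * (4 * arctan_inv(5, scale) - arctan_inv(239, scale))
    s = str(pi_scaled)[: digits + 1]   # "3" followed by the decimals
    return s[0] + "." + s[1:]

print(machin_pi(30))  # 3.141592653589793238462643383279
```

  Checking the printed string against a published table of digits is exactly the kind of consistency test described above, on a miniature scale.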

  * * *

  ALL of the computations above were performed by computers with not an ounce of intelligence. The frustrations resulting from the computer’s inability to insert a simple dot have been remarked on before (here). However, it should be added that it takes only a relatively primitive program to make the computer supply the missing dot and to print, for example, the following comment:

  LOOK BUDDY, I PUT IN A DOT FOR YOU IN LINE 123, BUT NEXT TIME DO IT YOURSELF, OK?

  But this does not, of course, constitute intelligence. Every step to produce this result must be covered by the instructions that make a computer execute a program, and the above sentence must be, so to speak, put into the computer’s mouth by the programmer. That is not the way one gives instructions to an intelligent being. If you ask your wife (or husband) to bring you a glass of water, you don’t instruct her (or him) exactly what muscle to move at any given time. She will, without specific instructions, turn on the cold, not the hot, water, and she will use her own judgement in unexpected situations — if for some reason no glasses are available, she will overrule your instructions and bring you a cup, even though you asked for a glass. That does not take much intelligence, but it is a lot more than most contemporary computers have.

  Will computers ever become intelligent?

  They already have. Not the morons that bill your charge account or that compute the decimal places of π, but the amazing programs (it is the programs rather than the actual computer hardware) that have been growing in the last few years at Stanford, M.I.T., Johns Hopkins, and other laboratories.

  Intelligence, says my dictionary,96 is “the ability to adapt to new situations, and to learn from experience; the inherent ability to seize the essential factors of a complex matter.”

  Believe it or not, there is nothing in that definition that a machine cannot be programmed to do. Programs have been written that learn from experience, adapt to new conditions, grasp the essentials of a complicated problem, and decide for themselves how to solve it; and all that (as yet in a few very restricted areas) with an intelligence that approaches that of the best humans in the field, and far surpasses the intelligence of most others. The stress here is on how well they can do this, not on how many varied problems they can manage, for the memory of a computer and the access to it cannot (yet) compete with the human brain. But the qualitative principle is there.

  Take, for example, the program that plays checkers, as developed over the years since 1947 by Arthur Samuel.97 This program will learn from experience (Samuel improved it by making several computers play checkers furiously against each other for prolonged periods). It can also learn from other players’ experience, and it will “study” other people’s games and moves recommended by champions. In a given position, it will not slavishly go through all the possible moves and their consequences (there are too many), but it will use certain criteria to evaluate its own position and to determine the best strategy, and it will then make its own decision as to the next move. The results: Although the program could not beat the world checkers champion, it did beat the champion of Connecticut, it would probably beat you, it would certainly beat me, and — an extremely significant fact — it beats its own programmer.

  There are many other examples. There are programs to play other games (including chess) intelligently, programs that will prove theorems (one such program proved, in its initial version, 38 out of the first 52 theorems in Whitehead and Russell’s Principia Mathematica), programs that verify mathematical proofs and expose fallacies, programs that solve general problems to attain given goals, and many others, including one with great potentialities: a program to write programs.97 A computer that is particularly dramatic, though perhaps less sophisticated than others in this class, is The Beast, a battery-operated cylinder on wheels built by scientists at the Applied Physics Lab of Johns Hopkins University. It has its own computer logic and steering, and it is furnished with tactile, optical and sonar sensors. The Beast was often let loose to roam the halls and offices of the Applied Physics Lab, which it would do without bumping into walls or falling downstairs (it would turn round on sensing a step), and when its batteries were low, The Beast would optically find an outlet in some office, plug itself in, and depart again when it had “eaten,” no doubt often leaving behind a new secretary frozen in horrified incredulity.97

  But let us return to the checkers program that can beat its own programmer. A long time ago, even when he constructed his first bow and arrow, man used his intelligence to design machines that surpassed him in speed, force, and many other qualities. Arthur Samuel’s program might be taken as an historic landmark: Somewhere near that point, man first used his intelligence to design a machine that surpassed him in intelligence. We are now only at the birth of such a machine, but eventually the intelligent computer might be to the moronic computer as the spacecraft is to the bow and arrow. There are already programs to write programs, and programs to balance assembly lines. It is therefore entirely within the realm of possibility that such a machine will eventually have the ability to reproduce itself.

  “Destroy it!” is what the pious, respectable and community-minded ladies will scream when word gets out about the new computer.

  Their screams have been heard before.

  “Destroy it!” is what Julius Caesar screamed as his hordes put the torch to the Library of Alexandria.

  “Destroy it!” is what the Grand Inquisitor screamed when he read Galileo’s Dialogues.

  “Destroy it!” is what the Luddites screamed in early 19th-century England when they smashed the machinery that was supposedly responsible for their misery in the Industrial Revolution.

  “Destroy it!” is what the Soviet censor screams when he sees a copy of Orwell’s 1984.

  “Destroy it!” is what the Fascists of the Left screamed when they bombed or smashed computing centers in Minnesota or Montreal.

  It has again become fashionable to blame science and technology for the ills of society. I have some sympathies for the Luddites who were uneducated, miserable, and desperate. I have none for the college-educated illiterates who drivel about “too much science and technology” because they want to conserve their life style by denying it to everybody else.

  * * *

  THREE centuries ago, Gottfried Wilhelm Leibniz, co-inventor of the calculus and co-discoverer of the first infinite series for π, dreamt of the day when courts would be abolished, because disputes would be settled mathematically by solving impartial equations that would show who was right and who was wrong. The intelligent computer that is now being born makes that dream somewhat less fantastic. Perhaps the nth generation of intelligent computers will make a better job of keeping peace among men and nations than men have ever been able to.

  And with that thought our story of π is coming to an end. It is a story as varied as the brilliance of Archimedes of Syracuse and the ignorance of Heisel of Cleveland.

  The history of π is only a small part of the history of mathematics, which itself is but a mirror of the history of man. That history is full of patterns and tendencies whose frequency and similarity are too striking to be dismissed as accidental. Like the laws of quantum mechanics, and in the final analysis, of all nature, the laws of history are evidently statistical in character.

  But what those laws are, nobody knows. Only a few scraps are evident. And one of these is that the Heisels of Cleveland are more numerous than the Archimedēs of Syracuse.

  NOTES

    1. Dantzig.

    2. Kolman.

    3. Sagrada Biblia, Editorial Catolica, Madrid, 1955.

    4. From Sato Moshun’s Tengen Shinan (1698), see Smith and Mikami, p.131.

    5. Neugebauer.

    6. Neugebauer, pp.58-61.

    7. Midonick.

    8. Midonick.

    9. Rudio, p.18.

  10. Rudio, p.19.

  11. See bibliography.

  12. Needham, p.29. Reproduced by kind permission of Cambridge University Press.

  13. Needham, p. 43. Reproduced by kind permission of Cambridge University Press.

  14. Hogben (1937).

  15. Brandon.

  16. Collier.

  17. Butkevich et al. The quotation has been retranslated from Russian and may not be quite accurate.

  18. De Camp, who quotes Bishop Landa.

  19. Coolidge (1949), pp. 46-47.

  20. Oxford, 1931. Reprinted by E.P. Dutton & Co., New York, 1967.

  21. According to some, the Museum and Library were founded by Ptolemy I, but acquired their definitive form under Ptolemy II. Still others say that the Museum was founded by Arsinoe.

  22. De Camp.

  23. It could be proved by a reductio ad absurdum, but so could many other things that do not follow from Euclid’s axioms, e.g., the statement that “cause precedes effect.”

  24. De Camp.

  25. Loeb Classical Library, 10 vols., 1938-63.

  26. See Cantor, vol. I, Chapters 25-27, for a very poor record compared with that of their contemporaries.

  27. Rudio.

  28. Barnes, vol. 1, p. 255.

  29. Geddie.

  30. Saint Joan.

  31. Lodge, p. 61.

  32. From On Floating Bodies, translated by Sir Thomas Heath, see bibliography under Heath (1897 and 1912). Reproduced by kind permission of Cambridge University Press.

  33. From Aristotle’s Physics, translated by H.G. Apostle, © 1969 by Indiana University Press, reproduced by permission of the publishers.

  34. Heath (1897 and 1912).

  35. Tropfke, p. 210.

  36. A hand was about 4 inches, a cubit about 21.8 inches (J.P. Boyd, Bible Dictionary, Ottenheimer Publishers, Owings Mills, Md., 1958).

  37. Hogben (1937).

  38. It is even doubtful whether Pythagoras knew Pythagoras’ Theorem, and if so, whether he could prove it. See Tropfke, pp. 137-138.

  39. Depman. In fairness I must add that I have not found a reference to this event anywhere else, and that Soviet books are unreliable where competitive religions are concerned.

  40. Hogben (1937).

  41. De Camp.

  42. Lodge.

  43. Neugebauer.

  44. Hogben (1937) writes that there was also a stream of Jewish physicians who brought the science of Algebra to Europe, pointing to the term “Physician and Algebraist” used in Spain at the time, and I used this interpretation in previous editions of this book. Not so, writes Dr. R.B. Lees, professor of linguistics at Tel-Aviv University, to whom I am greatly indebted for this and other comment. Arabic el jabr meant “the bonesetting” and only later came to be used in the sense of reuniting the parts of an equation.

  45. Hogben (1937); but Boyer (1968) says it is not clear where Adelard came into contact with Muslim learning.

  46. Boyer (1968).

  47. Tropfke.

  48. Boyer (1968).

  49. Cantor, vol. 2, pp. 199-201.

  50. Incorrectly attributed to Galileo (see Lodge, p. 131); may not be true of Bruno, either.

  51. De Camp.

  52. Ore.

  53. Zeitschrift f. Mathematik und Physik, vol. 36, Historisch-literarische Abteilung, pp. 139-140 (1891).

  54. Ore.

  55. See Chapter 18.

  56. Schubert. The distance to Sirius has been brought up to date.

  57. Tropfke, p. 217.

  58. Newman, vol. 1, p. 466.

  59. Ball.

  60. Courtesy of Miss Angela Dunn, Director, Problematical Recreations, Litton Industries, Beverly Hills, California.

  61. See bibliography.

  62. Needham, Fig. 80, p. 135. Reproduced by courtesy of Cambridge University Press.

  63. Freyman.

  64. Boyer (1968), p. 401.

  65. Smith and Mikami, p. 87.

  66. Smith and Mikami, p. 130.

  67. This follows Hogben’s suggestion, see Hogben (1937), pp. 258-262.

  68. Struik (1969), pp. 244ff.

  69. See, for example, C.D. Olds, Continued Fractions, Random House, N.Y., 1963.

  70. Tropfke, p. 224.

 
