It Began with Babbage


by Subrata Dasgupta


  The legacy of Algol 60 is far-flung.

  XXII

  As we have noted, the creators of the first programming languages were, almost without exception, trained formally in mathematics. Mathematical notation served as a source of inspiration and ideas for them, especially for those who aspired to create a “universal” language of computation (see Section XIV, this chapter). But the designers of languages such as Algol 60 were also mindful that their artifacts were not so much ends in themselves as means to other ends: the description, expression, and communication of computations between humans and from humans to machines. Toward this end, the austerity of strictly mathematical notation had to be tempered: programming languages had to be given, on the one hand, a more “human face” and, on the other, a sense of “implementability.”

  But there were some to whom the very austerity, economy, and aesthetics of mathematical notation beckoned, and to none more than the Canadian Kenneth Iverson (1920–2004). A graduate in mathematics and physics from Queen’s University, Kingston, Ontario, Iverson did his graduate work in applied mathematics at Harvard, obtaining his PhD in 1954 with a dissertation on the automatic solution of linear differential equations. He was a student of Howard Aiken, and years later he would acknowledge Aiken’s influence on his thinking. One of the precepts he learned from Aiken was the desire for simplicity in scientific thinking.126

  After receiving his doctorate, and as a freshly appointed assistant professor in mathematics at Harvard, Iverson was inducted into a new graduate program on “automatic data processing” initiated by Aiken.127 As part of his new teaching duties, he embarked on the invention of a new notation for teaching and writing about computation—a notation, in other words, for strictly human consumption; a notation for pedagogic use,128 but as much a tool of thought.129

  We revert to the question: When does notation become a language? The dividing line is fuzzy. Swiss-American mathematician and historian of mathematics Florian Cajori (1859–1930) titled his most famous book A History of Mathematical Notations (1928), not A History of Mathematical Language. On the other hand, we commonly speak of mathematics as “the language of science”—meaning, not just notation, but that the calculus of mathematics serves as a means of scientific discourse.

  At any rate, in 1962, Iverson published his first book, devoted to the notation he had invented and, with breathtaking aplomb, titled A Programming Language.130 Although the language as yet had no name, this very title would, in a few years, inspire the acronym APL. Much as John Backus’s name is inextricably entwined with FORTRAN, so is Iverson’s with APL.

  XXIII

  If brevity is the soul of wit, as Polonius claimed in Hamlet, then both writer and reader of APL programs must be enormously quick-witted for, in designing APL, Iverson took his mentor Aiken’s ideal of simplicity to heart as meaning extreme brevity. His goal seemed to be: How little do I need to write to say something meaningful? The outcome was a notation that was uncompromisingly terse—hence the cult of “one-liners” expressing meaningful computational actions, much beloved of APL devotees in later years.131

  This brevity of expression resulted from a number of factors, the most notable being an implicit parallelism in the meanings (semantics) of APL statements. We illustrate this with two small examples.

  First, suppose there are two programs (or routines) named A and B. And suppose that inside B there is an APL statement

  → A, n

  Then, the execution of this statement results in the immediate execution of the statement numbered n in A, with execution continuing through the statements of A that follow n, while B also continues on to its next statement. In other words, the statement effects both a branch to another program and the concurrent execution of that program alongside the continuing execution of the originating program.
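  The semantics just described can be mimicked in a modern setting. Here is a hedged sketch in Python using operating-system threads; the routine names, statement numbers, and the trace list are all invented for illustration, and APL itself was of course not implemented this way:

```python
import threading

trace = []  # shared record of which "statements" were executed

def routine_a(start_line):
    # Hypothetical routine A: execute its statements from number `start_line` on.
    for line in range(start_line, start_line + 3):
        trace.append(("A", line))

def routine_b():
    # Analogue of the APL statement "-> A, n" described in the text:
    # branch into A at statement n, run A concurrently, and let B
    # carry on with its own next statement.
    n = 5
    t = threading.Thread(target=routine_a, args=(n,))
    t.start()
    trace.append(("B", "next statement"))
    t.join()  # join only so this sketch terminates deterministically

routine_b()
```

  The essential point is that B does not wait for A: once the branch is taken, both routines proceed, and the relative order of their entries in the trace is not determined in advance.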

  A second source of implicit parallelism in APL statements lay in the fact that APL is an array-based language. In addition to ordinary numbers (scalars), its fundamental data types are vectors and matrices—two standard mathematical entities. APL’s brevity of expression is exemplified by notations that express operations on entire vectors or matrices. To take an example, suppose the elements of a vector x = [x1, x2,…, xn] are to be added to form a single scalar z = x1 + x2 + … + xn. In an Algol-like language, this would be expressed as something like

  z := 0;
  i := 1;
  while i ≤ n do
  begin
    z := z + x[i];
    i := i + 1
  end

  That is, after “initializing” the variables z and i, the content of x[i] is added to z iteratively, with i incremented by one each time. The iteration (while loop) ends when the value of i exceeds n.

  In APL, this task would be expressed by the statement

  z ← +/x
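  In a modern language, the effect of +/ — folding the addition operator across the entire vector — can be sketched as follows. This is a Python illustration of the idea, not APL’s own mechanism:

```python
from functools import reduce
import operator

x = [3, 1, 4, 1, 5]

# APL's "z <- +/x" folds + across the whole vector in one expression;
# Python's reduce expresses the same fold, with 0 as the initial value.
z = reduce(operator.add, x, 0)
print(z)  # 14
```

  The contrast with the Algol-style loop above is the point: the explicit initialization, indexing, and iteration all disappear into a single reduction operator applied to the array as a whole.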

  XXIV

  As we have noted, Iverson’s original intention in inventing APL (even before it came to be known by this acronym) was to create a notation for teaching computing. It was meant to be a pedagogic tool.132 Indeed, when he and Frederick P. Brooks (1931–) wrote their textbook Automatic Data Processing (1965),133 some of the algorithms presented therein were in APL. But well before then, in 1960, Iverson left Harvard for IBM’s Thomas J. Watson Research Center, the company’s headquarters for fundamental research. More interesting and creative still, in 1964, Iverson’s colleague and (as it turned out) long-time collaborator Adin Falkoff (1921–2010) set out to use APL as a machine description language. One of IBM’s major computer systems of the mid-1960s, the System/360, was described formally and precisely in APL (although the language in this exposition was still presented as a notation without a name).134

  This work was especially seminal, for it was undoubtedly the first formal description of a physical computer (more precisely, its architecture, a liminal artifact) in a quasi-programming notation. When coupled with the description of algorithms in Automatic Data Processing, we find a blurring of the distinction between the computer as a material artifact, algorithms as abstract artifacts, and computer architectures as liminal artifacts. They can all be spoken about using a common language.

  The description of the System/360 in APL also marks the emergence of a new language genus that came to be called computer hardware description language (or computer design language).135

  By the time the description of the System/360 was published, the notation was sufficiently mature that thoughts turned to its implementation—the transformation of a mathematical notation for thinking about computation into a programming language.136 In two fundamental ways, however, the language’s implementation deviated markedly from the implementation of such languages as FORTRAN and Algol 60. First, the symbols constituting the APL notation demanded the development of a distinct character set for printers to print APL programs. Indeed, specialized APL input/output devices (called APL terminals) were developed.137 Second, the execution strategy for APL programs eschewed the compile-and-execute strategy developed for FORTRAN and other contemporary programming languages (that is, first translate the whole program into a “native” machine code and then execute the translated program). Instead, APL statements were interpreted (by another program, the “interpreter”) and executed one statement at a time. APL was implemented as an interpretive language.138 A new language, implemented for the IBM System/360 and named APL360, was thus born.139
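  The interpretive strategy — parse and execute one statement at a time, rather than translating the whole program first — can be sketched as a toy loop. This is a minimal Python illustration with invented names, not a model of the actual APL360 interpreter:

```python
def interpret(program, env=None):
    # Toy statement-at-a-time interpreter: each statement is parsed and
    # executed immediately, in order, against a growing environment --
    # no whole-program translation step precedes execution.
    env = {} if env is None else env
    for stmt in program:
        var, expr = stmt.split("<-")            # "<-" stands in for APL's assignment arrow
        env[var.strip()] = eval(expr, {}, env)  # execute this one statement now
    return env

env = interpret(["x <- 2 + 3", "y <- x * 10"])
print(env["y"])  # 50
```

  Because each statement is executed as soon as it is read, later statements can see the effects of earlier ones at once — the property that made interpretation a natural fit for APL’s interactive terminals.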

