The most developed meta-language for describing the operational semantics of a programming language was the Vienna Definition Language invented by a group at the IBM Laboratory in Vienna, inspired by McCarthy’s work.36 A formal definition of the semantics of the IBM programming language PL/I was completed in 1969.37
Operational semantics for proving automatically the correctness of microprograms (see Chapter 13) would be developed during the mid-1970s.38 But, in the realm of structured programming and its concern with humans proving the correctness of a program as it is developed, operational semantics did not excite much enthusiasm.39 The seminal work on a semantics appropriate for program verification in the structured programming context was an axiomatic method proposed in 1967 by Robert Floyd (1936–2001), who at the time was with the Carnegie Institute of Technology (later, Carnegie-Mellon University), and developed further in 1969, under Floyd’s influence, by Anthony Hoare (1934–), who at the time was at Queen’s University, Belfast.40 This approach came to be known as “Floyd–Hoare logic” or (more commonly) Hoare logic (rather as the Darwin–Wallace theory of evolution by natural selection in biology was reduced to, simply, “Darwinian evolution”).
Although Floyd used a flowchart notation, Hoare defined the axiomatic semantics of an Algol-like language. As in mathematics and logic, Hoare defined a set of axioms describing the properties of the basic syntactic entities in his language, such as the meaning of arithmetic operators and the meaning of the basic statements in the language.
To give an example of axiomatic semantics, we consider the most fundamental execution statement in Algol: the “simple” assignment statement having the general form
x := E
where x is the identifier for a simple variable and E is an expression (such as an arithmetic expression). An English-language operational semantics for the assignment would be as follows: The expression E is evaluated; the resulting value is stored in the variable x. The axiomatic semantics of the assignment is as follows: If the assertion P(x), where P is a truth-valued function (a “predicate”) with x as an argument, is to be true after the assignment is executed, then P(E) must be true immediately before the assignment’s execution. This leads to the “axiom of assignment,” expressed by the formula
{P0} x := E {P}
Here, x is a variable, E is an expression, and P0 is obtained from P by substituting E for all occurrences of x in P.
So if the assignment
x := x + 1
is to establish the assertion P: x ≥ 1 after its execution, then the assertion P0: x + 1 ≥ 1 must hold before the assignment. Notationally, this situation is represented by the Hoare formula:
{x + 1 ≥ 1} x := x + 1 {x ≥ 1}
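To see the substitution performed mechanically, here is a minimal illustrative sketch (not part of the original discussion), written in Python and assuming the sympy symbolic-algebra library: given the postcondition P and the assignment x := x + 1, substituting the expression x + 1 for x in P yields the precondition P0 demanded by the axiom of assignment.

import sympy

x = sympy.Symbol('x')                          # the program variable, treated symbolically
postcondition = x >= 1                         # P:  x >= 1
precondition = postcondition.subs(x, x + 1)    # P0: substitute E (here, x + 1) for x in P
print(precondition)                            # prints: x + 1 >= 1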
The meanings of the basic statement types in a programming language (the assignment, the if then, the for, and so forth, in the case of Algol 60) were given by Hoare formulas of the general form
{P} S {Q}
Here, P and Q are assertions and S is some legal program statement. The Hoare formula is to be read as follows: If the assertion P is true before the execution of S, then the assertion Q is true after S’s execution. P and Q were named, respectively, precondition and postcondition.
In the manner of the axiomatic approach in mathematics and logic, Hoare logic, then, consists of axioms that define the meaning of the primitive concepts in the programming language, along with rules of inference that define the semantics of “composite statements.” For example, the rule of sequential composition says that if for a statement S1 the Hoare formula {P1} S1 {P2} is true, and if for another statement S2 the Hoare formula {P2} S2 {P3} is true, then we may infer that for the sequential composition (or sequential statement) S1;S2, the formula {P1} S1;S2 {P3} will be true.
The beauty of Hoare logic lay in the fact that the assertions and inferences could be made on the program text without having to simulate the execution of the statements in the program. And we see how elegantly Dijkstra’s structured programming principle can be married with Hoare logic.
As a very trivial example, consider the following sequence of assignment statements:
x := x + 1;
y := x + z
Suppose the variable x has some value symbolized as α and the variable z has the symbolic value β immediately before the execution of this sequence. We want to prove that after the sequence, the assertion y = α + β + 1 is true. We can prove this as follows:
By axiom of assignment, the Hoare formula
{x = α, z = β} x := x + 1 {x = α + 1, z = β}
is true. By the same axiom, the Hoare formula
{x = α + 1, z = β} y := x + z {y = α + β + 1, x = α + 1, z = β}
is also true. By applying the rule of sequential composition, the Hoare formula
{x = α, z = β} x := x + 1; y := x + z {y = α + β + 1, x = α + 1, z = β}
is true. Our objective, to show that the postcondition y = α + β + 1 holds after the sequential statement, has thus been proved formally.
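As a complementary illustration, here is a minimal Python sketch (not part of the original discussion; the function name check is purely hypothetical) that tests the same pre- and postconditions at run time for particular concrete values of α and β. Unlike the Hoare-logic proof above, which establishes the postcondition for all values directly from the program text, such runtime assertions can only test individual cases.

def check(alpha, beta):
    x, z = alpha, beta                 # precondition: x = alpha, z = beta
    x = x + 1                          # x := x + 1
    y = x + z                          # y := x + z
    assert x == alpha + 1 and z == beta
    assert y == alpha + beta + 1       # postcondition: y = alpha + beta + 1

check(3, 5)
check(-2, 7)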
As remarked earlier, structured programming became a fertile area of research during the 1970s. Important texts and papers would follow,41 but the foundations were laid by Dijkstra, Floyd, and Hoare before the end of the 1960s.
NOTES
1. E. W. Dijkstra. (1965a). Programming considered as a human activity. In Proceedings of the 1965 IFIP Congress (pp. 213–217). Amsterdam: North-Holland. Reprinted in E. N. Yourdon. (Ed.). (1979). Classics in software engineering (pp. 3–9). New York: Yourdon Press (see especially p. 5). All citations refer to the reprint.
2. Yourdon, op cit., p. 1.
3. K. R. Apt. (2002). Edsger Wybe Dijkstra (1930–2002): A portrait of a genius (obituary). Formal Aspects of Computing, 14, 92–98.
4. Quoted by Dijkstra, op cit., p. 3.
5. Ibid., p. 4.
6. G. H. Hardy. (1940). A mathematician’s apology (p. 4). Cambridge, UK: Cambridge University Press.
7. Dijkstra, 1965a, op cit., p. 5.
8. Ibid., p. 6.
9. E. W. Dijkstra. 1968a. Goto statements considered harmful. Communications of the ACM, 11, 147–148 (letter to the editor).
10. Dijkstra, 1965a, op cit.
11. E. W. Dijkstra. (1968b). The structure of the “THE” multiprogramming system. Communications of the ACM, 11, 341–346. Reprinted in E. Yourdon. (Ed.). (1982). Writings of the revolution: Selected readings on software engineering (pp. 89–98). New York: Yourdon Press (see especially p. 91). All citations refer to the reprint.
12. Dijkstra, 1965a, op cit., p. 5.
13. Ibid.
14. E. W. Dijkstra. (1969). Structured programming. In J. N. Buxton, P. Naur, & B. Randell (Eds.), (1976). Software engineering: Concepts and techniques. New York: Litton. Reprinted in Yourdon (pp. 43–48), 1979, op cit. All citations refer to the reprint.
15. Dijkstra, 1965a, op cit., p. 6.
16. Ibid.
17. Ibid.
18. Dijkstra, 1968b, op cit.
19. Dijkstra, 1968b, op cit., p. 91.
20. Ibid.
21. Ibid., p. 92.
22. Ibid.
23. Ibid., pp. 95–98.
24. Ibid., p. 92. The assumption is, though, that there will be an underlying processor available to execute a process. Thus, sequential processes are liminal rather than purely abstract artifacts.
25. E. W. Dijkstra. (1965b). Cooperating sequential processes. Technical report, Mathematics Department, Technische Universiteit Eindhoven, Eindhoven.
26. Dijkstra, 1968b, op cit., p. 92.
27. Ibid.
28. Ibid., p. 97.
29. Ibid., p. 93.
30. E. W. Dijkstra. (1971). Hierarchical ordering of sequential processes. Acta Informatica, 1, 115–138.
31. Dijkstra, 1969, op cit., p. 44.
32. In particular, K. R. Popper. (1965). Conjectures and refutations. New York: Harper & Row; K. R. Popper. (1968). The logic of scientific discovery. New York: Harper & Row.
33. A. M. Turing. (1949). Checking a large routine. In Anon. Report on the conference on high-speed automatic calculating machines (pp. 67–68). Cambridge, UK: University Mathematical Laboratory.
34. F. L. Morris & C. B. Jones. (1984). An early program proof by Alan Turing. Annals of the History of Computing, 6, 139–147.
35. J. McCarthy. (1963). Towards a mathematical science of computation. In Proceedings of the IFIP Congress 63 (pp. 21–28). Amsterdam: North-Holland; P. J. Landin. (1964). The mechanical evaluation of expressions. Computer Journal, 6, 308–320.
36. P. Wegner. (1972). The Vienna Definition Language. ACM Computing Surveys, 4, 5–63.
37. P. Lucas & K. Walk. (1969). On the formal description of PL/I. In Annual Review in Automatic Programming (pp. 105–182). Oxford: Pergamon Press.
38. A. Birman. (1974). On proving correctness of microprograms. IBM Journal of Research & Development, 9, 250–266.
39. See, for example, J. de Bakker. (1980). Mathematical theory of program correctness (p. 4). London: Prentice-Hall International.
40. R. W. Floyd. (1967). Assigning meaning to programs. In Mathematical aspects of computer science (Vol. XIX, pp. 19–32). Providence, RI: American Mathematical Society; C. A. R. Hoare. (1969). An axiomatic basis for computer programming. Communications of the ACM, 12, 576–580, 583.
41. See, for example, O.-J. Dahl, E. W. Dijkstra, & C. A. R. Hoare. (1972). Structured programming. New York: Academic Press; D. G. Gries. (Ed.). (1978). Programming methodology. New York: Springer-Verlag; Yourdon, 1979, op cit.; Yourdon, 1982, op cit.
Epilogue
I
HISTORY NEVER ENDS, despite what American political scientist Francis Fukuyama (1952–) claimed.1 Neither, consequently, does the writing of history. But, the writing of a history, like any other story, must have an ending.
The choice of where to stop a historical narrative is a matter of the narrator’s judgment and, to some extent, arbitrary. English historian Edward Hallett Carr (1892–1982) famously pointed out that the historian is necessarily selective in what he or she chooses to include in the narrative.2 That selectivity applies as much to where the narrative ends.
The story I have told here suffers from this subjectivity—both in where it begins and where it ends. This history begins in 1819, when Charles Babbage started to think about a fully automatic computing engine, and it ends in 1969—150 years later. The beginning and the ending were matters of my choice (as was everything in between). Other writers may differ in their choice of historical span as well as the contents of the history itself; but there is, I believe, a sense in where this story began and where it ended.
II
There is, I believe, a sense in beginning with Babbage. The year 1819 saw the origin of a train of thinking that led Babbage to the design of a fully automatic, programmable computing machine. English philosopher Alfred North Whitehead (1861–1947) once wrote that European philosophy consisted of a series of footnotes to Plato. We cannot claim that the origin and evolution of computer science consisted of a series of footnotes to Babbage. As we have seen, most of the early creators were ignorant of Babbage. But, we can certainly claim that Babbage invented a machine architecture and a principle of programming that anticipated remarkably what would come a century later. There is a perceptible modernity to Babbage, and we recognize it because we see it in much that followed. Babbage’s ghost, as it were, haunts the intellectual space of what became computer science.
III
As for ending this history in 1969, here, too, there is some rationale. It was during the 1960s that computer science assumed a distinct identity of its own. By the act of naming itself, it broke free of the umbilical cords that had tied it to mathematics and electrical engineering. Universities and other academic and research institutions founded departments with names that varied slightly (computer science, computing science, computation, computing, computer and information science in the English-speaking world; Informatik, informatique, and datalogy in Europe), but with references that were unmistakable. The business of these academic units was automatic computing: the concept itself, its nature, the mechanism to achieve it, and all phenomena surrounding the concept.
The stored-program computing paradigm emerged between 1945 and 1949. The former year marked the appearance of the seminal EDVAC report, authored by John von Neumann, in which he laid out the logical principles of stored-program computing; the latter year was when these principles were tested empirically and corroborated by way of the Cambridge EDSAC and the Manchester Mark I. In Thomas Kuhn’s terms, all that preceded this period was “preparadigmatic.”3 However, this is neither to dismiss nor denigrate the work preceding 1945, because much of what happened before the EDVAC report led to the stored-program computing principles. And, although the seemingly far-removed abstractions Alan Turing created in 1936 may or may not (we do not know for sure) have shaped von Neumann’s composition of the EDVAC report, the significance of the Turing machine would emerge after the founding of the stored-program computing paradigm. The Turing machine formalism offered, in significant ways, the mathematical foundations of the paradigm. In fact, some would claim that it was Turing’s Entscheidungsproblem paper rather than the EDVAC report that really constituted the stored-program computing paradigm.
Now, if we were “pure” Kuhnians, we would believe that much of the excitement was over by 1949, that what followed was what Kuhn called “normal science”—essentially, puzzle solving. In fact, it was nothing like that at all. What actually happened from 1945 through 1949 was the creation of a core of a new paradigm. From a cognitive point of view, it marked the germination, in some minds, of a core schema.
It is tempting to draw a parallel with axiomatic mathematics. The basic definitions and axioms form the starting point for some branch of mathematics (such as Euclid’s postulates in plane geometry or Peano’s axioms in arithmetic), but the implications of these axioms and definitions are far from obvious; thus the mathematician’s goal is to explore and discover their implications and produce, progressively, a rich structure of knowledge (theorems, identities, and so forth) beginning with the axioms.
So also, the stored-program computing principles became the starting point. The implications of these principles circa 1949 were far from obvious. The 1950s and 1960s were the decades during which these implications were worked out to an impressive depth; the elemental schema lodged in people’s minds about the nature of automatic computation was expanded and enriched. The paradigm did not shift; it was not overthrown or replaced by something else. It was not a case of a “computing revolution” as a whole; rather, new subparadigms, linked to the core, were created. If there were revolutions, they were local rather than global. The outcome, however, was that the paradigm assumed a fullness, a richness. And, as we saw, the 1960s, especially, witnessed what I have described as an “explosion of subparadigms.”
IV
In the meantime, the subject that had motivated the creation of automatic computing in the first place—numeric mathematics (or numeric analysis)—grew in sophistication. But, in a certain way, it stood apart from the other subparadigms. Numeric mathematics concerned itself with “the theory and practice of the efficient calculations of approximate solutions of continuous mathematical problems”4—and insofar as it dealt with approximating continuous processes (polynomial functions, differential equations, and so on), it formed the link between the “new” computer science and the venerable world of continuous mathematics.
With the emergence of all the other new subparadigms, numeric mathematics was “decentered,” so to speak. Much as, thanks to Copernicus, the earth became “just” another planet orbiting the sun, so did numeric mathematics become “just” another subparadigm. Unlike the others, it had a long pedigree; but, as a subparadigm linked to the stored-program computing core, numeric mathematics was also enormously enriched during the 1950s and 1960s. As distinguished numeric analyst Joseph Traub wrote in 1972, in virtually every area of numeric mathematics, the current best algorithms had been invented after the advent of the electronic digital computer. The sheer exuberance and promise of this artifact breathed new life into a venerable discipline.5
V
If we take Kuhn’s idea of paradigms seriously, we must also recognize that there is more to a paradigm than its intellectual and cognitive aspects. The making of a paradigm entails social and communicative features.
Thus, another marker of the independent identity of the new science was the launching, during the 1950s and 1960s, of the first periodicals dedicated solely to computing—yet another severance of umbilical cords. In America, the ACM, founded in 1947, inaugurated its first journal, the Journal of the ACM, in 1954; and then, in 1958, what became its flagship publication, the Communications of the ACM; and, in 1969, the first issues of Computing Surveys.
Also in America, the Institute of Radio Engineers (IRE) brought forth, in 1952, the IRE Transactions on Electronic Computers. After the IRE merged with the American Institute of Electrical Engineers in 1963 to form the Institute of Electrical and Electronics Engineers (IEEE), a suborganization called the Computer Group was established in 1963/1964; it was the forerunner of the IEEE Computer Society, formed in 1971. The Computer Group News was first published in 1966.
In Britain, the British Computer Society, founded in 1957, published the first issue of the Computer Journal in 1958. In Germany, Numerische Mathematik was started in 1959. In Sweden, a journal called BIT, dedicated to all branches of computer science, came into being in 1961.
Commercial publishers joined the movement. In 1957, Thompson Publications in Chicago, Illinois, began publishing Datamation, a magazine (rather than a journal) devoted to computing. Academic Press launched, in 1957/1958, a highly influential journal named Information & Control, dedicated to theoretical topics in information theory, language theory, and computer science.
VI
The appearance of textbooks was yet another signifier of the consolidation of academic computer science. Among the pioneers, perhaps no one understood the importance of texts better than Maurice Wilkes. As we have seen, The Preparation of Programs for an Electronic Digital Computer (1951), coauthored by Wilkes, David Wheeler, and Stanley Gill, was the first book on computer programming. Wilkes’s Automatic Digital Computers (1956) was one of the earliest (perhaps the first) comprehensive textbooks on the whole topic of computers and computing, and he would also write A Short Introduction to Numerical Analysis (1966) and the influential Time Sharing Computer Systems (1968). Another comprehensive textbook (albeit reflecting an IBM bias) was Automatic Data Processing (1963), authored by Frederick P. Brooks, Jr., and Kenneth E. Iverson, both (then) with IBM. In the realm of what might generally be called computer hardware design, IBM engineer R. K. Richards published Arithmetic Operations in Digital Computers (1955), a work that would be widely referenced for its treatment of logic circuits. Daniel McCracken (who became a prolific author) wrote Digital Computer Programming (1957) and, most notably, a bestseller—A Guide to FORTRAN Programming (1961)—the first of several “guides” on programming he would write throughout the 1960s.