Among trade publishing houses, Prentice-Hall launched its Prentice-Hall Series in Automatic Computation during the 1960s. Among its most influential early texts was Marvin Minsky’s Computation: Finite and Infinite Machines (1967), a work on automata theory. By the time this book appeared, there were already more than 20 books in this series, on numeric analysis; the programming languages PL/I, FORTRAN, and Algol; and applications of computing. McGraw-Hill also started its Computer Science Series during the 1960s. Among its early volumes was Gerard Salton’s Automatic Information Organization and Retrieval (1968). The author was one of the progenitors of another subparadigm in computer science during the 1960s, dedicated to the theory of, and techniques for, the automatic storage and retrieval of information held in computer files; this branch of computer science would link the field to library science. And, as we have seen, Addison-Wesley, as the publisher of the Wilkes/Wheeler/Gill text on programming in 1951, can lay claim to being the first trade publisher in computer science. It also published, during the 1960s, the first two volumes of Donald Knuth’s The Art of Computer Programming (1968 and 1969, respectively). Another publisher, Academic Press, distinguished for its dedication to scholarly scientific publications, inaugurated in 1963 its Advances in Computers series of annual volumes, each composed of long, comprehensive, and authoritative chapter-length surveys and reviews of specialized topics in computer science by different authors.
The explosion of subparadigms during the 1960s was thus accompanied by a proliferation of periodicals (and, with them, articles) and books.
VII
The computer science paradigm that had emerged by the end of the 1960s, then, comprised a core practical concept and a core theory: the former, the idea of the stored-program computer; the latter, a theory of computation as expressed by the Turing machine. These core elements were surrounded by a cluster of subparadigms, each embodying a particular aspect of automatic computation, each nucleating into a “special field” within (or of) computer science, to wit: automata theory, logic design, theory of computing, computer architecture, programming languages, algorithm design and analysis, numeric analysis, operating systems, artificial intelligence, programming methodology, and information retrieval. Looking back from the vantage of the 21st century, these can be seen as the “classic” branches of computer science. They were all, in one way or another, concerned with the nature and making of computational artifacts—material, abstract, and liminal.
We have also seen that a central and vital methodology characterized this paradigm: the twinning of design-as-theory (or the design-process-as-theory-construction) and implementation-as-experimentation. Even abstract computational artifacts (algorithms and computer languages) or the abstract faces of liminal artifacts (programs, computer architectures, sequential machines) are designed. The designs are the theories of these artifacts. And even abstract artifacts are implemented; the implementations become the experiments that test empirically the designs-as-theories. Algorithms are implemented as programs, programs (abstract texts) become executable software, programming languages by way of their translators become liminal tools, computer architectures morph into physical computers, and sequential machines become logic or switching circuits. Turing machines are the sole, lofty exceptions; they remain abstract and, although they are designed, they are never implemented.
This methodology—design-as-theory/implementation-as-experimentation—is very much the core methodology of most sciences of the artificial, including the “classical” engineering disciplines. It is what bound the emerging computer science to the other artificial sciences on the one hand and separated it from both mathematics and the natural sciences on the other. And it was this synergy of design and implementation that made the computer science paradigm a fundamentally empirical, rather than a purely mathematical or theoretical, science.
Another feature of the paradigm we have seen emerge is that, although computer scientists may have aspired to universal laws in the spirit of the natural sciences, they were rather more concerned with the individual. A design of a computational artifact is the design of an individual artifact; it is a theory of (or about) that particular artifact, be it an algorithm, a program, a language, an architecture, or whatever. Computer science as a science of the artificial is also, ultimately, a science of the individual.
VIII
This has been a narrative about the genesis of computer science—not about computers per se, nor about the “information age” or the “information society.” Thus, it is not a social history of technology. However, insofar as it is concerned with computational artifacts, and insofar as artifacts help define cultures, the history I have outlined here belongs, in part, to cultural history.6
More fundamentally, though, this story straddles intellectual history on the one hand and cognitive history on the other.
It is intellectual history, in the older sense, celebrated by American scholar Arthur O. Lovejoy (1873–1962) in his The Great Chain of Being (1936) as the history of ideas—how ideas are consciously born, propagated, and transformed over time, and how they spawn new ideas.7 The “newer,” postmodern meaning of intellectual history is somewhat different. Rather than ideas, the focus has shifted to texts and the way language is used.8 The history of the idea we have followed here is, of course, that of automatic computing. What does it mean? How does one carry it out? How do we render it practical? How do we guarantee its correctness? How do we improve its efficiency? How do we describe it? What are its limits?
But this story is also cognitive history, a term of quite recent vintage. Cognitive history attempts to understand the creative past in terms of (conscious and unconscious) thought processes that created that past. It involves relating goals, purpose, knowledge, even emotions, styles of doing and thinking, and how they interact in the creative moment—regardless of whether what is created is an idea, a symbolic system, or a material artifact.9
This particular history I have told here thus straddles the cultural, the intellectual, and the cognitive. Computer science is a science of the artificial wherein culture (artifacts and symbols), cognition (purpose, knowledge, beliefs), and ideas intersect.
IX
In this postmodern (or perhaps “post-postmodern”) age, there is a social aspect to this history that is markedly visible. The protagonists in this story are all white and almost all male. The only women we have encountered are Ada, Countess of Lovelace, during the mid-19th century and Grace Murray Hopper during the mid-20th century. It would be unfair to say that there were no other women who played roles in this history. This book mentions such authors as Adele K. Goldstine, Herman Goldstine’s wife, collaborator, and coauthor; and Alice C. Burks, Arthur Burks’s wife and coauthor. We also know of a “Miss B. H. Worsley,” who was a member of Maurice Wilkes’s EDSAC team and who was credited with the preparation of the “account” of the first EDSAC demonstration in June 1949.10 The ENIAC project was, in fact, peopled by several women “computers,” including Kay McNulty (later Kay Mauchly, John Mauchly’s wife), who were involved in programming the ENIAC. Still, the sparseness of women in this history is glaring, to say the least. As in the case of other sciences (natural and artificial), as in art, this story is largely a story of men.11
It is also, as we have seen, a history of only white European-Americans (see Dramatis Personae). People of other ethnicities have not figured in this account. Whether these aspects—gender and race—of the social, intellectual, cognitive, and cultural history of computer science change during the 1970s and thereafter and, if so, in what manner, remains to be told in another tale.
X
We began this story with a brief discourse on the fundamental nature of computer science—what philosophers would call its ontology. Its particularity and peculiarity (we argued) stemmed from the view that computer science is the science of automatic computation, and that it entails the interplay of three kinds of computational artifacts (material, abstract, and liminal), that it is a science of symbol processing, a science of the artificial, a science of the “ought,” and (mainly) a science of the individual (see Prologue).
That discourse was not intended to be part of the historical narrative, but rather a 21st-century meditation that belongs more to the philosophy of computer science (its analysis and interpretation) than its history. And yet, as the 1960s ended (as does this story), and the computational paradigm expanded with the “explosion of subparadigms” (see Chapter 15), and the first (undergraduate and graduate) students wended their way through the first academic degree programs in computer science, there were philosophical rumblings about the discipline that were part of the historical narrative itself. There were skeptics who questioned the very idea of computer science as a science. And it fell to the members of the embryonic computer science community to defend their paradigm, their newly won intellectual territory, and to insist on the distinct scientific identity of their new discipline.
This defense was carried into the sanctum sanctorum of natural science itself, into the pages of the weekly Science, arguably America’s most prestigious and widely read periodical devoted to all branches of (especially natural) science. The defenders were three influential members of this new community: Alan Perlis, a major participant in the development of the Algol programming language (see Chapter 13, Section XIV), and Allen Newell and Herbert Simon, two of the creators of heuristic programming and artificial intelligence (see Chapter 14, Sections II–V).
Perlis, Simon, and Newell were the founding faculty, in 1965, of the computer science department at the Carnegie Institute of Technology (later, Carnegie-Mellon University) in Pittsburgh, with Perlis as its first head of department.12 In 1967, Science published a very short article by these scientists titled “What Is Computer Science?,” which they began by noting, a mite ruefully perhaps, that computer science professors are often asked by skeptics whether there really is a discipline of computer science and, if so, what its nature is.13 Their answer was categorical: a science comes into being when some domain of phenomena requires description and explanation. Computers and the phenomena surrounding them constitute such a domain; computer science is quite simply the study of computers and their associated phenomena.
But the disbelievers (not specifically named in the article) had raised many objections that Newell and colleagues were willing to confront—that the sciences deal with natural phenomena, whereas computers belong to the world of artifacts; that science is a series of quests for universal laws, whereas artifacts cannot obey such laws; that computers are instruments and the behavior of instruments “belongs” to the sciences that gave rise to them (as the electron microscope “belongs” to physics); that different parts of computer science can be parceled out to more traditional branches such as electronics, mathematics, and psychology (thus leaving nothing that is intrinsically computer science); that computers belong to the realm of engineering, not science.
Newell and colleagues refuted these objections (rather more tersely than one might have expected). They argued, for example, that even though computers are artificial, computational phenomena are described and explained on a “daily” basis; that even if the computer is an instrument, its complexity, richness, and uniqueness are such that its behavior cannot be described or explained adequately by any other existing science. As to computing belonging to electronics or mathematics or psychology, some parts of it do, indeed, fall within these domains, but in its entirety it belongs to no one existing science. Regarding the claim that computers belong to engineering and not science, Newell, Perlis, and Simon countered that computers belong to both, just as electricity (as a phenomenon) belongs to both physics and electrical engineering, and plants to both botany and agriculture.
So we see that ruminations about the ontology of computer science were integral to the history of its genesis. The very identity of computation as a distinct paradigm, of computer science as a distinct science of its own, had to be defended and justified by the first people who called themselves computer scientists.
Was this anxiety dispelled with time, during the 1970s and thereafter? In fact, this debate has continued, sporadically, into the 21st century. The Prologue of this book happens to be my own perspective on this issue, but like the aspects of gender and ethnicity in computer science, the later evolution of the ontological status of computer science remains yet another story to be told.
NOTES
1. F. Fukuyama. (1992). The end of history and the last man. New York: The Free Press.
2. E. H. Carr. (1964). What is history? (p. 12). Harmondsworth, UK: Penguin Books (original work published 1961).
3. T. S. Kuhn. (1970). The structure of scientific revolutions (2nd ed.). Chicago, IL: University of Chicago Press.
4. J. F. Traub. (1972). Numerical mathematics and computer science. Communications of the ACM, 15, 531–541 (see especially p. 538).
5. Ibid., p. 538.
6. P. Burke. (2008). What is cultural history? Cambridge: Polity.
7. A. O. Lovejoy. (1936). The great chain of being. Cambridge, MA: Harvard University Press.
8. See, for example, D. LaCapra. (1983). Rethinking intellectual history. Ithaca, NY: Cornell University Press; A. Brett. (2002). What is intellectual history now? In D. Cannadine (Ed.), What is history now? (pp. 113–131). Basingstoke, UK: Palgrave Macmillan.
9. See, for example, N. Nersessian. (1995). Opening the black box: Cognitive science and history of science. Osiris, 10, 196–215; D. B. Wallace & H. E. Gruber. (Eds.). (1989). Creative people at work. New York: Oxford University Press; S. Dasgupta. (2003). Multidisciplinary creativity: The case of Herbert A. Simon. Cognitive Science, 27, 683–707.
10. Anon. (1950). Report of a conference on high speed automatic calculating machines, 22–25 June (p. 12). Cambridge, UK: University Mathematical Laboratory.
11. For the presence of women in the history of the natural sciences and mathematics, see, for example, L. Pyenson & S. Sheets-Pyenson. (1999). Servants of nature (pp. 335–349). New York: W.W. Norton. For the place of women in art, see W. Chadwick. (2007). Women, art and society (4th ed.). London: Thames & Hudson.
12. History. Carnegie-Mellon University. Available: http://www.csd.cs.cmu.edu.
13. A. Newell, A. J. Perlis, & H. A. Simon. (1967). What is computer science? Science, 157, 1373–1374.
Dramatis Personae1
I
Howard H. Aiken (1900–1973). American physicist and designer of the Harvard-IBM Mark I and Mark II electromechanical computers. Organized the first American conference on computing.
Gene Amdahl (1922–). American computer designer. Coarchitect of the IBM System/360 computer.
John Vincent Atanasoff (1903–1995). American physicist. Co-inventor and implementer of the electronic Atanasoff-Berry Computer (ABC).
Charles Babbage (1791–1871). British mathematician, scientist, and economist. Inventor and designer of the Difference Engine and the Analytical Engine.
John Backus (1924–2007). American mathematician, and programming language and meta-language designer. Invented the FORTRAN programming language. Codeveloper of the FORTRAN compiler. Inventor of the Backus Normal Form (or Backus-Naur Form; BNF) notation for syntactic descriptions of programming languages.
Friedrich L. Bauer (1924–). German mathematician and computer scientist. Inventor of a method of mechanically evaluating arithmetic expressions. Contributor to the development of Algol 58 and Algol 60 programming languages.
Clifford Berry (1918–1963). American electrical engineer. Co-inventor and implementer of the Atanasoff-Berry Computer (ABC).
Julian Bigelow (1913–2003). American mathematician and computer designer. Contributed to the founding of cybernetics. Codeveloper of the IAS computer.
Gerrit Blaauw (1924–). American computer systems designer. Coarchitect of the IBM System/360 computer.
Corrado Böhm (1923–). Italian computer theorist. Designed and implemented an early programming language.
Léon Bollée (1870–1913). French inventor and manufacturer. Invented a multiplication algorithm and a mechanical multiplication machine.
George Boole (1819–1864). British mathematician. Inventor of Boolean algebra, a calculus for symbolic logic.
Andrew D. Booth (1918–2009). British physicist. Early explorer of automatic machine translation of natural languages.
John G. Brainerd (1904–1988). American electrical engineer. Codeveloper of the ENIAC electronic programmable computer.
Frederick P. Brooks (1931–). American computer systems designer and software design manager. Coarchitect of the IBM System/360 computer and team manager of the IBM OS/360 operating system project.
Arthur W. Burks (1915–2008). American mathematician, engineer, computer theorist, and philosopher of science. Codeveloper of the ENIAC electronic programmable computer, historian of the ENIAC project, and writer on cellular automata theory.