
  IV

  We observe a similar pursuit of operational knowledge in Gill’s dissertation. As the title states, his concern was to show how the stored-program computer in general, and the EDSAC in particular, could solve problems in mathematics and physics. The basic schema for doing this, as his chapters explicate, was (a) describe the problem, (b) discuss the general approach to its solution, and (c) specify the process.

  The end product, the desired result, was a set of procedures—in his case, in the form of EDSAC programs (specifically, subroutines). These subroutines were the knowledge Gill produced, as he acknowledged.16

  However, it is not enough to describe the automatic procedures embedded in a subroutine. One must also describe the rules humans have to follow when deploying the subroutines. So human actions or choices are also knowledge products. Gill stipulates that one of the subroutines he had developed “may be placed anywhere” in the EDSAC memory;17 that another subroutine “requires” some additional subroutines to be used in conjunction with it.18 He writes that certain variables “are [to be] stored” in a certain manner in memory, that a certain number of memory locations “are required” by a certain subroutine, that a certain parameter “should be chosen” to meet a certain condition,19 that the first order of a particular subroutine “must be placed” in a certain location in memory.20
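
  The flavor of such rules may be suggested in present-day terms. What follows is a minimal sketch, with every name and number invented for illustration, of how human-facing deployment rules like Gill’s might be recast as explicit checks performed before a subroutine is used; Gill, of course, stated his rules in prose, not in code:

```python
# A minimal sketch of deployment rules as explicit checks. All names
# and numbers are invented for illustration.

def check_deployment(subroutine, memory_map):
    """Verify the human-facing rules before a subroutine is used."""
    # Rule: the subroutine "requires" certain other subroutines.
    for dep in subroutine["requires"]:
        if dep not in memory_map:
            raise ValueError(f"missing required subroutine: {dep}")
    # Rule: a certain number of memory locations "are required."
    start = memory_map[subroutine["name"]]
    if start + subroutine["length"] > subroutine["memory_limit"]:
        raise ValueError("subroutine does not fit in memory")

check_deployment(
    {"name": "integrate", "requires": ["print"], "length": 40,
     "memory_limit": 512},
    {"integrate": 300, "print": 200},
)
print("deployment rules satisfied")
```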

  The words in quotes specify what a user has to do to enable the subroutines to work successfully. Gill’s awareness of the nature of the knowledge he produced was revealed at the conclusion of one of the chapters wherein he claimed that the originality of his work lay in the particular subroutines described in that chapter, along with the programming techniques described for solving differential equations.21

  The operational knowledge produced by Wheeler and Gill was a blend of human and machine operational principles—very specifically, computer programs, procedures, notation (language) to use, and rules. Their contributions acknowledged the significance of the human being, and the necessity of a symbiosis of human and machine in automatic computing.22

  Moreover, they were not concerned just with the method of programming the EDSAC, although that machine was the vehicle of their thinking. Their concern was more universal: the general problem of program development (in present-centered language, programming methodology). Indeed, Gill must surely be among the very first to regard this new kind of artifact called a computer program in terms that psychologists would call developmental and biologists would call ontogenetic. I have used the word ontogeny earlier in this story (see Chapter 7, Section I). Biologists use this word to mean the “life history of an individual both embryonic and postnatal,”23 whereas, to the psychologist, development refers broadly to the growth of mental capabilities—of experience and competence—from infancy to adulthood.24 Both biologists and psychologists view development as a record of change (to the body in one case, the mind in the other) from an elemental, embryonic, or undifferentiated state into a complex, differentiated, mature, and fully functioning state. Gill carried this same notion into the domain of programming. Thus, he explains that when using an automatic computer to solve a problem, one must first express the problem in mathematical terms, and then formulate the latter in terms of the operations the computer must carry out. There is, then, a human process preceding the machine process, and the former is what Gill called “the development of the programme.” He further noted that this development process causes the “programme” to change “form,” both structurally and logically.25

  He identified three significant kinds of change: (a) transformation of “human-oriented” statements to “machine-oriented” statements—the former more easily comprehended by human beings, and the latter interpretable by the machine; (b) transition from a sequence of general statements to a precise, machine-executable sequence of operations; and (c) transformation from symbolic statements to assertions that relate to particular circuits within the machine.26 Gill noted that all these changes must happen before automatic computation can even begin. And the user’s aim was to effect these changes simply and economically.27
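
  In present-centered terms, the first two of these changes can be made concrete with a small sketch—written here in modern Python, with order names and addresses invented for illustration (they are not EDSAC’s actual order code):

```python
# A sketch of the first two of Gill's changes: a human-oriented
# statement is lowered, step by step, into a precise machine-oriented
# sequence of orders. Names and addresses are invented.

# Step 1: a human-oriented statement of the task.
human_statement = "y := a + b"

# Step 2: the same task as a sequence of general, symbolic operations.
general_plan = [
    ("LOAD", "a"),    # bring a into the accumulator
    ("ADD", "b"),     # add b to it
    ("STORE", "y"),   # put the result in y
]

# Step 3: the machine-oriented form—symbolic names replaced by the
# storage locations the machine actually uses.
addresses = {"a": 100, "b": 101, "y": 102}
machine_orders = [(op, addresses[name]) for op, name in general_plan]

print(machine_orders)  # [('LOAD', 100), ('ADD', 101), ('STORE', 102)]
```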

  So these changes are to be effected as part of program development “outside the machine.”28 However, program development does not end there; further changes must occur “during the input of the programme.”29 This is when the machine plays its first role in the process—for example, in converting decimal numbers to binary form30 or in inserting actual values into parameters within instructions. These tasks are performed by the initial orders. Other tasks performed by the machine during program development were the process of assembly and loading (see Chapter 9, Section VI), and the loading of subroutines into memory.
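
  The flavor of these input-time tasks can be suggested, again in frankly modern terms, by the following sketch; the helper routines are invented for illustration and make no claim to reproduce the initial orders themselves:

```python
# Two input-time tasks of the kind the text assigns to the initial
# orders, rendered as modern Python. Nothing here is EDSAC's code.

def decimal_digits_to_value(digits):
    """Accumulate punched decimal digits into an integer, one digit
    at a time; the machine holds the result internally in binary."""
    value = 0
    for d in digits:
        value = value * 10 + d
    return value

def fill_parameter(instruction, parameters):
    """Replace a symbolic parameter in an (operation, operand) pair
    with its actual value as the instruction is read in."""
    op, operand = instruction
    return (op, parameters.get(operand, operand))

print(decimal_digits_to_value([1, 9, 5, 2]))    # 1952
print(fill_parameter(("ADD", "H"), {"H": 45}))  # ('ADD', 45)
```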

  Last, program development also occurred “during the course of the calculation”31—that is, while a program was actually executing. Here, the agent of development was entirely the machine. A particular example was the use of “interpretive routines.” An interpretive routine is a routine held in a computer’s memory that executes the instructions of another routine, also held in memory, one at a time. In other words, an interpretive routine translates each instruction into a set of actions that are executed immediately, before the next instruction is interpreted.32
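
  A minimal modern sketch of an interpretive routine in this sense—with a three-instruction code invented purely for illustration—might look as follows:

```python
# Interpreter and interpreted routine sit in the same memory; each
# instruction is translated into actions and executed at once, before
# the next instruction is examined. The instruction set is invented.

memory = {
    0: ("SET", 10),   # accumulator := 10
    1: ("ADD", 32),   # accumulator := accumulator + 32
    2: ("HALT", 0),
}

def interpret(memory):
    accumulator, pc = 0, 0
    while True:
        op, operand = memory[pc]   # fetch the next instruction
        pc += 1
        if op == "SET":            # decode and execute it immediately
            accumulator = operand
        elif op == "ADD":
            accumulator += operand
        elif op == "HALT":
            return accumulator

print(interpret(memory))  # 42
```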

  The term interpretive routine33 would later enter the mainstream language of programming as interpreter.34 Interestingly, Gill likened the action of an interpretive routine to the operation of Alan Turing’s “universal computing machine” of 1936 (see Chapter 4, Section V), which can imitate another machine.35 This is a very rare instance of the early innovators of practical stored-program computers making an explicit reference to Turing’s theoretical work.

  Gill clearly believed that an understanding of the stages in the development of a program—with humans and machines as co-agents in the process—was important, and that the articulation of this understanding constituted an important body of operational knowledge. Thus, his claim to the originality of his PhD dissertation lay as much (as he stated) in his contribution to this overall understanding as in the development of particular procedures (subroutines) for solving particular problems in mathematics and physics.36

  V

  These two dissertations for which PhDs were conferred by the University of Cambridge are invaluable in what they reveal about an emerging science of computing. They offer glimpses of what we may call a scientific style in the realm of computing research.

  Discussions of style belong predominantly to the realms of art, architecture, and literature. Indeed, for some art historians the history of art is the history of artistic style,37 just as the history of architecture is the history of architectural style.38 For art and architectural historians, style refers to those features of paintings, sculptures, and buildings that allow works to be “placed” in their historical setting.39

  But style also refers to the way in which something is done. In this sense, it characterizes some pattern in the way one perceives, visualizes, thinks, reasons, symbolizes, represents, and draws on knowledge in the pursuit of doing something. Generally speaking, we may call this cognitive style; understanding a person’s creativity may entail unveiling his cognitive style.40 In the realm of science, we can detect an individual scientist developing her own particular cognitive style of doing science, her scientific style.41

  We can also imagine a community of like-minded people coming to share a cognitive style. This is well understood in the realms of painting (such as impressionism, surrealism), and literary writing (such as magical realism); it may also produce, among a community of scientists, a shared scientific style.

  It is in this sense that Wheeler’s and Gill’s dissertations document a scientific style. Its fundamental trait was operationalism—the search for rules, procedures, operations, methods—and its product was operational knowledge. If a computer science was slowly emerging, the production or generation of operational knowledge about humans and machines cooperating in support of automatic computing was certainly one of its first manifestations.

  NOTES

  1. D. J. Wheeler. (1951). Automatic computing with the EDSAC. PhD dissertation, University of Cambridge.

  2. S. Gill. (1952). The application of an electronic digital computer to problems in mathematics and physics. PhD dissertation, University of Cambridge.

  3. S. H. Lavington. (1998). A history of Manchester computers. London: The British Computer Society (original work published 1976); S. H. Lavington & C. Burton. (2012). The Manchester machines; S. H. Lavington (ed.). (2012). Alan Turing and his contemporaries (chapter 4). London: British Computer Society.

  4. A. Newell, A. J. Perlis, & H. A. Simon. (1967). What is computer science? Science, 157, 1373–1374.

  5. P. Wegner. (1970). Three computer cultures: Computer technology, computer mathematics, and computer science. In F. L. Alt (Ed.), Advances in computers (Vol. 10, pp. 7–78). New York: Academic Press.

  6. P. S. Rosenbloom. (2013). On computing. Cambridge, MA: MIT Press; P. J. Denning. (2007). Computing is a natural science. Communications of the ACM, 50, 13–18; P. J. Denning & P. A. Freeman. (2009). Computing’s paradigm. Communications of the ACM, 52, 28–30.

  7. Wheeler, op cit., Preface.

  8. Gill, op cit., Preface.

  9. Ibid.

  10. Wheeler, op cit., p. 25.

  11. Ibid., p. 26.

  12. Ibid., p. 49.

  13. S. Dasgupta. (1996). Technology and creativity (pp. 33–34). New York: Oxford University Press.

  14. M. Polanyi. (1962). Personal knowledge (p. 176). Chicago, IL: University of Chicago Press.

  15. Dasgupta, op cit., pp. 157–158.

  16. Gill, op cit., p. 40.

  17. Ibid., p. 41.

  18. Ibid., p. 203.

  19. Ibid.

  20. Ibid., p. 204.

  21. Ibid., p. 49.

  22. Ibid., pp. 62–87.

  23. S. J. Gould. (1977). Ontogeny and phylogeny (p. 483). Cambridge, MA: Belknap Press of Harvard University Press.

  24. See, for example, J. Piaget. (1976). The child & reality. Harmondsworth, UK: Penguin Books; M. Donaldson. (1992). Human minds: An exploration (p. 190). Harmondsworth, UK: Penguin Books.

  25. Gill, op cit., p. 63.

  26. Ibid.

  27. Ibid.

  28. Ibid., p. 67.

  29. Ibid., p. 71.

  30. Ibid., p. 72.

  31. Ibid., p. 77.

  32. Ibid., p. 78.

  33. The term was apparently coined by another member of the EDSAC group, an Australian, John Bennett (1921–2010), who was, in fact, the first research student to join the Mathematical Laboratory in Cambridge. See M. V. Wilkes. (1985). Memoirs of a computer pioneer (p. 140). Cambridge, MA: MIT Press. See also Gill, op cit., p. 78.

  34. F. P. Brooks, Jr. & K. E. Iverson. (1969). Automatic data processing: System/360 edition (pp. 365 ff). New York: Wiley.

  35. Gill, op cit., p. 80.

  36. Ibid., p. 87.

  37. H. Wölfflin. (1932). Principles of art history. New York: Dover Publications.

  38. N. Pevsner. (1962). An outline of European architecture. Harmondsworth, UK: Penguin Books.

  39. R. Wollheim. (1984). Painting as an art (p. 26 et seq.). Princeton, NJ: Princeton University Press.

  40. See, for example, S. Dasgupta. (2003). Multidisciplinary creativity: The case of Herbert A. Simon. Cognitive Science, 27, 683–707.

  41. Ibid.

  11

  I Compute, Therefore I Am

  I

  THE 1940S WITNESSED the appearance of a handful of scientists who, defying the specialism characteristic of most 20th-century science, strode easily across borders erected to protect disciplinary territories. They were people who, had they been familiar with the poetry of the Indian poet–philosopher and Nobel laureate Rabindranath Tagore (1861–1941), would have shared his vision of a “heaven of freedom”:

  Where the world has not been broken up into

  fragments by narrow domestic walls.1

  Norbert Wiener (1894–1964), logician, mathematician, and prodigy, who was awarded a PhD by Harvard at age 17, certainly yearned for this heaven of freedom in the realm of science as the war-weary first half of the 20th century came to an end. He would write that he and his fellow scientist and collaborator Arturo Rosenblueth (1900–1970) had long shared a belief that, although during the past two centuries scientific investigations had become increasingly specialized, the most “fruitful” arenas lay in the “no-man’s land” between the established fields of science.2 There were scientific fields, Wiener remarked, that had been studied from different sides, each side bestowing its own name on the field, each ignorant of what the others had discovered, thus creating work that was “triplicated or quadruplicated” because of mutual ignorance or incomprehension.3

  Wiener, no respecter of “narrow domestic walls,” would inhabit such “boundary regions”—between mathematics, engineering, biology, and sociology—and create cybernetics, a science devoted to the study of feedback systems common to living organisms, machines, and social systems. Here was a science that straddled the no-man’s land between the traditionally separate domains of the natural and the artificial. Wiener’s invention of cybernetics after the end of World War II was a marker of a certain spirit of the times, when scientists began to forge the kinds of serious links between nature and artifact that Wiener had yearned for.

  It is inevitable that this no-man’s land between the natural and the artificial should be part of this story. Ingenious automata—devices that replicated, under their own steam (so to speak), certain kinds of actions performed by living things, including humans—had been known since antiquity (see Chapter 3, Section IX). However, the computer was an entirely new genus of automata, for it seemed to replicate, not action, but human thought.

  Ada, Countess of Lovelace, had cautioned her reader not to mistake the Analytical Engine for anything but a machine. It had no power to initiate anything; it could only do what humans had “ordered” it to do (see Chapter 2, Section VIII). However, by the early 1940s, even before any kind of stored-program digital computer had been conceived, but stimulated by such analog machines as the differential analyzer, human imagination had already stepped into the boundary region separating man from machine, the natural from the artificial—had straddled and bridged the chasm. The year 1943 was noteworthy in this respect on both sides of the Atlantic.

  II

  That year, in Cambridge, England, Kenneth Craik (1914–1945)—trained as a philosopher and psychologist and, like his contemporary Maurice Wilkes, a fellow of St. John’s College—published a short book called The Nature of Explanation. In a chapter titled “Hypothesis on the Nature of Thought,” he explored the neural basis of thought. He suggested that the essence of the thought process is symbol processing, of a kind similar to what we are familiar with in mechanical calculating devices.4 He drew this analogy, let us remember, at a time when, in Cambridge, the digital computer was still a few years away, when the archetypal calculating machine he knew was the model differential analyzer,5 which he may have seen in use in the department of physical chemistry at the university (see Chapter 8, Section XI).

  Indeed, Craik argued, it was not merely that thought uses symbol processing, but that all of thought is symbol processing.6 The process of thinking, as Craik conceived it, involved “the organism” carrying in the head symbolic representations of aspects of the external world, along with symbolic representations of the organism’s own actions. Thought, then, entails manipulating these symbolic models with the represented actions—that is, simulating actions, and their effects on external reality, symbolically. Such symbolic simulation parallels the way analog computers (such as the differential analyzer) represent a system analogically and compute on that representation.
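
  Craik’s hypothesis can be caricatured in a few lines of present-day code: the organism holds a small symbolic model of the world and tries an action on the model first, predicting the action’s effect without acting on the world itself. Everything in the sketch—the model, the actions—is invented for illustration:

```python
# A toy rendering of Craik's hypothesis under avowedly modern
# assumptions: act on the internal model first, not on the world.

world_model = {"door": "closed", "light": "off"}

def simulate(model, action):
    """Apply an action to a copy of the internal model ('thought'),
    leaving the world—and the model itself—untouched."""
    trial = dict(model)
    if action == "open door":
        trial["door"] = "open"
    elif action == "switch light":
        trial["light"] = "on" if trial["light"] == "off" else "off"
    return trial

# Predict the effect of an action by running it on the model:
print(simulate(world_model, "open door"))  # {'door': 'open', 'light': 'off'}
print(world_model)                         # unchanged: no action taken yet
```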

  Craik, as Wilkes recalled in his memoir, was perhaps unusual among philosophers and psychologists because he was seriously interested in gadgets. Apparently, he made pocket-size models of objects such as steam engines7—thus, perhaps, the analogy between thinking and mechanical calculation. Unfortunately, he had no chance to pursue his hypothesis, for he was hit and killed by a car while bicycling on a Cambridge street on May 7, 1945, the eve of VE Day.8

  Craik’s insight that thinking involves symbolic representations in the nervous system of things in the world, and the processing of such symbolic representations, speculative though it was, makes him one of the earliest figures in the emergence of what much later came to be named cognitive science—the study of the mental processes by which humans (and some animals) make meaning of their experiences in the world.9 However, although his ideas were widely discussed by neurophysiologists and psychologists in Britain,10 he had no apparent impact on the other side of the Atlantic. But then, America had its own first explorers of the relationship between cerebration and computation, who advanced their own, very different, and somewhat more precise views of that relationship. By coincidence, these explorers also published their first work on it in 1943.

  III

  That year, an article titled “A Logical Calculus of the Ideas Immanent in Nervous Activity” was published by Warren McCulloch (1898–1969), a physician-turned-neurophysiologist who, like John von Neumann, was a polymath of the kind that would have warmed (and no doubt did warm) the cockles of Wiener’s heart, and Walter Pitts (1923–1969), a mathematical logician. The journal in which the article appeared, Bulletin of Mathematical Biophysics, suggests that the target reader was a theoretical biologist—this despite the fact that the paper cited only three references, all authored by world-renowned logicians.11

  These authors were interested in constructing a formalism for describing neural activity. According to the then-current thinking in theoretical neurophysiology, the nervous system comprised a network of nerve cells, or neurons. A neuron connects to others through nerve fibers called axons, which branch out through finer structures called dendrites, and these end on the surfaces of other neurons in the form of entities called synapses.
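
  The formalism they built on this picture is usually rendered today as the McCulloch–Pitts neuron: a unit that fires (output 1) when the weighted sum of its binary inputs reaches a threshold, and is otherwise silent (output 0). A minimal sketch, with weights and thresholds chosen for illustration rather than taken from the 1943 paper:

```python
# The McCulloch-Pitts formalism in its usual modern rendering.
# Weights and thresholds here are illustrative only.

def mp_neuron(inputs, weights, threshold):
    """All-or-none response of a single model neuron."""
    total = sum(x * w for x, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# Logical operations realized by single neurons:
print(mp_neuron([1, 1], [1, 1], threshold=2))  # AND(1, 1) -> 1
print(mp_neuron([0, 1], [1, 1], threshold=1))  # OR(0, 1)  -> 1
```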

 
