It Began with Babbage


by Dasgupta, Subrata


  LISP incorporates a number of basic functions. They include the following. Suppose x, y are lists. Then

CAR x returns the first element of x (which may itself be an atom or a list)

  CDR x returns x with the first element deleted

  CDDR x returns x with the first two elements deleted

CONS x, y returns the list formed by prefixing x to the list y

  NULL x returns T (true) if list x is empty

  A list of elements is always depicted within parentheses, the elements separated by spaces. For example, (A B C) is a list of the elements A, B, C, each of which may be an “atomic” element or itself a list. So, as examples of the use of the previously mentioned operations, we may have the following (illustrative instances, consistent with the definitions above):
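
  CAR (A B C) returns A

  CDR (A B C) returns (B C)

  CDDR (A B C) returns (C)

  CONS A, (B C) returns (A B C)

  NULL ( ) returns T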

  Now suppose we wish to define a function ALT that, given a list, returns the alternate elements of the list, beginning with the first element. Then, in LISP, the function may be written roughly as follows (a sketch in a LISP-like notation; the exact form of a definition varied across early LISP dialects):
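
    (DEFINE ALT (X)                            ; a sketch in LISP-like notation (illustrative)
      (COND ((OR (NULL X) (NULL (CDR X))) X)   ; empty or one-element list: return X
            (T (CONS (CAR X) (ALT (CDDR X))))))  ; else prefix CAR X to ALT of CDDR X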

  What this says is that if X is either the empty list or consists of just one element (that is, CDR X is empty), then the function ALT returns X as its value. Otherwise, take CAR X and prefix it to the list returned by recursively calling the function ALT with CDDR X as the argument.

  In Algol-like notation, this same function might appear roughly as follows (a sketch, treating the list operations as given functions):
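
    procedure ALT(X);
    comment a sketch in Algol-like pseudocode, with NULL, CAR, CDR, CDDR, and CONS taken as given list functions;
    begin
      if NULL(X) or NULL(CDR(X)) then
        ALT := X
      else
        ALT := CONS(CAR(X), ALT(CDDR(X)))
    end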

  For example, if X is the empty list ( ), then ( ) will be returned. If X is the list containing a single element (A), then (A) will be returned. Suppose X is a list (A B C D E). Then, ALT will return the list (A C E).

  LISP was fundamentally different from such languages as Algol and FORTRAN in that it was a language intended to cope with a different kind of computer culture. In its implementation, it differed from procedural languages in that programs written in the latter languages were compiled into a machine’s “native” language before execution, whereas LISP (like APL; see Chapter 13, Sections XXII et seq.) was an interpretive language. The LISP interpreter examines a LISP program statement, determines what the statement specifies, and executes it on the computer before moving to the next statement.

  NOTES

  1. G. Polya. (1957). How to solve it (2nd ed.). Princeton, NJ: Princeton University Press (original work published 1945).

  2. Ibid., p. vii.

  3. Ibid., p. 113.

  4. Ibid.

  5. H. A. Simon. (1991). Models of my life (p. 199). New York: Basic Books.

  6. Ibid.

  7. S. Dasgupta. (2003). Multidisciplinary creativity: The case of Herbert A. Simon. Cognitive Science, 27, 683–707.

  8. H. A. Simon. (1976). Administrative behavior (3rd ed.). New York: Free Press (original work published 1947).

  9. H. A. Simon. (1947). The axioms of Newtonian mechanics. Philosophical Magazine, 7, 889–905.

  10. H. A. Simon, D. R. Smithburg, & V. A. Thompson. (1950). Public administration. New York: Alfred A. Knopf.

  11. Simon, 1991, op cit., p. 101.

  12. H. A. Simon. (1952a). On the definition of the causal relation. Journal of Philosophy, 49, 517–528.

  13. H. A. Simon. (1952b). Application of servomechanism theory to production control. Econometrica, 20, 247–268.

  14. H. A. Simon. (1952c). A formal theory of interaction in social groups. American Sociological Review, 17, 202–211.

  15. Simon, 1991, op cit., pp. 197–198.

  16. S. Carlson. (1979). The prize for economic science. In Les Prix Nobel 1978. Stockholm: The Nobel Foundation.

  17. C. I. Barnard. (1938). The functions of the executive. Cambridge, MA: Harvard University Press.

  18. Simon, 1976, op cit., p. 76.

  19. H. A. Simon. (1957). Rationality in administrative decision making. In H. A. Simon. Models of man (pp. 196–206). New York: Wiley.

  20. H. A. Simon. (1950). Administrative aspects of allocative efficiency. Cowles Commission discussion paper, economics no. 281.

  21. H. A. Simon to T. C. Koopmans, September 29, 1952. Herbert A. Simon papers. Carnegie-Mellon University Archives, Pittsburgh, PA. Further reference to material contained in these archives will be abbreviated HASP.

  22. H. A. Simon to J. von Neumann, June 24, 1953, HASP, op cit.

  23. Simon, 1991, op cit., p. 201.

  24. Ibid.

  25. Ibid.

  26. A. Newell. (1955). The chess machine: An example of dealing with a complex task by adaptation. Proceedings of the Western Joint Computer Conference, 7, 101–108.

  27. H. A. Simon. (1955). A behavioral model of rational choice. Quarterly Journal of Economics, 69, 99–118.

  28. Ibid., p. 99.

  29. Ibid., p. 104.

  30. H. A. Simon. (1956). Rational choice and the structure of the environment. Psychological Review, 63, 129–138 (see especially p. 129).

  31. Simon, 1991, op cit., p. 201.

  32. A. Newell & H. A. Simon. (1956). The Logic Theory machine: A complex information processing system. Technical report P-868. Santa Monica, CA: The RAND Corporation. Also published in IRE Transactions on Information Theory, IT-2, 61–79. Citations of this work refer to the RAND report.

  33. Newell & Simon, op cit., p. 27.

  34. Ibid.

  35. Simon, 1991, op cit., p. 206.

  36. Ibid., p. 205.

  37. Ibid.

  38. Newell & Simon, op cit., p. 26.

  39. Ibid., p. 30.

  40. Ibid.

  41. Ibid.

  42. Ibid., p. 37.

  43. H. A. Simon to B. Russell, October 2, 1956; B. Russell to H. A. Simon, November 2, 1956. HASP, op cit.

  44. A. Newell, J. C. Shaw, & H. A. Simon. (1958). Elements of a theory of human problem solving. Psychological Review, 65, 151–166 (see especially p. 151).

  45. Ibid.

  46. A. Newell & H. A. Simon. (1976). Computer science as empirical inquiry: Symbols and search (Turing Award Lecture). Communications of the ACM, 19, 113–126. Reprinted in Anon. (1987). ACM Turing Award lectures: The first twenty years 1966–1985 (pp. 287–313). New York: ACM Press (see especially p. 290).

  47. Newell, Shaw, & Simon, op cit., p. 156.

  48. Ibid., p. 165.

  49. There were three others who were the proposal’s “originators”: Marvin Minsky (1927–), then a Harvard Junior Fellow and later, cofounder, with McCarthy, of the Artificial Intelligence Project at MIT; Nathaniel Rochester (1919–2001) of IBM, a designer of the IBM 701 and, at the time, manager of information research in IBM’s Poughkeepsie laboratory; and Claude Shannon, whom we have already encountered, a mathematician at Bell Telephone Laboratories.

  50. J. McCarthy, M. L. Minsky, N. Rochester, & C. E. Shannon. (1955). A proposal for the Dartmouth summer research project on artificial intelligence [On-line]. August 31. Available: http://www-formal.stanford.edu/jmc/history/dartmouth/dartmouth.html

  51. B. von Eckardt. (1993). What is cognitive science? Cambridge, MA: MIT Press; Z. W. Pylyshyn. (1984). Computation and cognition. Cambridge, MA: MIT Press.

  52. J. Bruner. (1990). Acts of meaning. Cambridge, MA: Harvard University Press.

  53. See, for example, R. Lachman, J. L. Lachman, & E. C. Butterfield. (1979). Cognitive psychology and information processing. Hillsdale, NJ: Lawrence Erlbaum Associates.

  54. A. L. Samuel. (1959). Some studies in machine learning using the game of checkers. IBM Journal of Research and Development, 3, 210–229.

  55. H. Gelernter. (1959). Realization of a geometry theorem proving machine. In Proceedings of the International Conference on Information Processing (pp. 273–282). London: Butterworth.

  56. B. L. Whorf. (1956). Language, thought and reality. Cambridge, MA: MIT Press.

  57. Simon, 1991, op cit., p. 213.

  58. Ibid.

  59. J. McCarthy. (1981). History of LISP. In R. L. Wexelblat (Ed.), History of programming languages (pp. 173–185). New York: Academic Press.

  15

  An Explosion of Subparadigms

  I

  In 1962, Purdue University in West Lafayette, Indiana, in the United States opened a department of computer science with the mandate to offer master’s and doctoral degrees in computer science.1 Two years later, the University of Manchester in England and the University of Toronto in Canada also established departments of computer science.2 These were the first universities in America, Britain, and Canada, respectively, to recognize a new academic reality formally—that there was a distinct discipline with a domain that was the computer and the phenomenon of automatic computation.

  Thereafter, by the late 1960s—much as universities had sprung up all over Europe during the 12th and 13th centuries after the founding of the University of Bologna (circa 1150) and the University of Paris (circa 1200)—independent departments of computer science sprouted across the academic maps of North America, Britain, and Europe. Not all the departments used computer science in their names; some preferred computing, some computing science, some computation. In Europe, non-English terms such as informatique and informatik were used. But what was recognized was that the time had come to wean the phenomenon of computing away from mathematics and electrical engineering, the two most common academic “parents” of the field, and also from computer centers, which were in the business of offering computing services to university communities. A scientific identity of its very own was thus established. Practitioners of the field could call themselves computer scientists.

  This identity was shaped around a paradigm. As we have seen, the epicenter of this paradigm was the concept of the stored-program computer as theorized originally in von Neumann’s EDVAC report of 1945 and realized physically in 1949 by the EDSAC and the Manchester Mark I machines (see Chapter 8). We have also seen the directions in which this paradigm radiated out in the next decade. Most prominent among the refinements was the emergence of the historically and utterly original, Janus-faced, liminal artifacts called computer programs, and of the languages—themselves abstract artifacts—invented to describe and communicate programs to both computers and other human beings. We have seen that a curious aspect of this new science was its concern for, indeed preoccupation with, procedures. The knowledge that had accrued and that characterized the field by the end of the 1950s was fundamentally procedural knowledge—“know-how” knowledge. Algorithms, numerical methods, heuristic search, linguistic tools, and design methods were what this new breed of scientists was concerned with. Methodology was as much a domain of inquiry as the physical artifacts themselves.

  There was strangeness to this new paradigm. There was strangeness to this new identity, for here was invented a new intellectual tradition. The mature natural sciences—geology, biology, physics, chemistry, astronomy—and the mature engineering sciences on which the engineering fields rested—mechanics, thermodynamics, fluid mechanics, physical metallurgy, metal physics, process chemistry, and so forth—had not seen the likes of such a science. Even electrical engineering and mathematics, the two disciplines in which the first generation of professors of computer science were mostly trained, and both historically concerned with continuous phenomena, stood at some distance from computer science.

  II

  But the computer science that was in place when the first departments of computer science emerged in the groves of academe from 1962 to 1965 was far from stable. If the appearance of a dominant paradigm is supposed to mark a mature science, then the computer science of the early 1960s challenged this view. As a paradigm, computer science was far from mature. It was volatile; it was dynamic. The core of the paradigm was not in any danger of being overthrown. No one suggested an alternative to the concept of the stored-program computer as a rival computational model. That would occur in the mid 1970s with what would be called the data flow computing style3 and the functional programming style.4 Yet, no one would call what transpired during the 1960s “normal science” in Thomas Kuhn’s unflattering sense (see Chapter 6, Section II).

  Rather, within the stored-program computing paradigm as a whole there were regions where subparadigms were being created—local earthquakes, as it were, that shifted particular regions, with consequences for neighboring regions. The outcome was to alter significantly “local landscapes” within the paradigm without destabilizing the paradigm itself. In our story, the end of the birth of computer science was marked not by a whimper but by a rather big bang—by an explosion of subparadigms created during the 1960s.

  III

  Of course, to partition this chapter of our story neatly into a split between calendrical decades is too facile. Some of the subparadigm shifts began during the 1950s and reached maturity during the 1960s. A notable instance was the development of a theory of what came to be called variously and synonymously sequential circuits, sequential machines, finite-state machines, finite automata.

  A physical computer is composed, ultimately, of switching circuit components. Some of these take (Boolean) values as inputs and, after a certain delay, produce (Boolean) outputs. But these circuits have no “memory” of their past behavior. Such circuits came to be called combinational circuits and were the kind Claude Shannon analyzed using Boolean algebra in 1938 (see Chapter 5, Sections III and IV). The circuits in a computer that decode instructions and then encode the decoder’s output and issue control signals that cause the instruction to be executed are combinational circuits (see Chapter 12, Section IV).
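
  To make the distinction concrete in modern terms, a combinational circuit is, in effect, a pure, memoryless Boolean function of its inputs. The following sketch (in Python, purely for illustration; the function and its name are hypothetical, not taken from the text) models a toy 2-to-4 decoder of the kind an instruction decoder is built from:

    def decode_2_to_4(a: bool, b: bool) -> list[int]:
        # A combinational circuit: the outputs depend only on the present
        # inputs, never on any remembered past. Exactly one line is 1.
        return [
            int(not a and not b),  # line 0 selected when (a, b) = (0, 0)
            int(not a and b),      # line 1 selected when (a, b) = (0, 1)
            int(a and not b),      # line 2 selected when (a, b) = (1, 0)
            int(a and b),          # line 3 selected when (a, b) = (1, 1)
        ]

  For example, decode_2_to_4(True, False) yields [0, 0, 1, 0].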

  Other circuits have “memory” of their immediate past states. A counter that counts from 0 to 9 (and then returns to 0) is an example. At each time step, a new pulse causes the state of the device to change so that, if the present state represents the digit n, then its next state will represent the digit n + 1 (for 0 ≤ n ≤ 8), and it will represent 0 if n = 9. Such devices, whose responses to inputs depend on their internal states, are sequential circuits or machines. Registers, accumulators, shift registers, main memory, control units—all components of a computer—are instances of sequential circuits. The most basic sequential circuit element is the “flip-flop,” which can be in only one of two states: 1 or 0. Depending on its present state and the input value (1 or 0), a flip-flop either stays in its present state or switches to the other state.
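
  A minimal sketch of this decade counter (again in Python, with hypothetical names) makes the role of internal state explicit:

    class DecadeCounter:
        # A sequential circuit in miniature: unlike the decoder sketched
        # earlier, its response depends on a state retained between pulses.
        def __init__(self) -> None:
            self.state = 0  # the present state: a digit from 0 to 9

        def pulse(self) -> int:
            # Next-state rule: n -> n + 1 for 0 <= n <= 8; 9 -> 0.
            self.state = 0 if self.state == 9 else self.state + 1
            return self.state

  Starting from 0, twelve successive pulses yield 1, 2, …, 9, 0, 1, 2.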

  The structure of combinational and sequential circuits involves components that are abstractions of physical—electronic, electrical—circuit elements. Their behavior is, correspondingly, an abstraction of such things as currents, voltages, and resistances. It does not matter whether these circuits are built from relays; from vacuum tubes; from transistors, which were invented in 1948 by William Shockley (1910–1989), John Bardeen (1908–1991), and Walter Brattain (1902–1987), all of Bell Laboratories, for which they shared the Nobel Prize for physics in 1956; or from integrated semiconductor “chips” of the kind that appeared during the early 1960s. What was important was that, as combinational or sequential circuits, their structure and behavior were described in terms of “logical”/binary/Boolean values and Boolean algebra.

  Abstracting physical circuits up to the level of combinational and sequential circuits created a new kind of design activity that came to be called logic design and that provided a bridge between the machine, seen as a complex network of physical devices (diodes, triodes, resistors, capacitors, ferrite cores) obeying the laws of physics, and the machine, seen as a unified, functionally organized digital stored-program computer.

  Shannon invented this abstraction level in the realm of combinational circuits in 1938. However, the creation of a theory of sequential circuits/machines, more complex because of the presence of memory, was initiated during the mid 1950s. This theory began with the recognition of the vital concept of the internal state of a digital device, a concept arrived at independently by a number of people.5

  In its most general form, a sequential machine has the following characteristics: (i) it consists of a finite set of possible states in which the machine can be; (ii) it can accept (recognize) one of a finite set of possible input values (that is, it has a finite input alphabet); (iii) it can produce one of a finite set of output values (that is, it has a finite output alphabet); (iv) when an input is received, the machine changes from its present state to a next state, depending on the input and the present state; and (v) the machine produces an output depending either (a) only on the present state or (b) on the present state and the input.

  The overall structure of a sequential machine is shown in Figure 15.1. Notice that the (mathematical) function that produces the next state Sk (the next-state function) and the function that produces the output Oj (the output function) are realized by combinational logic circuits whose inputs are the machine input Ij and the present state Si. The “memory” device is that part of the sequential machine that holds the machine’s state at any point in time.
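
  In programming terms, the organization of Figure 15.1 might be rendered as follows (a hypothetical sketch; the function and variable names are invented for illustration). The next-state and output functions correspond to the combinational logic; the single variable holding the present state plays the part of the “memory”:

    def run_sequential_machine(next_state, output, initial_state, inputs):
        # next_state(s, i) and output(s, i) are pure functions: together
        # they stand for the combinational logic of Figure 15.1. The local
        # variable `state` stands for the "memory" holding the present state.
        state = initial_state
        outputs = []
        for i in inputs:
            outputs.append(output(state, i))  # the output Oj
            state = next_state(state, i)      # the next state Sk
        return outputs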

  The alternative ways in which the output function may be defined give rise to two alternative models of sequential machines that are identical with respect to (i) through (iv), but differ according to whether the output function follows (v(a)) or (v(b)). The first model came to be called the Moore machine, after Edward F. Moore (1925–2003), its inventor in 1956, who was then on the research staff at Bell Laboratories and, later, professor of computer sciences at the University of Wisconsin, Madison. The second model came to be called the Mealy machine after George H. Mealy (1927–2010), who conceived it in 1955. Mealy was also, at the time, with Bell Laboratories; later, he became a professor of computer science at Harvard University.
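
  The contrast can be made concrete with a small sketch (hypothetical, in the same illustrative spirit as before): a machine that reports whether it has received an even number of 1s. The two models share the next-state function; they differ only in what the output function is permitted to read:

    # States: 'EVEN' and 'ODD', the parity of the 1s received so far.
    def next_state(state, bit):
        # Shared next-state function: a 1 flips the parity; a 0 preserves it.
        if bit == 1:
            return 'ODD' if state == 'EVEN' else 'EVEN'
        return state

    def moore_output(state):
        # Moore machine: the output depends only on the present state.
        return 1 if state == 'EVEN' else 0

    def mealy_output(state, bit):
        # Mealy machine: the output depends on the present state and the
        # current input, so it reflects each input one step earlier.
        return moore_output(next_state(state, bit))

  Run through the driver sketched after Figure 15.1, the Moore version would simply ignore its input argument: output=lambda s, i: moore_output(s).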

  These two models together formed the seed of a new subparadigm, and around this seed an elaborate and mathematically sophisticated theory of sequential machines emerged during the course of the 1960s6—a theory that contributed, on the one hand, to automata theory7 and, on the other, to an elegant and formal foundation for logic design.8 As an example of the latter, a computer’s control unit (see Chapter 12) is an instance of a sequential machine. At any given time the device is in one of a finite set of states, S = {s1, s2, …, sm}. When in state si (that is, when the present state of the control unit is si), the device issues one or more control signals from a finite alphabet of control signals C = {c1, c2, …, cn}; the signals issued constitute collectively the unit’s output oj. The input iq to the control unit comes from the “rest of the computer.” Thus, the next state of the control unit sk will be determined by both the present state si and the input iq. Such a control unit is an instance of the Moore machine.
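
  As a final sketch (all state and signal names here are invented for illustration, not taken from any real machine), such a control unit can be written down directly as a Moore machine: one table gives the output function (present state to control signals issued), and another gives the next-state function (present state and input to next state):

    # Output function of a Moore-style control unit: state -> signals issued.
    CONTROL_SIGNALS = {
        's_fetch':   {'memory_read', 'load_instruction_register'},
        's_decode':  {'decode_opcode'},
        's_execute': {'alu_operate', 'write_result'},
    }

    # Next-state function: (present state, input from the rest of the
    # computer) -> next state.
    NEXT_STATE = {
        ('s_fetch', 'done'):   's_decode',
        ('s_decode', 'done'):  's_execute',
        ('s_execute', 'done'): 's_fetch',
    }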

 
