
  Turing then suggested replacing either M or W with a computer C. In this situation, I puts questions to either a human H or a computer C, and the task for I is to ascertain from the answers which is the human and which is the machine.

  The original question, “Can machines think?” could now be reformulated as: Are there imaginable digital computers that would do well in the imitation game?87 The former question Turing dismissed as too meaningless to deserve discussion. As for the latter, he predicted that, within 50 years, computers with an adequate memory capacity (implying, it would seem, that this was the crucial factor) would be able to play the imitation game successfully and pass his test criterion.88 Indeed, he further predicted that, by the end of the 20th century, the idea of thinking machines would be deemed commonplace.89

  We are reminded here of Austrian–British philosopher of science Sir Karl Popper (1902–1994) famously insisting that science progresses through a succession of “bold conjectures” followed by attempted refutations of the conjectures.90 Turing defended his prediction precisely as such a bold conjecture, arguing that conjectures are so often the means for pursuing promising paths of research.91

  Turing’s imitation game, in which the machine’s responses to the interrogator’s questions might fool the latter into thinking that the machine is the human, has come to be called the Turing test. Any machine that can fool the interrogator at least 30% of the time would, in Turing’s view, be deemed an intelligent or thinking machine. The essence of the game was, of course, that the interrogator could ask any question whatsoever, spanning the whole range of human experience.
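  The pass criterion is easy to state operationally. The following minimal Python sketch (mine, not Turing’s; the interrogator and the two respondents are hypothetical stand-ins) captures the structure of a trial and the 30% threshold.

```python
import random

def imitation_game_trial(interrogate, human_reply, machine_reply):
    # One round: the interrogator questions respondents "A" and "B",
    # not knowing which is the machine, and must name the machine.
    respondents = {"A": human_reply, "B": machine_reply}
    if random.random() < 0.5:                      # hide which label is the machine
        respondents = {"A": machine_reply, "B": human_reply}
    guess = interrogate(respondents)               # interrogator returns "A" or "B"
    machine_label = "A" if respondents["A"] is machine_reply else "B"
    return guess != machine_label                  # True => the machine fooled the interrogator

def meets_turing_criterion(outcomes, threshold=0.30):
    # Turing's criterion as described above: the machine counts as having
    # passed if it fools the interrogator in at least 30% of the trials.
    return sum(outcomes) / len(outcomes) >= threshold

# An interrogator who can only guess at random is fooled about half the
# time, so even a trivial machine clears the 30% bar against such a judge.
guessing_interrogator = lambda resp: random.choice(sorted(resp))
outcomes = [imitation_game_trial(guessing_interrogator, str.upper, str.lower)
            for _ in range(1000)]
print(meets_turing_criterion(outcomes))            # almost always True
```

  The point of the sketch is only that the criterion is statistical: what matters is the interrogator’s error rate over many rounds, not any single exchange.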

  Turing devoted considerable space to countering anticipated objections to his proposal. He discussed these under a number of broad headings, of which perhaps the most interesting were the following.92

  Theological objection: Thinking is a function of man’s immortal soul. God has given an immortal soul to humans only, and not to animals or machines—hence, no animal or machine can think.

  Mathematical objection: Here, Turing referred to the implications of Kurt Gödel’s Incompleteness Theorem (see Chapter 4, Section I)—that there are certain limits to the power of purely mechanical (or formal) systems and procedures, even computing machines. Thus, machines suffer from certain kinds of “disabilities” that humans (who are not mechanical or formal systems) are not prone to.

  Argument from consciousness: That machines cannot “feel” in the sense that humans feel emotions, in which case one cannot identify machines with humans.

  In each of these cases, Turing advanced a response. For example, he was not “impressed” with the theological argument, for various reasons: among them, its arbitrary separation of humans from animals, and its grounding in specifically Christian doctrine. Moreover, theological arguments had, in the past, been falsified by advances in (scientific) knowledge.

  As for the mathematical objection, Turing pointed out that there might well be similar limitations to the human intellect; it has never been proved that the human intellect does not suffer from disabilities akin to those of formal systems.

  Regarding the argument from consciousness, Turing speculated on a version of the imitation game involving just I and C, in which C responds to I’s questions or comments about a sonnet in such a fashion as to lead an observer to conclude that C’s responses are convincingly humanlike.

  Turing also addressed Lovelace’s caution—that a programmable computing machine (in her case, the Analytical Engine) could not originate anything but, rather, could do only what it was programmed to do (see Chapter 2, Section VIII). In response, Turing quoted Douglas Hartree (whom we encountered in Chapter 8, Section XI) who, in his book Calculating Instruments and Machines (1949), considered the possibility of a computer learning from its past “experiences.”93 Suppose, Turing wrote, that a real, contemporary computer with such a learning capacity existed. Because the Analytical Engine of Lovelace’s time was a general-purpose (“universal”) computing machine, it, too, could be programmed to “mimic” such a learning machine.94
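  The force of the universality argument can be suggested by a small sketch (in Python, purely illustrative; the “learning machine” here is a hypothetical toy): a general-purpose routine that merely carries out whatever machine description it is handed can, in particular, be handed the description of a machine that adjusts itself from experience.

```python
def universal_machine(program, state):
    # A minimal "universal" routine: it originates nothing itself, but
    # carries out, step by step, whatever machine description it is given.
    for instruction in program:
        state = instruction(state)
    return state

# A toy "learning machine" described as data: each step nudges a stored
# parameter toward a target using the observed error -- that is, it improves
# with "experience" even though the universal routine never changes.
learning_machine = [
    (lambda s: {**s, "weight": s["weight"] + 0.1 * (s["target"] - s["weight"])})
    for _ in range(20)
]

final = universal_machine(learning_machine, {"weight": 0.0, "target": 1.0})
print(round(final["weight"], 3))   # converges toward the target: the mimicry succeeds
```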

  Another version of Lovelace’s warning could be that machines “never take us by surprise.” But this, Turing argued, was an empirical question. Machines, he said, frequently took him by surprise, because he—like others—was himself constrained in his capacity to reason, calculate, make decisions, and so on. He pointed out that philosophers and mathematicians alike are prone to the fallacy that, as soon as a fact is presented to someone, all the consequences of that fact are revealed immediately.95 If this were so, then the fact that a computer is programmed to do something would mean that one knew in advance everything the computer would produce by executing the program, and there would, indeed, be no surprises. But that presumption is quite false.

  NOTES

  1. R. Tagore. (1912). Gitanjali (Song Offerings) (poem 35). London: The India Society. This collection of poems, which won Tagore the Nobel Prize the year after their publication, has been republished or anthologized many times. See, for example, A. Chakravarty. (Ed.). (1961). A Tagore reader (pp. 294–307). Boston, MA: Beacon Press. The poem from which the lines are taken here appears on p. 300.

  2. N. Wiener. (1961). Cybernetics: Or control and communication in the animal and the machine (2nd ed., p. 2). Cambridge, MA: MIT Press (original work published 1948).

  3. Ibid.

  4. K. J. W. Craik. (1967). The nature of explanation (p. 52). Cambridge, UK: Cambridge University Press (original work published 1943).

  5. Ibid., p. 60.

  6. Ibid., p. 58.

  7. M. V. Wilkes. (1985). Memoirs of a computer pioneer (p. 23). Cambridge, MA: MIT Press.

  8. Ibid.

  9. J. Bruner. (1990). Acts of meaning. Cambridge, MA: Harvard University Press.

  10. M. A. Boden. (2006). Mind as machine: A history of cognitive science (Vol. 1, pp. 216–217). Oxford: Clarendon Press.

  11. W. S. McCulloch & W. Pitts. (1943). A logical calculus of the ideas immanent in nervous activity. Bulletin of Mathematical Biophysics, 5, 115–133. Reprinted in J. A. Anderson & E. Rosenfeld. (Eds.). (1988). Neurocomputing (pp. 18–27). Cambridge, MA: MIT Press. All citations here refer to the reprinted version.

  12. Ibid., p. 19.

  13. Ibid.

  14. J. von Neumann. (1945). First draft of a report on the EDVAC (p. 4). Unpublished report. Philadelphia, PA: Moore School of Electrical Engineering.

  15. Ibid., p. 5.

  16. Ibid.

  17. Ibid., p. 9.

  18. Ibid.

  19. Ibid., pp. 10–17.

  20. L. A. Jeffress. (Ed.). (1951). Cerebral mechanisms in behavior: The Hixon Symposium. New York: Wiley.

  21. J. von Neumann. (1951). The general and logical theory of automata. In Jeffress, op cit., pp. 1–41. Reprinted in A. H. Taub. (Ed.). (1961–1963). John von Neumann: Collected works (Vol. 5, pp. 288–326). Oxford: Clarendon Press. All citations refer to the reprinted article.

  22. Ibid., p. 289.

  23. Ibid.

  24. Ibid., pp. 289–290.

  25. Ibid., p. 290.

  26. Ibid.

  27. Ibid., p. 297.

  28. Ibid.

  29. Ibid.

  30. Ibid., pp. 297–298.

  31. Ibid.

  32. Ibid., p. 298.

  33. Ibid.

  34. Ibid., p. 300.

  35. Ibid., p. 298.

  36. Ibid., p. 300.

  37. Ibid.

  38. Ibid., p. 309.

  39. Ibid., p. 314.

  40. Ibid., p. 315.

  41. I have borrowed these terms from J. R. Sampson. (1976). Adaptive information processing (p. 58). New York: Springer-Verlag.

  42. von Neumann, op cit., p. 317.

  43. J. von Neumann. (1966). Theory of self-reproducing automata (A. W. Burks, Ed.). Urbana, IL: University of Illinois Press; E. F. Codd. (1968). Cellular automata. New York: Academic Press; A. W. Burks. (Ed.). (1970). Essays on cellular automata. Urbana, IL: University of Illinois Press.

  44. G. G. Langdon, Jr. (1974). Logic design: A review of theory and practice. New York: Academic Press.

  45. C. E. Shannon. (1948). A mathematical theory of communication. Bell System Technical Journal, 27, 379–423, 623–656. Also available online: http://cm.bell-labs.com/cm/ms/what/shannonday/shannon1948.pdf

  46. C. E. Shannon & W. Weaver. (1949). The mathematical theory of communication. Urbana, IL: University of Illinois Press.

  47. See, for example, C. Cherry. (1968). On human communication. Cambridge, MA: MIT Press.

  48. Shannon, op cit., p. 379. So far as is known, this article was the first to introduce the term bit in the published literature.

  49. Cherry, op cit., pp. 41–52.

  50. C. E. Shannon. (1950a). Programming a computer for playing chess. Philosophical Magazine, 41, 256–275. Also available online: http://archive.computerhistory.org/projects/chess/related_materials/text/2-0%. Citations to this article refer to the online edition, which is not paginated. This quote is from p. 1.

  51. Ibid.

  52. Ibid.

  53. In February 1950, before Shannon’s article in Philosophical Magazine appeared, a more popular and briefer article by Shannon was published in an American science periodical: C. E. Shannon. (1950b). A chess-playing machine. Scientific American, 182, 48–51.

  54. Shannon, 1950a, op cit., p. 2.

  55. J. von Neumann & O. Morgenstern. (1944). Theory of games and economic behavior. Princeton, NJ: Princeton University Press.

  56. Shannon, 1950a, op cit., p. 3.

  57. See, for example, A. Barr & E. A. Feigenbaum. (Eds.). (1981). The handbook of artificial intelligence (Vol. I, pp. 26–27). Stanford, CA: HeurisTech Press.

  58. A. D. De Groot. (2008). Thought and choice in chess. Amsterdam: Amsterdam University Press. The original 1946 edition was in Dutch.

  59. Shannon, 1950a, op cit., p. 4.

  60. Ibid., p. 4.

  61. Ibid., p. 5.

  62. Ibid., pp. 6–7.

  63. Barr & Feigenbaum, op cit., pp. 84–85. See also E. Charniak & D. McDermott. (1985). Introduction to artificial intelligence (pp. 281–290). Reading, MA: Addison-Wesley.

  64. Shannon, 1950a, op cit., p. 9.

  65. Ibid., p. 1.

  66. Ibid.

  67. W. Weaver. (1949). Translation. Memorandum. New York: The Rockefeller Foundation. Also available online: http://www.mt-archive.info/weaver-1949.pdf

  68. Anon. (1998). Milestones in machine translation, no.2: Warren Weaver’s memorandum 1949. Language Today, 6, 22–23; Y. Bar-Hillel. (1960). The present status of automatic translation of languages. In F. L. Alt (Ed.), Advances in computers (Vol. I, pp. 91–163). New York: Academic Press.

  69. G. Steiner. (1975). After Babel: Aspects of language and translation. Oxford: Oxford University Press; S. Chaudhuri. (1999). Translation and understanding. New Delhi: Oxford University Press.

  70. Weaver, op cit., p. 6.

  71. Ibid.

  72. Ibid.

  73. Ibid., p. 10.

  74. Ibid.

  75. Ibid., p. 2.

  76. Ibid.

  77. Ibid.

  78. Ibid., p. 11.

  79. J. H. Greenberg. (Ed.). (1963). Universals of language. Cambridge, MA: MIT Press.

  80. Weaver, op cit., p. 12.

  81. Wilkes, op cit., p. 195.

  82. Ibid., pp. 195–197.

  83. E. C. Berkeley. (1949). Giant brains, or machines that think. New York: Wiley.

  84. A. M. Turing. (1945). Proposal for the development of an electronic computer. Unpublished report. Teddington: National Physical Laboratory. Printed in D. C. Ince. (Ed.). (1992). Collected works of A.M. Turing. Amsterdam: North-Holland.

  85. A. M. Turing. (1948). Intelligent machinery. Unpublished report. Teddington: National Physical Laboratory. Printed in B. Meltzer & D. Michie. (Eds.). (1970). Machine intelligence 5 (pp. 3–23). New York: Halsted Press.

  86. A. M. Turing. (1950). Computing machinery and intelligence. Mind, LIX, 433–460. Reprinted in M. Boden. (Ed.). (1990). Philosophy of artificial intelligence (pp. 40–66). Oxford: Oxford University Press. All citations refer to the reprinted article.

  87. Ibid., p. 48.

  88. Ibid., p. 49.

  89. Ibid.

  90. K. R. Popper. (1968). Conjectures and refutations: The growth of scientific knowledge. New York: Harper & Row.

  91. Turing, op cit., p. 49.

  92. Ibid., pp. 49–55.

  93. D. R. Hartree. (1949). Calculating instruments and machines. Urbana, IL: University of Illinois Press.

  94. Turing, op cit., p. 56.

  95. Ibid., p. 57.

  12

  “The Best Way to Design …”

  I

  IN FEBRUARY 1951, the Ferranti Mark I was delivered to the University of Manchester. This was the commercial “edition” of the Manchester Mark I (see Chapter 8, Section XIII), the product of a collaboration between town and gown, the former being the Manchester firm of Ferranti Limited.1 It became (by a few months) the world’s first commercially available digital computer2 (followed in June 1951 by the “Universal Automatic Computer” [UNIVAC], developed by the Eckert-Mauchly Computer Corporation3).

  The Ferranti Mark I was unveiled formally at an inaugural conference held in Manchester, June 9 to 12, 1951. At this conference, Maurice Wilkes delivered a lecture titled “The Best Way to Design an Automatic Calculating Machine.”4 The conference is probably (perhaps unfairly) better known for Wilkes’s lecture than for its primary focus, the Ferranti Mark I. For it was during this lecture that Wilkes announced a new approach to the design of a computer’s control unit, called microprogramming, which would prove massively consequential in the later evolution of computers.

  Wilkes’s lecture also marked something else: the search for order, structure, and simplicity in the design of computational artifacts, and an attendant concern with, indeed a preoccupation with, the design process itself in that realm.

  We have already seen the first manifestations of this concern with the design process in the Goldstine-von Neumann invention of a flow diagram notation for beginning the act of computer programming (see Chapter 9, Section III), and in David Wheeler’s and Stanley Gill’s discussions of a method for program development (Chapter 10, Section IV). Wilkes’s lecture was notable for “migrating” this concern into the realm of the physical computer itself.

  II

  We recall that, in May 1949, the Cambridge EDSAC became fully operational (see Chapter 8, Section XIII). The EDSAC was a serial machine in that reading from or writing into memory was done 1 bit at a time (bit serial)5; and, likewise, the arithmetic unit performed its operations in a bit-by-bit fashion.6 Soon after the EDSAC’s completion, while others in his laboratory were busy refining the programming techniques and exploring its use in scientific applications (see Chapter 9, Sections V–VIII; and Chapter 10), Wilkes became preoccupied with issues of regularity and complexity in computer design and their relation to reliability.7

  Reliability, in Wilkes’s view, depended on the amount of equipment the machine had, its complexity, and the degree of repetition of its units.8 By complexity he meant the extent to which the physical connections between the units within a computer obscured their logical relationships. A machine could be built more easily if its components could be designed and implemented by different people, who could then go about their business without interfering with, or having to interact with, one another. Likewise, if the components were connected in a transparent way, the machine would be easier to repair.9

  Thus, for Wilkes, complexity was related to the machine’s internal organization. A regular or orderly organization lessened the obscurity of the interrelatedness of the components. In the EDSAC, the paragon of such virtue was the main memory unit, which consisted of 32 independent mercury tanks connected to common input and output “buses” (that is, communication paths).

  The culprits, in contrast, were the EDSAC arithmetic unit and the control circuits. Because the arithmetic unit was a serial device (performing its operations in a bit-by-bit manner, rather as humans add two numbers digit by digit), it, too, was unstructured and irregular. However, during the summer and early autumn of 1950, Wilkes visited the United States and, in the course of this visit, met Julian Bigelow (1913–2003)—one of the cofounders, with Norbert Wiener and Arturo Rosenblueth, of cybernetics—in Princeton. Bigelow was then engaged in the development of the IAS computer at the Institute for Advanced Study under von Neumann’s direction (see Chapter 8, Section XV). Through their discussions, Wilkes came to realize that a parallel arithmetic unit would have the same kind of regularity as the memory unit.10

  Indeed, as Wilkes admitted in his Manchester lecture, regularity and simplicity are obtained, in general, when identical, repetitive units are used rather than a collection of different units—even if the number of identical units needed exceeds the number of distinct units.11 And just as the EDSAC memory, comprising 32 identical memory tanks, was orderly, regular, and thus not complex (see Figure 8.2), so also would be a parallel arithmetic unit consisting of an array of identical circuits performing semi-independent operations in parallel on the different bit pairs of the two numbers involved (Figure 12.1).12

  FIGURE 12.1 An Adder Unit.
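  The regularity Wilkes had in mind can be pictured with a short sketch (in Python, purely illustrative and not drawn from the book): a single full-adder cell is defined once and instantiated for every bit pair, so the whole arithmetic unit is an array of identical components. A serial unit, by contrast, would reuse one such cell over successive time steps.

```python
def full_adder_cell(a, b, carry_in):
    # One identical cell: adds a single bit pair plus an incoming carry.
    sum_bit = a ^ b ^ carry_in
    carry_out = (a & b) | (carry_in & (a ^ b))
    return sum_bit, carry_out

def parallel_adder(a_bits, b_bits):
    # A parallel arithmetic unit sketched as an array of identical cells,
    # one per bit position (bits given least significant first).
    carry, sum_bits = 0, []
    for a, b in zip(a_bits, b_bits):       # one cell per bit pair
        s, carry = full_adder_cell(a, b, carry)
        sum_bits.append(s)
    return sum_bits + [carry]              # the final carry becomes the top bit

# 5 + 3 = 8, least significant bit first
assert parallel_adder([1, 0, 1, 0], [1, 1, 0, 0]) == [0, 0, 0, 1, 0]
```

  The point is structural rather than behavioral: the same addition could be wired up in many irregular ways, but building it from one repeated cell is what gives the unit the memory-like orderliness described above.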

  So if one builds a parallel machine, one has, in the parallel arithmetic unit, a good example of a functional component consisting of multiple identical units.13

  There remained the control unit within the computer—that is, as Wilkes put it, everything else in a machine apart from the memory unit and the registers and adding circuits comprising the arithmetic unit.14 The control circuits in the EDSAC were responsible for issuing control signals to all the other parts of the machine so that the latter could execute the EDSAC instructions in the desired sequence. The problem was that there was no systematic procedure for designing the control unit. It was an ad hoc enterprise, the outcome of which was a control circuit that completely obscured the interrelationship of its elements.15 There was no transparency. The resulting logical design of the control circuit had no structure, no orderliness. Computer designers would come to call such a structure random logic.16 And so Wilkes arrived at the main thrust of his lecture: to propose a way in which the control unit could be made more systematic and, thus, simpler in organization and design.17
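  The details of Wilkes’s proposal come later, but the contrast he was drawing can be suggested by a small sketch (in Python; the operations and signal names are hypothetical): instead of ad hoc wiring, each machine instruction is reduced to a short sequence of micro-steps, and each micro-step is nothing more than a set of control signals read out of a regular table.

```python
# A toy control store: every machine operation maps to a sequence of
# micro-steps, and a micro-step is just the set of control signals to assert.
CONTROL_STORE = {
    "ADD": [
        {"memory_read", "load_operand_register"},
        {"alu_add", "load_accumulator"},
    ],
    "STORE": [
        {"load_address_register"},
        {"memory_write"},
    ],
}

def control_unit(opcode):
    # Issue control signals by table lookup rather than by bespoke circuitry.
    for step, signals in enumerate(CONTROL_STORE[opcode]):
        print(f"{opcode} micro-step {step}: assert {sorted(signals)}")

control_unit("ADD")
```

  Whatever its hardware realization, the appeal is the same as with the memory tanks and the adder cells: the structure is a uniform, repetitive store that can be read, checked, and modified systematically.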

 
