It Began with Babbage


by Subrata Dasgupta


  III

  The modern reader of this article by Wilkes will be struck by the informality of its mode of presentation; the writing style is artless, more in the nature of a lecture (as indeed it was). There are no references, for example. In approximately 1300 words, aided by a single diagram, Wilkes laid out a “best way” of designing a computer’s control unit—a technique he called microprogramming.

  What is especially striking is what the lecture reveals of Wilkes’s mentality. Although an applied mathematician by formal training, in the realm of computers he was very much the engineer. It is worth noting that Wilkes’s academic position in Cambridge University at the time of his retirement in 1980 was (besides being director of the University Computer Laboratory) professor of computer technology.

  If Alan Turing and John von Neumann represented the beginning of theoretical computer science, Wilkes was the archetypal empirical computer scientist. We cannot imagine him speculating on such abstractions as Turing machines or self-reproducing automata (see Chapter 4, Section IV; and Chapter 11, Section IV). Yet, the issue that caught Wilkes’s attention was essentially a conceptual problem, even abstract, although it derived from the practical problem of the reliability and maintainability of computers.

  An interesting attribute of conceptual problems is that their recognition by individuals is often prompted by very personal perspectives, more of a philosophical, aesthetic, or even ethical nature than a strictly empirical or technical consideration. Wilkes identified a conceptual rather than a strictly empirical problem.18 He once remarked that, without a particular philosophical point of view, the problem he began to investigate (and the solution to which he presented at the Manchester conference) would not make too much sense.19 Elsewhere, he would comment that his problem was essentially a private problem.20

  IV

  Creativity always originates in the past. Ideas, concepts, solutions have their traces in what came before. What makes a person creative is the way in which he or she draws on the past to bring about the present—and that will, perhaps, shape the future.21 The creative being extracts ideas from the past and fuses them in unexpected and surprising ways to produce something quite different from the ingredients. Hungarian-British writer Arthur Koestler (1905–1983) called this process bisociation.22

  So it was with Wilkes’s invention of microprogramming. His search for an ordered, regular structure for the computer’s control unit led him to other kinds of circuits that manifested the kind of order he sought. In the EDSAC itself, as we have seen, the memory unit manifested such an order, but there was another unit that seemed relevant because it had organizational order. A part of the machine was concerned with decoding the operation code within an instruction and then reencoding it in a different way to provide the signals sent to the different units that would collectively execute that instruction. This reencoding was performed by a circuit called the diode matrix and, Wilkes believed, “something similar” could be used to implement the control circuits.23

  A diode matrix is a regular, two-dimensional array of intersecting horizontal and vertical wires—that is, an array of intersecting orthogonal wires—in which the horizontal wires serve as inputs to the circuit and the vertical wires serve as its outputs. The points of intersection between the horizontal and vertical wires are the sites at which diodes may be placed. The presence of a diode causes a signal (if any) on its horizontal input line to be passed through (“gated”) to its vertical output line. Each horizontal line in the matrix connects to one or more diodes, so a signal on a horizontal line is transmitted to every vertical line to which it is connected by a diode (Figure 12.2).

  FIGURE 12.2 A Diode Matrix
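
  In present-day terms, the diode matrix can be sketched as a boolean connection grid; the sketch below is purely illustrative (the function and variable names are invented, not drawn from Wilkes):

```python
# A minimal sketch of a diode matrix as a boolean connection grid:
# diodes[i][j] is True if a diode sits where horizontal input line i
# crosses vertical output line j.

def diode_matrix_outputs(diodes, inputs):
    """A vertical output line carries a signal if any energized
    horizontal line is connected to it through a diode."""
    n_outputs = len(diodes[0])
    return [
        any(inputs[i] and diodes[i][j] for i in range(len(diodes)))
        for j in range(n_outputs)
    ]

# Three input lines, four output lines; diodes at selected crossings.
diodes = [
    [True,  False, True,  False],   # input line 0 drives outputs 0 and 2
    [False, True,  False, False],   # input line 1 drives output 1
    [False, False, True,  True],    # input line 2 drives outputs 2 and 3
]

# Energizing input line 0 alone gates its signal to outputs 0 and 2.
print(diode_matrix_outputs(diodes, [1, 0, 0]))  # [True, False, True, False]
```

  The inflexibility Wilkes saw is visible here: the pattern of diodes is fixed, so a given input line always produces the same set of output signals.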

  The problem with the diode matrix encoder in the EDSAC was that it was completely inflexible. It was used to issue a fixed set of control signals corresponding to a particular operation code (“opcode”) within an instruction. However, if an instruction allowed for different interpretations of the opcode, then the diode matrix encoder was of no use. More important, the task of executing a sequence of instructions (which was the real job of the main control unit in a computer) demanded the flexibility of being able to select different sets of control signals depending on the actual instructions—a flexibility that could not be met by the diode matrix.

  It is said that travel broadens the mind. For the creative person, whether in the arts or sciences, whether in the humanities or engineering, travel certainly fosters creativity. So it was with Wilkes. As already mentioned, from June to September 1950, Wilkes was in the United States visiting various universities where work was in progress in computer development. One of the places was MIT, where, under the direction of engineer and systems scientist Jay Forrester (1918–), the Whirlwind computer was then under construction. In this machine, the duration of each arithmetic operation (except for multiplication) spanned exactly eight clock pulse intervals. The control signals corresponding to each operation were derived from a diode matrix. The regularity of this control unit made an immediate impression on Wilkes.24

  However, he wanted far greater flexibility than what either the EDSAC diode matrix-based encoder or the Whirlwind diode matrix-based control unit could provide. In the latter case, for instance, each arithmetic instruction required a fixed sequence of control signals to be issued regardless of the context of its execution. But, as noted, in general, an instruction may demand a variable sequence of control signals depending on the operands or on some condition being generated during the course of instruction execution.

  It was after his return to England in September 1950, sometime that winter, when the solution came to him.25 He found an analogy between the functional flexibility he desired of his control unit and the functional flexibility of computer programs. That is, he likened the desired flexibility for control signal sequences required to execute a single instruction to the flexibility of instruction sequences within a program. From this analogy, Wilkes arrived at the concept of the control unit as a programmed computer in miniature—a computer within a larger computer, rather like a homunculus in the brain.

  There still lay the problem of how this programlike flexibility could be attained using a regular, diode matrixlike circuit. The solution he arrived at was to use two diode matrices. The first would store the control signals in the form of microinstructions, much as the computer’s main memory stores a program’s instructions; the execution of a microinstruction would cause a set of control signals to be issued in parallel.26 The second diode matrix, organized in tandem with the first, stored the address of the “next” microinstruction to be selected for execution, thereby controlling the sequencing of the microinstructions. Analogous to a program stored in a computer’s main memory as a sequence of instructions, the sequence of microinstructions (along with the addresses) stored in the two diode matrices constituted a microprogram. Collectively, the diode matrices formed the “microprogram store” (or “control store”).27 The overall control unit came to be called a microprogrammed control unit (Figure 12.3).
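
  The two-matrix scheme can be sketched as follows. This is an illustrative model only: the microinstruction addresses, signal names, and fetch-cycle example are invented for the sketch, not taken from Wilkes’s design:

```python
# A toy model of Wilkes's two-matrix scheme: one matrix holds the control
# signals of each microinstruction; the other holds the address of the
# "next" microinstruction, controlling sequencing.

# Matrix 1: control signals issued in parallel by each microinstruction.
control_matrix = {
    0: {"gate_pc_to_mar"},
    1: {"read_memory", "gate_mdr_to_ir"},
    2: {"increment_pc"},
}

# Matrix 2: "next" address for each microinstruction (None ends the run).
sequencing_matrix = {0: 1, 1: 2, 2: None}

def run_microprogram(start):
    """Step through the control store, collecting each signal set issued."""
    trace, addr = [], start
    while addr is not None:
        trace.append(sorted(control_matrix[addr]))
        addr = sequencing_matrix[addr]
    return trace

print(run_microprogram(0))
# [['gate_pc_to_mar'], ['gate_mdr_to_ir', 'read_memory'], ['increment_pc']]
```

  The flexibility Wilkes sought lives in the second matrix: by storing different “next” addresses (possibly chosen on a condition), the same hardware can issue different sequences of control signals for different instructions.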

  V

  Wilkes delivered his Manchester lecture in July 1951. In November 1952, the first manuscript devoted entirely to microprogramming was dispatched to the editor of the Proceedings of the Cambridge Philosophical Society and was published the following year.28 The article described in great detail the architecture of a microprogrammed control unit as it might be deployed in a parallel machine.

  Wilkes, as remarked, was the quintessential empirical scientist–engineer. Design-as-theory, implementation-as-experiment, testing, and evaluation formed an inextricably entwined quadruple in his mind. Armed with a theory of microprogramming, he would want to see if it worked in practice. As it was, by about the time the EDSAC was operational, he was already ruminating on a machine that would succeed the EDSAC, and he had decided on a parallel machine.29

  FIGURE 12.3 A Microprogrammed Control Unit.

  This successor, the EDSAC 2, begun in 1953 and fully operational in 1958, was the empirical test of his microprogramming idea.30 As such, it was a brilliant example in the history of computer design of the successful experimental corroboration of a new principle of design. Microprogramming was used to implement practically all aspects of control in this machine; every significant unit in the machine, including memory, input, and output, was driven from the microprogrammed control unit. As a test of the principle, it was a spectacular success.31

  VI

  However, the EDSAC 2 was more than an empirical confirmation of microprogramming principles. It was an experimental test bed for a number of ideas Wilkes had broached in his 1951 article in Manchester.32 They included the attractiveness of parallel computing (as the term was then understood): a parallel arithmetic unit, certainly, but also parallel access to main memory, not in the bit-by-bit mode of the ultrasonic memory used in the EDSAC (or the EDSAC 1, as it began to be called, in deference to its successor), but in bit-parallel fashion. The Williams CRT memory, invented by Frederic C. Williams and Tom Kilburn in Manchester, was a possibility (see Chapter 8, Section XIII). Indeed, this memory device had found its way into some of the commercial computers that were being manufactured and marketed rapidly during the early 1950s.33 But, for Wilkes and his Cambridge colleagues, it demanded too much “careful engineering” and “careful nursing” to guarantee acceptable performance.34

  Besides, a new kind of memory had emerged in the United States. Forrester, at MIT, had developed a memory made of magnetizable ferrite “cores” shaped rather like doughnuts. The cores were arranged in a two-dimensional matrix, with each core representing a bit; the corresponding bits of a set of n such matrices organized in parallel would constitute the n bits of a word of memory. If orthogonal wires were threaded through the central hole in a core and a current was sent through the wires, the core would be magnetized in one direction or the other, and these two magnetic states would represent the binary digits 1 and 0. The cores were threaded with wires in such a fashion that the bits of a word could be accessed, read out, and written into in parallel, and very rapidly. As a comparison among commercial computers of the early 1950s: the UNIVAC 1 (in 1951) had a delay line memory with an access time of 300 microseconds; the IBM 701 (in 1953), using the Williams tube, had an access time of 30 microseconds; and the UNIVAC 1103 (in 1953) and the IBM 704 (in 1954) both used ferrite cores with access times of 10 microseconds.35
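
  The bit-parallel organization of a core memory can be sketched as n bit planes accessed simultaneously; the dimensions and names below are invented for illustration:

```python
# A toy model of a ferrite-core memory: WORD_LENGTH bit planes, each a
# 2-D matrix of cores. Word w's bits are the core at w's (row, col)
# position taken from every plane "in parallel".

WORD_LENGTH = 8          # n bit planes -> 8-bit words
ROWS, COLS = 4, 4        # 16 words per plane

# planes[b][r][c] holds bit b of the word at (r, c); all cores start at 0.
planes = [[[0] * COLS for _ in range(ROWS)] for _ in range(WORD_LENGTH)]

def write_word(addr, value):
    r, c = divmod(addr, COLS)
    for b in range(WORD_LENGTH):          # every plane written together
        planes[b][r][c] = (value >> b) & 1

def read_word(addr):
    r, c = divmod(addr, COLS)
    bits = [planes[b][r][c] for b in range(WORD_LENGTH)]
    return sum(bit << b for b, bit in enumerate(bits))

write_word(5, 0b10110010)
print(read_word(5))  # 178
```

  The contrast with the EDSAC’s mercury delay lines is that here no bit has to “circulate” past a read point; all n bits of a word are available at once.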

  Wilkes, on his visit to America in summer and fall 1950, had witnessed the core memory being implemented on the Whirlwind at MIT—and, as in the case of seeing the control matrix in the same machine, this made a strong impression on him.36 By summer 1953, they were receiving, in Cambridge, England, reports from Cambridge, Massachusetts, of the success of the core memory.37 In the EDSAC 2, not only was the read/write main memory implemented using ferrite core technology, but the read-only microprogram store was also a matrix of 1024 cores.38

  VII

  This story of the “best way to design” a computer will be incomplete unless we contemplate the consequences of Wilkes’s invention. As we have noted, creativity involves as much the future as the past; one may create something never before known, but one may also create something that influences what comes afterward.

  The trouble with the latter—that is, the consequences of an act of creation—is that one never knows when that might happen. An idea or concept, a discovery or an invention may lie dormant, unattended, for years—until someone perceives its significance in some particular context. Until that moment, that act of creation is inconsequential.

  Microprogramming, as a computer design principle, manifested something of this nature. During the 1950s, from the time when Wilkes first outlined the idea, not too many people were enticed by it as a way of designing a computer’s control unit. Apart from papers emanating from the EDSAC 2 group, only seven or eight publications on the topic are on record for the entire decade.39

  The 1960s showed a marked increase in interest in microprogramming: articles originating in Britain and the United States, in Italy, France, Russia, Japan, Australia, and Germany were published. During that decade, some 44 publications were reported by Wilkes in his literature survey of 1969.40

  The tipping point that transformed microprogramming from an experimental, exploratory technique belonging to the computer design laboratory into something like a computer design subparadigm was a decision made by IBM during the early 1960s—by which time they were as surely the undisputed leaders in the electronic computer industry as they had formerly been the leaders of the electromechanical punched-card data processing industry. The then-head of IBM’s Hursley Laboratory in the United Kingdom drew corporate IBM’s attention to the EDSAC 2. The result was the company’s decision to use microprogramming as a key design philosophy in the IBM System/360 series of computers marketed during the early to mid 1960s.41 The enormous technical and commercial success of the IBM 360 led to the large-scale adoption of microprogramming in commercial computers thereafter. When IBM talked, others listened! Later in this story we will visit this event in more detail because of the significance of the IBM 360 in another respect. As we will also see, microprogramming played a vital role.

  VIII

  The EDSAC 2 held its microprogram in read-only memory. In his Manchester article of 1951, tucked away at the very end, Wilkes envisioned the possibility of a read/write microprogram store that could be written into (thus erasing previous contents) as well as read from. This led to the intriguing possibility that, if the contents of the “erasable” microprogram memory could be changed, then one could design a computer without a fixed order code (in present-centered terms, instruction set). Instead, the programmer could design her own order code to meet her particular requirements, and change the contents of the microprogram store accordingly.42

  What Wilkes called “erasable store” came to be known as writable control store (or writable control memory), and with the later development of semiconductor memories, the writable control store became a practical reality during the early 1970s.43 Wilkes’s speculation of “a machine with no fixed order” became a practical possibility. A machine with microprogramming capability for a writable control store and no “fixed order” came to be called a universal host machine.44 During the 1970s, a number of interesting universal host machines were designed and built, both in universities and by companies, to explore their applications.45 Here was another consequence of Wilkes’s invention that he had anticipated, albeit speculatively. In these machines, the computer user (not the designer) would microprogram the host to suit his or her requirements—thus, customize the host into a desired target machine. Programming could then dissolve into microprogramming.
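
  In present-centered terms, the universal host idea amounts to a machine whose microprogram store is an ordinary writable structure. The sketch below is purely illustrative: the class, the toy order codes, and their microroutines are all invented for the example:

```python
# A sketch of a "universal host": the microprogram store is a writable
# dict mapping opcodes to microroutines, so rewriting it gives the same
# hardware a different order code (instruction set).

class HostMachine:
    def __init__(self):
        self.acc = 0                     # a single accumulator register
        self.control_store = {}          # writable: opcode -> microroutine

    def load_microprogram(self, microprogram):
        # The "erasable store" is simply overwritten with a new order code.
        self.control_store = dict(microprogram)

    def execute(self, opcode, operand):
        return self.control_store[opcode](self, operand)

# One user's order code: an accumulator machine with ADD.
def micro_add(machine, x):
    machine.acc += x
    return machine.acc

host = HostMachine()
host.load_microprogram({"ADD": micro_add})
print(host.execute("ADD", 7))            # 7

# Rewriting the control store customizes the host into a new target
# machine with a different order code.
host.load_microprogram({"DOUBLE_ADD": lambda m, x: micro_add(m, 2 * x)})
print(host.execute("DOUBLE_ADD", 5))     # 17
```

  The user, not the designer, supplies the microprogram; in this sense programming dissolves into microprogramming, as the text notes.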

