The Chip: How Two Americans Invented the Microchip and Launched a Revolution


by T. R. Reid


  Committed to teaching, Boole opened his own school in Lincolnshire and now found some time for mathematical work. The editor of a new journal was willing to publish Boole’s papers, despite the author’s lack of formal training. One of them caught the eye of the mathematician Augustus De Morgan, who helped Boole obtain a chair at Queen’s College in Ireland. At last Boole had a secure income and the time to work out his grand mathematical synthesis of human thought. In 1854, after a decade of intense work, he published his masterwork, The Laws of Thought, on Which Are Founded the Mathematical Theories of Logic and Probabilities. “The design of the following treatise,” the book begins, “is to investigate the fundamental laws of those operations of the mind by which reasoning is performed; to give expression to them in the symbolic language of a Calculus; and upon this foundation to establish the science of Logic . . .” Completely new, and somewhat obscure even to the expert, it had little initial impact. Today the book is recognized as a milestone that did indeed establish the new science of symbolic logic.

  In Ireland, Boole married Mary Everest, niece of Sir George Everest, the geographer who surveyed the high mountains of Nepal and left his name to the highest of all. Throughout his life, he demonstrated the phenomenal energy that characterized the Victorian age. In addition to his work on logic, Boole published two widely used textbooks and countless monographs. He found time for poetry, including a difficult lyric entitled “Sonnet to the Number Three.” He was a trustee of the Female Penitents’ Home and an officer of the Early Closing Association, which strove to reduce the workday to ten hours. A photograph shows him to be an intense, thoughtful professor with a square face, dark hair, and penetrating eyes. For all his achievements in higher math, he never shirked his duties as a teacher. In November 1864 he walked two miles through a cold rain to meet a class and proceeded with the lecture in his sodden clothes. From this he contracted pneumonia and died. He left behind one last manuscript, so singular and so arcane that the experts at the Royal Society could not decipher it. “No mere mathematician can understand it,” his widow observed, “and no theologian cares to try.”

  Since Boolean logic—also known as Boolean algebra, because Boole expressed logical concepts in algebraic terms—is now recognized as something important, the academicians have draped it in a formidable veil of complicated jargon, symbols, and formulas. At the core, though, the Laws of Thought that Boole described in mathematical terms are the stuff of everyday life. Boole examined everyday mental processes in terms of the simple connective tissue of language: and, or, not.

  You wake up from a sound sleep. Can you roll over and sleep some more, or do you have to get up and go to work? To decide, you carry out a fundamental Boolean operation. If your clock says yes, it’s after 8:00, and your calendar says yes, it’s a weekday, then yes, you get up for work. If either of these conditions is a no, however, you can stay in bed. This decision is known today as a Boolean AND operation. The result is yes only if condition 1 AND condition 2 are both yes.

  The kitchen sink represents another basic Boolean pattern. If no faucet is on, no water comes out of the spigot. But if either the hot faucet OR the cold faucet is on, OR if both are on, water will flow. This decision, in which the result is yes if condition 1 OR condition 2 OR both are yes, is known as a Boolean OR operation.
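  For readers who want to see these two everyday decisions spelled out symbolically, here is a brief sketch in Python (supplied for illustration only; the function names are invented here, not Boole's notation):

    # The wake-up decision: a Boolean AND. The result is yes only if
    # both conditions are yes.
    def get_up_for_work(after_eight, is_weekday):
        return after_eight and is_weekday

    # The kitchen sink: a Boolean OR. Water flows if either faucet,
    # or both, is on.
    def water_flows(hot_on, cold_on):
        return hot_on or cold_on

    print(get_up_for_work(True, False))   # False: after 8:00, but not a weekday
    print(water_flows(False, True))       # True: the cold faucet alone is enough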

  In essence, Boole demonstrated that all human reasoning could be reduced to a series of yes-or-no decisions. Each decision could therefore be represented in algebraic terms. Sometimes the formulas were as simple as x + y − z, and sometimes they were more complex; in The Laws of Thought, Boole formulates an argument that God exists, as follows: x(1 − y)(1 − z) + y(1 − x)(1 − z) + z(1 − x)(1 − y) = 1. The most important of Boole’s algebraic formulas—the one he describes as the central pillar of his entire yes-or-no structure—is this:

  x² = x

  Anyone young enough to remember high school algebra will see that this equation holds true for two, and only two, numbers: 0 and 1 (if x² = x, then x² − x = 0, so x(x − 1) = 0, which is satisfied only when x is 0 or 1). In other words, Boole’s organization of all human decisions into yes-or-no terms turned out to be a binary system. A century ahead of time, the self-taught Victorian scholar had developed a decision-making methodology that would prove just right for digital machines.

  Until digital machines came along, however, Boole’s algebra was largely ignored, except by a few of his fellow logicians. One of Boole’s most avid followers was the Oxford don in mathematics Charles Lutwidge Dodgson, who wrote a series of academic works on symbolic logic and who, under his pen name, Lewis Carroll, sprinkled his “Alice” books with allusions to Boole’s ideas. Many of the people Alice meets beyond the looking glass see their world in basic Boolean terms—yes-or-no, true-or-false, does-or-doesn’t:

  “You are sad,” the Knight said in an anxious tone. “Let me sing you a song to comfort you.”

  “Is it very long?” Alice asked, for she had heard a good deal of poetry that day.

  “It’s long,” said the Knight, “but it’s very, very beautiful. Everybody that hears me sing it—either it brings the tears into their eyes, or else—”

  “Or else what?” . . .

  “Or else it doesn’t, you know.”

  Bertrand Russell was another admirer. In Principia Mathematica, the Promethean effort to set down once and for all the fundamental logical basis of all mathematics, Russell and Alfred North Whitehead carried Boole’s original concept to a climactic conclusion. Unstintingly meticulous (it takes the authors one and a half volumes to arrive at their proof that 1 + 1 = 2) and inaccessible to all but a small coterie of experts, the Russell-Whitehead treatise seemed to offer further proof, if any were needed, that Boole’s curious combination of logic and algebra was an intellectual abstraction devoid of practical use. This was hardly what Boole had intended: “The abstract doctrines of science,” he writes in The Laws of Thought, “should minister to more than intellectual gratification.” Fifty years after it appeared, though, Boole’s great work was considered strictly an academic exercise.

  The narrative now shifts ahead to 1937 and across the Atlantic to Cambridge, Massachusetts, where groups of engineers and mathematicians were struggling to design the first primitive versions of a digital computer. An MIT engineer, Vannevar Bush, had designed an electrical calculating machine that used decimal numbers; it was built of rods, shafts, and gears arranged so that a gear would turn one tenth of a full rotation (36 degrees) to represent the number 1, two tenths (72 degrees) for 2, and so on. This device, although revolutionary for its day, tended to be imprecise; if the gear happened to turn 48 degrees, or 55 degrees, what number was represented? And so attention shifted down the street to Harvard, where another engineer, Howard Aiken, was thinking—as Von Neumann and Turing had been—about a binary machine that would use simple electrical switches. On the binary computer, precision was not a problem—the switches were either on or off, nothing else—but it was a forbiddingly complicated task to design the proper combinations of switches to carry out binary arithmetic.

  An MIT graduate student, Claude E. Shannon, who had been working with Bush, was looking for a thesis topic and decided to take on the important but formidable problem of designing digital switching circuits. In the course of his work, Shannon hit upon a crucial idea.

  If society allocated fame and fortune on the basis of intellectual merit, Claude Shannon would have been as rich and as famous as any rock idol or football star. Born in the farm community of Gaylord, Michigan, in 1916, he graduated from the University of Michigan in 1936 and went on to take a Ph.D. in electrical engineering at MIT. His master’s thesis, in 1937, demonstrated how computerized mathematical circuits should be designed; this youthful piece of work not only served as the cornerstone of computer architecture from then on, but also launched a new academic discipline known as switching theory. Ten years later, as a researcher at Bell Labs, Shannon got to thinking about efficient means of electronic communications (for example, how to send the largest number of telephone conversations through a single wire). He published another seminal paper, “A Mathematical Theory of Communication,” that launched an even more important new academic discipline known as information theory; today information theory is fundamental not only in electronics and computer science but also in linguistics, sociology, and numerous other fields. You could argue that Claude Shannon was the Alexander Graham Bell of the cellular phone, because mobile communications would be impossible without the basic formulas of information theory that Shannon devised.

  In 1949, Shannon published a monograph—once again, the first one ever written on the topic—called “Programming a Computer for Playing Chess.” The ideas set forth there are still central to the design of all computer games, including Deep Blue, the program that defeated world chess champion Garry Kasparov. Like many mathematicians, Shannon was an avid fan of games and puzzles; among other things, he liked to work out “pangrams”—sentences that contain every letter of the alphabet. His pièce de résistance in this field is a sentence that uses each letter only once: “Squdgy fez, blank jimp crwth vox!”

  That fascination with language was useful for a man working at the very front edge of technology. New concepts required new terms, and Shannon made several contributions to the language of high-tech. Looking for a word to mean a single unit of information—in digital terms, a 1 or a 0—he first coined the phrase “binary digit” but quickly shortened that to “bit.” To this day, the capacity of computers and other digital devices is still measured in bits; if a personal computer is rated at 64 megabits, that means it comes with enough random-access memory to store 64 million bits, or distinct pieces of information. The term is now used by digital designers everywhere, many of whom have probably never heard of Claude Shannon. Shannon wouldn’t mind that, though. He was not one to blow his own horn. During the years he taught information theory at MIT, he never mentioned that he was the creator of the academic discipline his students were studying, and seemed somewhat embarrassed when diligent students figured out that their prof was the progenitor. Early in 2001, Bell Labs set up an exhibit in Shannon’s honor, noting how many of his twentieth-century ideas have become part and parcel of daily life in the new century. Shannon stayed away from the opening ceremony. A few weeks later he died, receiving brief obituaries in a few papers. Hardly anybody seemed to remember how influential he had been in shaping the modern world.

  In 1937, when Shannon tackled the problem of binary circuit design, digital computers used magnetic switches called relays. A relay looks like a mousetrap with an electromagnet on one end. When electricity flows to the magnet, it attracts the metal bar of the mousetrap, which flips over and thus turns the switch on. As soon as the current is cut off, the magnetic attraction stops and the metal bar springs back, turning the relay off. The problem was how to design arrays of these relays so that they would switch on and off in the proper order to add binary numbers. This was seen as an inordinately difficult task, requiring the designer to contemplate so many different levels of possible variations that it would tend to drive anybody crazy. Today, the tedious, repetitive work of designing computer architecture is a chore left to computers. But in 1937 there weren’t any. It would be up to people to figure out the design of binary logic.

  While stewing over this question, Shannon happened upon a text on Boolean logic and something clicked. Boole’s equations for AND operations, OR operations, and other logical functions reduced decision making to a set of dualities—yes or no, 0 or 1, true or false. Shannon recognized that these pairs could be represented just as well by the switching duality: on or off. In short, the dreadfully formidable task of designing binary logic circuits had already been done—by George Boole. Boole’s carefully worked-out equations could serve as road maps for wiring together electric switches to carry out logical operations. Accordingly, Shannon wrote, “It is possible to perform complex mathematical operations by means of relay circuits. Numbers may be represented by the positions of relays and stepping switches, and interconnections between sets of relays can be made to represent various mathematical operations.” At the end of his paper, Shannon showed how a series of relays, arranged to carry out Boolean AND and OR operations, could be wired to add two binary numbers.
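  As a rough illustration of the kind of circuit Shannon had in mind (a sketch written here in Python, not Shannon's own notation or relay wiring), adding two single binary digits takes nothing more than Boole's AND, OR, and NOT:

    # One-digit binary adder built only from Boolean operations.
    def AND(a, b): return a & b      # 1 only if both inputs are 1
    def OR(a, b):  return a | b      # 1 if either input is 1
    def NOT(a):    return 1 - a      # 1 becomes 0, 0 becomes 1

    def add_one_digit(a, b):
        # The sum digit is 1 when exactly one input is 1.
        total = OR(AND(a, NOT(b)), AND(NOT(a), b))
        # The carry digit is 1 only when both inputs are 1.
        carry = AND(a, b)
        return total, carry

    print(add_one_digit(1, 1))   # (0, 1): in binary, 1 + 1 = 10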

  In addition to mathematical operations, Shannon demonstrated, Boolean circuits could be wired to make comparisons—is number x equal to number y?—and to follow simple directions of the “If A, then B” category. “In fact, any operation that can be completely described in a finite number of steps using the words ‘if,’ ‘or,’ ‘and,’ etc.,” Shannon wrote, “can be done automatically with relays.” With this ability to make decisions—to proceed in different ways depending on the results of its calculations—the machine could be programmed to carry out complicated computations without constant direction from the human operator.
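  The same three operations cover those decisions as well. A short sketch, again in Python and again only illustrative, shows a comparison and a simple if-then built from them:

    # Comparing two binary digits with Boolean operations, then branching
    # on the result. The gate functions are repeated so the example
    # stands on its own.
    def AND(a, b): return a & b
    def OR(a, b):  return a | b
    def NOT(a):    return 1 - a

    def equal(x, y):
        # 1 if the digits match: (x AND y) OR (NOT x AND NOT y)
        return OR(AND(x, y), AND(NOT(x), NOT(y)))

    # "If A, then B": proceed differently depending on the comparison.
    if equal(1, 1):
        print("digits match")      # this branch runs
    else:
        print("digits differ")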

  The techniques set forth in Shannon’s thesis have been universally adopted for digital machines. In modern computers, transistors embedded in integrated circuits have replaced magnetic relays, but the principles of binary switching remain the same. A transistor built into an integrated circuit is essentially the same silicon sandwich developed by Shockley’s team: it consists of a thin layer of P-type silicon sandwiched between two slightly thicker layers of N-type silicon. The device is hooked up so that current—a surge of electrons—will run from one N-type layer, through the middle, and out the other N-type layer. This current flow is switched on and off by signal pulses flowing to the middle layer. If a pulse is sent to the center layer, current will flow from end to end; the transistor is on. But if the center receives no pulse, it blocks current flow from N to N; then the switch is off.

  A computer’s circuitry is a chain of transistors, one after another. The circuit is analogous to a long irrigation pipe with a series of faucets built into it to control the flow of water. If faucet 1 is open, water can flow along to faucet 2; if that one is open, water can flow on to faucet 3. By opening the right combination of faucets at the right time, a farmer can direct water to any point in his field. In a computer’s electronic pipeline, each transistor acts as a faucet; if transistor 1 is on, current can flow on through to transistor 2, and so on. By turning on the right combination of transistors at the right time, computer designers can direct the flow of current to any point in the circuit.

  To set up the right combinations, computer builders rely mainly on three basic Boolean circuits. The simplest AND circuit, in accordance with Boole’s AND operation, consists of three transistors lined up so that switch 3 is on only if both switch 1 AND switch 2 are on. The arrangement looks like this:

  If either switch 1 or switch 2 is off, the flow of current will be blocked and switch 3 must be off. Only if both 1 AND 2 are on will current flow through to turn on 3.

  A simple OR circuit can be implemented with three switches arranged this way:

  If both switch 1 and switch 2 are off, current flow will be blocked and switch 3 must be off. But in this circuit, current can flow through either switch 1 or switch 2 to get to switch 3. Thus if either 1 OR 2 is on, OR if both are on, current will flow through the circuit to turn on 3.

  The third basic circuit, called a NOT circuit, can be wired from two switches arranged so that the second is NOT in the same state as the first. If switch 1 is on, 2 is off; if 1 turns off, 2 switches on.
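  One way to picture the three arrangements is to model each one in a few lines of code; the sketch below (in Python, with names chosen here for clarity, not taken from any circuit diagram) simply asks whether current can reach the output switch:

    # Simplified switch-level picture of the three basic gates.
    # 1 means a switch is on and current can pass; 0 means it is off.

    def and_gate(switch1, switch2):
        # Switches in series: switch 3 turns on only if 1 AND 2 are on.
        return 1 if (switch1 and switch2) else 0

    def or_gate(switch1, switch2):
        # Switches in parallel: switch 3 turns on if either path is open.
        return 1 if (switch1 or switch2) else 0

    def not_gate(switch1):
        # Inverter: switch 2 is always in the opposite state from switch 1.
        return 0 if switch1 else 1

    print(and_gate(1, 0), or_gate(1, 0), not_gate(1))   # prints: 0 1 0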

  Because these switch arrangements serve either to block current or to let it pass, they are commonly called gates. Logical and mathematical operations are carried out by sending current through a maze of different gates. A basic addition circuit, in its simplest form, can be implemented with a dozen AND gates, a half dozen OR gates, and three NOT gates. Pulses representing the binary numbers to be added are sent into the circuit. Each of these pulses turns selected transistors on or off in just the right combination so that the pulses coming out at the end of the circuit will represent, in binary, the sum of the two numbers that went in.

  Mathematicians like to use the term “elegant” to describe a simple solution to a complex problem. The Boolean logic that adds two binary digits is the height of elegance. In the early computers, though, the electronics of this elegant arrangement were extremely cumbersome. The simple addition circuit just described, with its twenty-one separate gates, requires about fifty transistors, a dozen or so other components, and a labyrinthine spaghetti of connecting wires. And this circuit can add only two binary digits. If the numbers being added require, say, eight binary digits each—like the decimal number 206, which is 11001110 in binary—the problem would require eight separate passes through the addition circuit, plus a few dozen more transistors to store the result of each pass.
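  To get a feel for how quickly the switching piles up, here is a sketch (again in Python, and again only an illustration of the principle rather than any particular early computer's design) of those eight passes through a one-digit addition circuit:

    # Adding two 8-digit binary numbers, one pass per digit, using only
    # AND, OR, and NOT. Digits are listed least significant first.
    def AND(a, b): return a & b
    def OR(a, b):  return a | b
    def NOT(a):    return 1 - a
    def XOR(a, b): return OR(AND(a, NOT(b)), AND(NOT(a), b))

    def one_pass(a, b, carry_in):
        # One trip through the addition circuit: two digits plus a carry.
        digit = XOR(XOR(a, b), carry_in)
        carry_out = OR(AND(a, b), AND(carry_in, XOR(a, b)))
        return digit, carry_out

    def add_8_digits(x, y):
        carry, result = 0, []
        for a, b in zip(x, y):          # eight separate passes
            digit, carry = one_pass(a, b, carry)
            result.append(digit)
        return result

    x = [0, 1, 1, 1, 0, 0, 1, 1]   # 206, which is 11001110 in binary
    y = [1, 0, 0, 0, 0, 0, 0, 0]   # 1
    print(add_8_digits(x, y))      # [1, 1, 1, 1, 0, 0, 1, 1], i.e. 11001111 = 207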

  This is why computers were so vulnerable to the tyranny of numbers. The use of binary numbers and binary logic provided a precise computational system that was perfectly suited to electronic devices. But this perfect fit came at a price. The price was complexity. Digital devices are nothing more than switches turning on and turning off, but many, many switches must turn on and off many, many times to perform even simple operations. When electronic circuits had to be hand-wired together from individual components, these large numbers took an enormous toll in size, in speed, in cost, in power consumption, in difficulty of design.

 
