The Chip: How Two Americans Invented the Microchip and Launched a Revolution
The downside was that the slide rule gave only approximate answers; if you used it, for example, to calculate the square root of 470, it could tell you that the answer is somewhere around 21.6 or 21.7, but couldn’t get much closer. And it offered no help at all in solving some of the trickier aspects of calculation—determining the order of magnitude and putting the decimal in the right place. The most primitive $3.99 calculator, in contrast, can solve problems precisely, down to the last decimal with a speed and accuracy that no slide rule could match. The square root of 470? Push two buttons and the answer leaps to the display screen: 21.67948338868. Today, in the hazy afterglow of memory, engineers tend to look on the slide rule’s drawbacks and see virtues. “The absence of a decimal point,” Professor Petroski wrote, “meant that the engineer always had to make a quick mental calculation independent of the calculating instrument to establish whether the job required 2.35, 23.5, or 235 yards of concrete. In this way, engineers learned early an intuitive appreciation of magnitudes. Now the decimal point floats across the display of an electronic device among extended digits that are too often copied down without thought.” He has a point, of course; it’s obviously a good idea for engineers or contractors to develop an “intuitive appreciation of magnitudes.” On the other hand, this feature of the slide rule probably didn’t seem so charming to a construction foreman who suddenly found himself with 200 excess tons of concrete hardening on his job site.
The slide rule was a simple instrument (at least, if you knew how logarithms work). To hear engineers tell it, that simplicity was part of its appeal. “It has a sort of honesty about it,” Jack Kilby told me one day, reaching into the drawer of his desk and pulling out his old K & E. “With the slide rule, there’re no hidden parts. There’s no black box. There’s nothing going on that isn’t right there on the table.” To put it another way, the slide rule was not threatening. Nobody ever called the slide rule a “mechanical brain.” Nobody ever declared that the slide rule was endowed with something called artificial intelligence. There were no movies about runaway slide rules called HAL seizing control of the spaceship or plotting to dominate mankind. The slide rule, hanging at the ready from the belts of Fermi, Wigner, and Wernher von Braun, helped men create the first nuclear chain reaction and send rockets to the stratosphere. But the rule was always recognized as nothing more than a tool. It had no more “intelligence” than a yardstick or a screwdriver or any other familiar tool that extends human power. Like the yardstick, the screwdriver, etc., the slide rule was just an ignorant mechanism.
Someday—fairly soon, probably, given the accelerated pace of technological development—the pocket calculator, the handheld computer, the industrial robot, the cell phone, and other digital marvels of our day will themselves be museum pieces, on exhibit in a gallery called Primitive Microelectronic Tools or some such. As we file past with our grandchildren, we may well break into nostalgic smiles of fond regard for these devices that used to be considered so revolutionary. The kids, no doubt, will be amazed to learn that back at the turn of the twenty-first century many people still resisted those simple tools, resented them, feared them—feared that digital computers, robots, etc., and their so-called artificial intelligence might replace poor bungling man as the reigning intelligence on earth. To our grandchildren’s generation, how foolish this will seem! Why, they will wonder, would anyone have feared such an ignorant mechanism?
To most of us today, even the simplest digital device seems incomprehensible. It is, as Jack Kilby has suggested, a black box. What goes on inside the black box is, for most people, black magic. You push the keys. The answer to some impossibly difficult math problem shows up on the screen, instantly. The rest is mystery. It’s no wonder that calculators, computers, etc., are thought of as intelligent machines; what other explanation could there be? In fact, the explanation of how the computer gets the answer is not at all magical. The “magic” inside the black box actually involves a series of mathematical and logical techniques carried out by artful arrangements of electronic switches arrayed in logic gates. Electronic impulses are pulled this way and that through the maze of electronic switches by blind physical force.
The electrons racing through a computer chip have as much intelligence as water running down a hill. Gravity pulls the water. Electricity—the attraction and repulsion of electronic charges—pulls the electrons. If people build sluice gates and irrigation canals in the right combinations, they can make water flow where it is needed to water the fields. If people build logic gates and connecting leads in the right patterns, they can make electronic impulses flow where they’re needed to solve a problem. In each case, the mechanism does the work, but in each case, it’s an ignorant mechanism. The human designer provides all the intelligence.
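To make that point a little more concrete, here is a minimal sketch in Python (my own illustration, not anything taken from a real chip) of what a logic gate amounts to: a fixed rule that maps incoming on/off signals to an outgoing on/off signal. The layout does the deciding; the signals just flow.

```python
# A "logic gate" is a fixed rule mapping on/off inputs to an on/off output.
# There is no intelligence here, only plumbing; the layout decides everything.

def AND(a, b):    # output is on only if both inputs are on
    return a & b

def OR(a, b):     # output is on if either input is on
    return a | b

def NOT(a):       # output is the opposite of the input
    return 1 - a

# Send a pair of "pulses" through the gates, the way water is sent through
# sluice gates: the signals go wherever the connections force them to go.
a, b = 1, 0
print(AND(a, b), OR(a, b), NOT(a))    # -> 0 1 0
```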
If you punch into your calculator the task of adding 3 + 2, the mechanism will produce the answer 5. The calculator gets the answer not because of “artificial intelligence” but rather because a genuine intelligence—the human mind—has designed the mechanism so that it gets the right answer.
Using switching logic (from the minds of George Boole and Claude Shannon) implemented by transistors (from the minds of Shockley, Bardeen, and Brattain) contained in the monolithic circuit (from the minds of Kilby and Noyce), the humans who build digital machines have designed an addition circuit in such a way that the pattern of pulses representing binary 5 is the only possible combination that can come out when binary 3 and binary 2 are put in. To get that sum, however, out of a mechanism consisting entirely of switches turning on and off, off and on, humans have had to go to some extreme lengths. For a machine as dim-witted as a computer to solve 3 + 2, the problem must be broken down into an absurdly detailed sequence of instructions that lead the machinery through its paces, step by elementary step. If there is magic in the pocket calculator, it is not in the machinery; it is in the humans who had the wit and the patience to program the machine to do its job.
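Here, again as an illustration rather than the wiring of any actual chip, is one standard way such an addition circuit can be arranged: a ripple-carry adder that strings gates together column by column, so that the only pattern that can emerge when binary 3 and binary 2 go in is binary 5.

```python
# One standard way to wire an addition circuit: a ripple-carry adder.
# This sketches the technique; it is not the layout of any real chip.

AND = lambda a, b: a & b
OR  = lambda a, b: a | b
XOR = lambda a, b: a ^ b

def full_adder(a, b, carry_in):
    # One column of binary addition, built entirely from gates.
    s = XOR(XOR(a, b), carry_in)                          # the sum bit
    carry_out = OR(AND(a, b), AND(carry_in, XOR(a, b)))   # carry into the next column
    return s, carry_out

def add(x, y, bits=4):
    # Ripple the carry through the columns, least significant bit first.
    carry, total = 0, 0
    for i in range(bits):
        s, carry = full_adder((x >> i) & 1, (y >> i) & 1, carry)
        total |= s << i
    return total

print(add(3, 2))   # -> 5, because 0011 + 0010 = 0101
```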
To bring the point home, we can take a guided tour through the interior of the black box and watch a typical digital mechanism from the inside as it does its stuff. We’ll look at a simple pocket calculator—so simple it exists only in the pages of this book—called the Digital Ignorant Mechanism, Model I, or DIM-I for short. The design of DIM-I to be set forth here is based on the familiar four-function calculator available anywhere for $5 or so. To a considerable extent, though, the basic architecture of a $5 calculator is the same as that in a $50 video game, a $500 handheld computer, a $5,000 corporate server, or the $5 million supercomputers that guide NASA’s rockets through the cosmos. The bigger, more expensive machines can handle more information, store more results, and deal with a larger variety of tasks, but the modus operandi is the same. If you’ve seen one digital ignorant machine at work, you’ve pretty much seen them all.
On the outside, DIM-I is in fact a black plastic box. The box has eighteen keys: one each for the digits 0 through 9, and eight others for functions like +, =, etc. It has a display screen that can show numbers up to eight digits long. It’s rather light—a few ounces at most—and if you were to pry open the black box, you’d see why. There’s almost nothing inside. There’s an empty space containing a tiny power source—a battery sometimes, or a solar energy converter—a few wires, and a printed circuit board on which sits another, smaller black box. This one is a piece of plastic that looks like a man-made millipede: an inch-long rectangle with a symmetrical array of wire legs sticking out from each side. That millipede is the chip—or, more precisely, the plastic package that holds the chip. The two rows of legs along the sides are the electronic leads that connect the keyboard and the display screen to the chip.
Inside a computer—even the smallest handheld computer—are whole platoons of these small black millipedes, each chip designed for a specific function. Open the back of a desktop computer, for example, and you can count about 200 separate chips. There are memory chips, logic chips, input-output chips, and microprocessor chips, all lined up in formation on the various circuit boards. Part of the miracle of microelectronics is that more and more separate functional elements can be squeezed into a single chip; a simple drugstore calculator like DIM-I uses just one chip that has all the necessary functional elements built into it.
Just for simplicity’s sake, we’ll say that the chip inside DIM-I is a TMS 1000C, one of the common microprocessors designed for small calculators. It has (1) a set of logic gates that read electronic signals from the keyboard and “encode” them in the binary form that a calculator can understand. It has (2) a relatively small number of memory units—chains of transistors lined up in ordered rows so that each one can be addressed separately. That means the machine can reach any memory unit directly, in any order; hence the name random-access memory, or RAM. It has (3) an arithmetic processor unit—a group of transistors arranged in gates so that they can use Boolean logic to perform simple math. And it has (4) a set of gates that “decode” binary information back into decimal form and send it to the display screen. In other words, the chip has the necessary circuitry to do four things:
1. Sense numbers punched into its keyboard.
2. Write them on an electronic scratch pad.
3. Add, subtract, divide, and multiply them.
4. Report the sum in the form of lighted digits on the display screen.
Which is just another way of saying that DIM-I can carry out the four essential jobs of any digital device (sketched in code just after this list):
1. input
2. memory
3. processing
4. output
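Those four jobs can be sketched in a few lines of Python. The sketch is a toy of my own devising, nothing like the TMS 1000C’s real circuitry, but it shows how little “intelligence” the four stages require.

```python
# A toy model of DIM-I's four stages: input, memory, processing, output.
# Purely illustrative; not the design of the TMS 1000C or any real chip.

class DIM1:
    def __init__(self):
        self.memory = {}                      # the electronic scratch pad (RAM)

    def input(self, key):                     # 1. sense a key and encode it in binary
        return int(key) & 0b1111

    def store(self, address, value):          # 2. write it into a memory cell
        self.memory[address] = value

    def process(self, op, addr_a, addr_b):    # 3. do the arithmetic on stored values
        a, b = self.memory[addr_a], self.memory[addr_b]
        return {'+': a + b, '-': a - b, '*': a * b, '/': a // b}[op]

    def output(self, value):                  # 4. decode the result back into decimal digits
        return str(value)

calc = DIM1()
calc.store(0, calc.input('3'))
calc.store(1, calc.input('2'))
print(calc.output(calc.process('+', 0, 1)))   # -> "5" appears on the display
```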
A fancier machine—say, DIM-II or DIM-III—might be able to handle larger numbers, produce graphs on the screen, and store greater quantities of information in memory. By the time one gets to DIM-X or DIM-XLVII, the machine can deal with words as well as numbers and manipulate the information in countless ways that are beyond the wildest dreams of our little calculator. DIM-I, by contrast, can’t do much. But it can demonstrate to us the basic computational mechanics, because what it does do it does in the same way as every other calculator and computer.
The processing circuitry of DIM-I, like that in any digital device, also has a central set of logic gates called the control unit. This is a sort of central switchboard that busily directs electronic pulses here and there, from input to memory to processor to the display screen, as needed to solve the problem. The control unit is itself controlled, in turn, by a simple, familiar device that is essential to the operation of any digital machine—a clock.
To the extent that we attribute anthropomorphic characteristics at all to computers, the proper analogy would be not that the machine has a brain, but rather that it has a heart—a steady, pulsing central rhythm instrument that orchestrates and controls everything that happens. The computer engineers call the central clock a clock generator, because it really is a circuit that generates pulses at a perfectly steady, unvarying rate. The clock is to the computer what the bandleader is to the band; it stands there keeping time, ONE-two-three, ONE-two-three, so that each part will come in at the right beat.
The clock inside DIM-I beats, without variation, every hundred thousandth of a second—that is, it emits 100,000 pulses every second, or one pulse every 1/100,000 second. This is a fairly standard clock rate for cheap, simple pocket calculators. It is slow compared to the operating speed of large mainframe computers—or of small laptop computers, for that matter—but extremely fast compared to, say, a flash of lightning or the blink of an eye. An eye’s blink takes about .30 second; in the blink of an eye, the clock inside a simple calculator will have pulsed 30,000 times.
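The arithmetic behind that comparison is simple enough to check; the figures below are the ones just given.

```python
clock_rate = 100_000             # pulses per second in a simple calculator
clock_period = 1 / clock_rate    # seconds between pulses
blink = 0.30                     # seconds for one blink of an eye

print(clock_period)                    # -> 1e-05, i.e. 10 microseconds per pulse
print(round(blink * clock_rate))       # -> 30000 pulses during a single blink
```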
There is no way for humans, in our poky world of seconds, minutes, and hours, to conceive of a time period like 1/100,000 second, much less the microsecond (1/1,000,000 second), the nanosecond (1/1,000,000,000 second), the picosecond (1/1,000,000,000,000 second), or the femtosecond (1/1,000,000,000,000,000 second). On the human scale, anything that lasts less than about a tenth of a second passes by too quickly for the brain to form a visual image, and is thus invisible; if the duration is less than a thousandth of a second or so, the event becomes too fast even for subliminal perception and is completely outside the human sphere. The speed of microelectronic events puts them in a world far removed from the human realm; how can an engineer contemplate a thousandth of a thousandth of a second? Computer engineers, practical types not often given to metaphysical speculation, don’t even try. They just become, as Bob Noyce said, “reconciled” to the notion that their machines work at unthinkably high speeds.
The inconceivable speed of operation comes about because the “moving parts” of a digital machine are electronic pulses that travel inconceivably fast over distances inconceivably small. An electric signal moves at the speed of light, 186,000 miles per second. This makes for extraordinarily rapid transit, even at transcontinental distances.
A baseball fan in Boston turns on his TV to watch the big game in Los Angeles. He watches the pitch coming in, the hitter start to swing, and the smack of bat against ball. He does not see this at the precise instant it happens, though. The electronic signals carrying the picture to his television take about .016 second to travel the 3,000 miles from America’s West Coast to the East. Thus the viewer will not see the bat hit the ball until about 1/62 second after the impact actually occurs. (For a fan watching the game in the form of streaming video over the Internet, the scene may be delayed even more—as much as half a second more—by the software that converts the impulse to a video image on the computer screen.)
The signals that turn switches on and off inside our calculator also move at the speed of light, but they do not have to travel 3,000 miles. On a quarter-inch square integrated circuit containing 10,000 components, individual transistors are spaced a few ten thousandths of an inch apart. Electronic charges moving at the speed of light traverse those distances in far less than a billionth of a second. Consequently, the clock generator that controls the dispatch of pulses around the chip can be set to tick at extremely short intervals.
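A rough calculation shows the difference in scale between the coast-to-coast trip and the trip across a chip. The sketch below treats the signal as moving at the full speed of light in both cases, as the text does, and takes “a few ten thousandths of an inch” to mean 0.0003 inch purely for illustration.

```python
# Rough transit-time arithmetic for the two cases described above.
SPEED_OF_LIGHT = 186_000            # miles per second

coast_to_coast = 3_000              # miles, Los Angeles to Boston
print(coast_to_coast / SPEED_OF_LIGHT)    # -> ~0.016 second (about 1/62 second)

inch_in_miles = 1 / 63_360          # 63,360 inches in a mile
transistor_gap = 0.0003 * inch_in_miles   # 0.0003 inch, expressed in miles
print(transistor_gap / SPEED_OF_LIGHT)    # -> ~2.5e-14 second, a few hundredths of a picosecond
```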
In setting the clock rate for digital machines, however, the human designers have to consider not only the transit time for pulses racing from one transistor to another but also the time it takes for each transistor to switch from on to off. The computer is made of a long, long chain of transistors; each one has to wait for the transistor ahead of it in line to flip one way or the other before it can do anything. In most modern computers, the transistors’ switching time (known to engineers as propagation delay) is a more serious constraint than the travel time for signals moving through the circuit.
“Propagation delay,” like many other elements of microelectronic jargon, is a complex term for a simple and familiar phenomenon. Propagation delays occur on the freeway every day at rush hour. If 500 cars are proceeding bumper to bumper and the first car stops, all the others stop as well. When the first driver puts his foot back on the accelerator, the driver in the second car sees the brake lights ahead of her go off, and she, in turn, switches her foot from brake to accelerator. This switching action, from brake to accelerator, is then relayed down the chain of cars. If each driver has a switching time of just one second, the last car will have to wait 500 seconds—about 8⅓ minutes—because of the propagation delay down the line of traffic.
The transistors inside a digital device go from brake to accelerator—from off to on—somewhat faster. In the most high-powered machines, the switching time is about one nanosecond—one billionth of a second. In the current generation of personal computers, using the Pentium IV microprocessor, the transistors switch in about two billionths of a second. By those standards, a cheap pocket calculator like DIM-I is a tortoise. Its transistors take about 5 microseconds to go from on to off. To account for that much propagation delay and to allow a little additional time for signals to travel through the circuitry on the chip, the clock—the time signature that sets the tempo for each operation—in a small calculator is set to tick every 10 microseconds, or once every hundred thousandth of a second.
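The same bookkeeping, in numbers, for the freeway and for the calculator. The 500 cars, the one-second reaction time, and the 5-microsecond switching time are the figures given above; the even split between switching delay and travel allowance is my own illustration.

```python
# The freeway version: each driver can react only after the one ahead has,
# so the delays simply add up down the chain.
cars = 500
reaction_time = 1.0                        # seconds per driver, brake to accelerator
print(cars * reaction_time)                # -> 500 seconds, a bit over 8 minutes

# The calculator version: the clock must leave room for a transistor to switch,
# plus a little extra for signals to travel across the chip.
switching_delay = 5e-6                     # DIM-I's transistors take about 5 microseconds
travel_allowance = 5e-6                    # illustrative margin for signal travel
print(switching_delay + travel_allowance)  # -> 1e-05 second: the 10-microsecond clock beat
```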
The ticking of the clock inside a calculator or computer regulates a repetitive cycle of operations—the instruction cycle—that the machine performs continuously, over and over, as long as it is turned on. The instruction cycle is a two-stage affair.
On the first clock pulse the computer’s control unit—the switchboard—sends a message to memory, asking for the next instruction. The instruction (in the form of a binary pattern of charges) flashes back to the switchboard, which holds it until the next tick of the clock. When the next clock pulse comes, the switchboard sends the instruction on to the appropriate part of the computer for execution. When one instruction has been executed, the controller waits for the next pulse. The clock ticks, the controller fetches the next instruction; the clock ticks again, and that instruction is carried out.
This two-stage instruction cycle—fetch and execute, fetch and execute—is the vital rhythm of the computer’s life, as fundamental and as constant as the two-stage respiratory cycle—inhale and exhale, inhale and exhale—of the human body. The clock generator regulates the process so that each signal—from memory, from the control unit, from the keyboard, from anywhere—arrives at its destination before the next signal starts its journey through the circuit.
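Here is a bare-bones rendering of that rhythm in Python, my own sketch rather than DIM-I’s actual control circuitry. Each pass through the loop is one fetch followed by one execute, and the program waiting in memory is a made-up instruction list.

```python
# A toy fetch-and-execute loop: the two-beat instruction cycle in miniature.

program = [             # the instructions waiting in memory (hypothetical)
    ("LOAD_A", 3),
    ("LOAD_B", 2),
    ("ADD", None),
    ("SHOW", None),
]

registers = {"A": 0, "B": 0, "RESULT": 0}
pointer = 0                          # which memory cell holds the next instruction

while pointer < len(program):
    # tick: FETCH the next instruction from memory
    op, value = program[pointer]
    pointer += 1
    # tock: EXECUTE it
    if op == "LOAD_A":
        registers["A"] = value
    elif op == "LOAD_B":
        registers["B"] = value
    elif op == "ADD":
        registers["RESULT"] = registers["A"] + registers["B"]
    elif op == "SHOW":
        print(registers["RESULT"])   # -> 5 appears on the "display"
```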
In DIM-I, as in video games and other simple digital tools, the sequence of instructions—the program—is permanently installed in memory when the machine is built. Such preprogrammed devices can do the exact tasks they were built for, and nothing else. A computer, in contrast, is not restricted to built-in programs; its versatility comes from the fact that it can be programmed by each user to perform a broad range of tasks. This is the difference between a “universal” machine—a computer—and a “dedicated” tool like a calculator. (There are pocket calculators that are called programmable, but in classic terms, machines like that are really computers, not calculators.) Buying a calculator is like buying a ticket on the railroad; you can go only where the company’s tracks will take you. Buying a computer, on the other hand, is more like buying a car; you have to steer it yourself, but you can drive it wherever you want to go.