The Chip: How Two Americans Invented the Microchip and Launched a Revolution


by T. R. Reid




  Table of Contents

  Title Page

  Dedication

  1 - THE MONOLITHIC IDEA

  2 - THE WILL TO THINK

  3 - A NONOBVIOUS SOLUTION

  4 - LEAP OF INSIGHT

  5 - KILBY V. NOYCE

  6 - THE REAL MIRACLE

  7 - BLASTING OFF

  8 - THE IMPLOSION

  9 - DIM-I

  10 - SUNSET, SUNRISE

  11 - THE PATRIARCHS

  ALSO BY T. R. REID

  AUTHOR’S NOTE

  A NOTE ABOUT SOURCES

  NOTES

  Copyright Page

  ALSO BY T. R. REID

  Confucius Lives Next Door

  For People—and for Profit:

  A Business Philosophy for the 21st Century

  (translator)

  Congressional Odyssey

  The Pursuit of the Presidency, 1980

  (coauthor)

  Ski Japan!

  Heisei Highs and Lows

  Seiko Hoteishiki

  Tomu no Me, Tomu no Mimi

  Matri filiisque in amore dedicatus.

  1

  THE MONOLITHIC IDEA

  The idea occurred to Jack Kilby at the height of summer, when everyone else was on vacation and he had the lab to himself. It was an idea, as events would prove, of literally cosmic dimensions, an idea that would be honored in the textbooks with a name of its own: the monolithic idea. The idea would eventually win Kilby the Nobel Prize in Physics. This was slightly anomalous, because Jack had no training whatsoever in physics; the Royal Swedish Academy of Sciences was willing to overlook that minor detail because Jack’s idea did, after all, change the daily life of almost everyone on earth for the better. But all that was in the future. At the time Kilby hit on the monolithic idea—it was July 1958—he only hoped that his boss would let him build a model and give the new idea a try.

  The boss was still an unknown quantity. It had been less than two months since Jack Kilby arrived in Dallas to begin work at Texas Instruments, and the new employee did not yet have a firm sense of where he stood. Jack had been delighted and flattered when Willis Adcock, the famous silicon pioneer, had offered him a job at TI’s semiconductor research group. It was just about the first lucky break of Jack Kilby’s career; he would be working for one of the most prominent firms in electronics, with the kind of colleagues and facilities that could help a hard-working young engineer solve important problems. Still, the pleasure was tempered with some misgivings. Jack’s wife, Barbara, and their two young daughters had been happy in Milwaukee, and Jack’s career had blossomed there. In a decade working at a small electronics firm called Centralab, Kilby had made twelve patentable inventions (including the reduced titanate capacitor and the steatite-packaged transistor). Each patent brought a small financial bonus from the firm and a huge feeling of satisfaction. Indeed, Jack said later that the most important discovery he made at Centralab was the sheer joy of inventing. It was problem solving, really: you identified the problem, worked through 5 or 50 or 500 possible approaches, found ways to circumvent the limits that nature had built into materials and forces, and perfected the one solution that worked. It was an intense, creative process, and Jack loved it with a passion. It was that infatuation with problem solving that had lured him, at the age of thirty-four, to take a chance on the new job in Dallas. Texas Instruments was an important company, and it was putting him to work on the most important problem in electronics.

  By the late 1950s, the problem—the technical journals called it “the interconnections problem” or “the numbers barrier” or, more poetically, “the tyranny of numbers”—was a familiar one to the physicists and engineers who made up the electronics community. But it was still a secret to the rest of the world. In the 1950s, before Chernobyl, before the space shuttle Challenger blew up, before the advent of Internet porn or cell phones that ring in the middle of the opera, the notion of “technological progress” still had only positive connotations. Americans were looking ahead with happy anticipation to a near future when all the creations of science fiction, from Dick Tracy’s wrist radio to Buck Rogers’s air base on Mars, would become facts of daily life. Already in 1958 you could pull a transistor radio out of your pocket—a radio in your pocket!—and hear news of a giant electronic computer that was receiving signals beamed at the speed of light from a miniaturized transmitter in a man-made satellite orbiting the earth at 18,000 miles per hour. Who could blame people for expecting new miracles tomorrow?

  There was an enormous appetite for news about the future, an appetite that magazines and newspapers were happy to feed. The major breakthroughs in biology, genetics, and medicine were still a few years away, but in electronics, the late fifties saw some marvelous innovation almost every month. First came the transistor, the invention that gave birth to the new electronic age—and then there was the tecnetron, the spacistor, the nuvistor, the thyristor. It hardly seemed remarkable when the venerable British journal New Scientist predicted the imminent development of a new device, the “neuristor,” which would perform all the functions of a human neuron and so make possible the ultimate prosthetic device—the artificial brain. Late in 1956 a Life magazine reporter dug out a secret Pentagon plan for a new kind of missile—a troop-carrying missile that could pick up a platoon at a base in the United States and then “loop through outer space and land the troops 500 miles behind enemy lines in less than 30 minutes.” A computer in the missile’s nose cone would assure the pinpoint accuracy required to make such flights possible. A computer in a nose cone? That was a flight of fancy in itself. The computers of the 1950s were enormous contraptions that filled whole rooms—in some cases, whole buildings—and consumed the power of a locomotive. But that, too, would give way to progress. Sperry-Rand, the maker of UNIVAC, the computer that had leaped to overnight fame on November 4, 1952, when it predicted Dwight Eisenhower’s electoral victory one hour after the polls closed, was said to be working on computers that would fit on a desktop. And that would be just the beginning. Soon enough there would be computers in a briefcase, computers in a wristwatch, computers on the head of a pin.

  Jack Kilby and his colleagues in the electronics business—the people who were supposed to make all these miracles come true—read the articles with a rueful sense of amusement. There actually were plans on paper to implement just about every fantasy the popular press reported; there were, indeed, preliminary blueprints that went far beyond the popular imagination. Engineers were already making their first rough plans for high-capacity computers that could steer a rocket to the moon or connect every library in the world to a single worldwide web accessible from any desk. But it was all on paper. It was all impossible to produce because of the limitation posed by the tyranny of numbers. The interconnections problem stood as an impassable barrier blocking all future progress in electronics.

  And now, on a muggy summer’s day in Dallas, Jack Kilby had an idea that might break down the barrier. Right from the start, he thought he might be on to something revolutionary, but he did his best to retain a professional caution. A lot of revolutionary ideas, after all, turn out to have fatal flaws. Day after day, working alone in the empty lab, he went over the idea, scratching pictures in his lab notebook, sketching circuits, planning how he might build a model. As an inventor, Jack knew that a lot of spectacular ideas fall to pieces if you look at them too hard. But this one was different: the more he studied it, the more he looked for flaws, the better it looked.

  When his colleagues came back from vacation, Jack showed his notebook to Willis Adcock. “He was enthused,” Jack wrote later, “but skeptical.” Adcock remembers it the same way. “I was very interested,” he recalled afterward. “But what Jack was saying, it was pretty damn cumbersome; you would have had a terrible time trying to produce it.” Jack kept pushing for a test of the new idea. But a test would require a model; that could cost $10,000, maybe more. There were other projects around, and Adcock was supposed to move ahead on them.

  Jack Kilby is a gentle soul, easygoing and unhurried. A lanky, casual, down-home type with a big leathery face that wraps around an enormous smile, he talks slowly, slowly in a quiet voice that has never lost the soft country twang of Great Bend, Kansas, where he grew up. That deliberate mode of speech reflects a careful, deliberate way of thinking. Adcock, in contrast, is a zesty sprite who talks a mile a minute and still can’t keep up with his racing train of thought. That summer, though, it was Kilby who was pushing to race ahead. After all, if they didn’t develop this new idea, somebody else might hit on it. Texas Instruments, after all, was hardly the only place in the world where people were trying to overcome the tyranny of numbers.

  The monolithic idea occurred to Robert Noyce in the depth of winter—or at least in the mildly chilly season that passes for winter in the sunny valley of San Francisco Bay that is known today, because of that idea, as Silicon Valley. Unlike Kilby, Bob Noyce did not have to check with the boss when he got an idea; at the age of thirty-one, Noyce was the boss.

  It was January 1959, and the valley was still largely an agricultural domain, with only a handful of electronics firms sprouting amid the endless peach and prune orchards. One of those pioneering firms, Fairchild Semiconductor, had been started late in 1957 by a group of physicists and engineers who guessed—correctly, as it turned out—that they could become fantastically rich by producing improved versions of transistors and other electronic devices. The group was long on technical talent and short on managerial skills, but one of the founders turned out to have both: Bob Noyce. A slender, square-jawed man who exuded the easy self-assurance of a jet pilot, Noyce had an unbounded curiosity that led him, at one time or another, to take up hobbies ranging from madrigal singing to flying seaplanes. His doctorate was in physics, and his technical specialty was photolithography, an exotic process for printing circuit boards that required state-of-the-art knowledge of photography, chemistry, and circuit design. Like Jack Kilby, Noyce preferred to direct his powerful intelligence at specific problems that needed solving, and he shared with Kilby an intense sense of exhilaration when he found a way to leap over some difficult technical obstacle. At Fairchild, though, he also became fascinated with the discipline of management, and gravitated to the position of director of research and development. In that job, Noyce spent most of his time searching for profitable solutions to the problems facing the electronics industry. In the second half of the 1950s, that meant he was puzzling over things like the optimum alloy to use for base and emitter contacts in double-diffused transistors, or efficient ways to passivate junctions within the silicon wafer. Those were specific issues involving the precise components Fairchild was producing at the time. But Noyce also gave some thought during the winter of 1958–59 to a much broader concern: the tyranny of numbers.

  Unlike the quiet, introverted Kilby, who does his best work alone, thinking carefully through a problem, Noyce was an outgoing, loquacious, impulsive inventor who needed somebody to listen to his ideas and point out the ones that couldn’t possibly work. That winter, Noyce’s main sounding board was his friend Gordon Moore, a thoughtful, cautious physical chemist who was another cofounder of Fairchild Semiconductor. Noyce would barge into Moore’s cubicle, full of energy and excitement, and start scrawling on the blackboard: “If we built a resistor here, and the transistor over here, then maybe you could . . .”

  Not suddenly, but gradually, in the first weeks of 1959, Noyce worked out a solution to the interconnections problem. On January 23, he recalled later, “all the bits and pieces came together in my head.” He grabbed his notebook and wrote down an idea. It was the monolithic idea, and Noyce expressed it in words quite similar to those Jack Kilby had entered in a notebook in Dallas six months earlier: “...it would be desirable to make multiple devices on a single piece of silicon, in order to be able to make interconnections between devices as part of the manufacturing process, and thus reduce weight, size, etc. as well as cost per active element.”

  Like Kilby, Noyce felt fairly sure from the beginning that he was on to something important. “There was a tremendous motivation then to do something about the numbers barrier,” he recalled later. “The [electronics] industry was in a situation—for example, in a computer with tens of thousands of components, tens of thousands of interconnections—where things were just about impossible to make. And this looked like a way to deal with that. I can remember telling Gordon one day, we might have here a solution to a real big problem.”

  At its core, the big problem that the monolithic idea was designed to solve was one of heightened expectations. It was hardly an unprecedented phenomenon in technological history: a major breakthrough prompts a burst of optimistic predictions about the bright new world ahead, but then problems crop up that make that rosy future unobtainable—until a new breakthrough solves the new problem.

  The breakthrough that gave rise to the problem known as the tyranny of numbers was a thunderbolt that hit the world of electronics at the end of 1947. It was a seminal event of postwar science, one of those rare developments that change everything: the invention of the transistor.

  Until the transistor came along, electronic devices, from the simplest AM radio to the most complex mainframe computer, were all built around vacuum tubes. Anybody old enough to have turned on a radio or television set before, say, 1964, may remember the radio tube: when you turned on the switch, you could look through the holes in the back of the set and see a bunch of orange lights begin to glow—the filaments inside the vacuum tubes. A tube gave off light because it was essentially the same thing as a light bulb; inside a vacuum sealed by a glass bulb, electric current flowed through a wire filament, heating the filament and giving off incandescent light. There has to be a vacuum inside the glass bulb or else the filament will burn up instantly from the heat. But the vacuum turned out to offer advantages that went beyond fire protection. Experimenting with light bulbs at the beginning of the twentieth century, radio pioneers found that if they ran some extra wires into that vacuum bulb, it could perform two useful electronic functions. First, it could pull a weak radio signal from an antenna and strengthen, or amplify, it enough to drive a loudspeaker, thus converting an electronic signal into sound loud enough to hear. This “amplification” function made radio, and later television, workable. Second, a properly wired light bulb, or vacuum tube, could switch—about 10,000 times in a second—from on to off. (Because of its ability to turn current on and off, the radio tube was known in England as a valve.) This capability was essential to digital computers; as we’ll see in Chapter 6, computers make logical decisions and carry out mathematical computations through various combinations of on and off signals.
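  Reid takes up this on-and-off logic in Chapter 6. As a brief modern aside, not part of the book’s text, the sketch below uses Python booleans to stand in for tube or transistor switches and shows how two on/off signals can be combined into a “half adder,” the kind of elementary building block from which a digital computer assembles its arithmetic; the function name and framing are illustrative assumptions only.

```python
# Illustrative sketch only: each signal is simply "on" (True) or "off" (False),
# the two states a vacuum-tube or transistor switch provides.

def half_adder(a: bool, b: bool) -> tuple[bool, bool]:
    """Combine two on/off signals into a sum bit and a carry bit."""
    sum_bit = a != b   # exclusive OR: on if exactly one input is on
    carry = a and b    # AND: on only if both inputs are on
    return sum_bit, carry

# Adding 1 + 1 in binary: the sum bit goes off and the carry goes on (binary 10).
print(half_adder(True, True))   # prints (False, True)
```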

  But vacuum tubes were big, expensive, fragile, and power hungry. They got hot, too. The lavish console radios that became the rage in the 1930s all carried warnings to owners not to leave papers near the back of the set, because the heat of all those tubes might start a fire. In a more complicated device that needed lots of tubes packed in close to each other, like a computer or a telephone switching center, all those glowing filaments gave off such enormous quantities of heat that they transformed expensive machinery into smoldering hunks of molten glass and metal—in effect, turning gold into lead. As we all know from the light bulb, vacuum tubes have an exasperating tendency to burn out at the wrong time. The University of Pennsylvania’s ENIAC, the first important digital computer, never lived up to its potential because tubes kept burning out in the middle of its computations. The Army, which used ENIAC to compute artillery trajectories, finally stationed a platoon of soldiers manning grocery baskets full of tubes at strategic points around the computer; this proved little help, because the engineers could never quite tell which of the machine’s 18,000 vacuum tubes had burned out at any particular time. The warmth and the soft glow of the tubes also attracted moths, which would fly through ENIAC’s innards and cause short circuits. Ever since, the process of fixing computer problems has been known as debugging.

  The transistor, invented two days before Christmas 1947 by William Shockley, Walter Brattain, and John Bardeen of Bell Labs, promised to eliminate all the bugs of the vacuum tube in one fell swoop. The transistor was something completely new. It was based on the physics of semiconductors—elements like silicon and germanium that have unusual electronic characteristics. The transistor performed the same two useful tasks as the vacuum tube—amplification and rapid on-off switching—by moving electronic charges along controlled paths inside a solid block of semiconductor material. There was no glass bulb, no vacuum, no warm-up time, no heat, nothing to burn out; the transistor was lighter, smaller, and faster—even the earliest models could switch from on to off about twenty times faster—than the tube it replaced.

  To the electronics industry, this was a godsend. By the mid-1950s, solid state was becoming the standard state for radios, hearing aids, and most other electronic devices. The burgeoning computer industry happily embraced the transistor, as did the military, which needed small, low-power, long-lasting parts for ballistic missiles and the nascent space program. The transistor captured the popular imagination in a way no other technological achievement of the postwar era had. Contemporary scientific advances in nuclear fission, rocketry, and genetics made awesome reading in the newspapers, but were remote from daily life. The transistor, in contrast, was a breakthrough that ordinary people could use. The transistorized portable radio, introduced just in time for Christmas 1954, almost instantly became the most popular new product in retail history. It was partly synergy—pocket radios came out when a few pioneering disc jockeys were promoting a new music called rock ’n’ roll—and partly sheer superiority. The first transistor radio, the Regency, was smaller, more power-efficient, far more reliable, and much cheaper ($49.95) than any radio had ever been before.

 
