The Chip: How Two Americans Invented the Microchip and Launched a Revolution


by T. R. Reid


  Working in a relatively small firm, where the circuit designers in the engineering lab had regular contact with the plant managers, Kilby soon learned—probably sooner than many other people in the business—that realities of the manufacturing process severely restricted the complexity of transistorized circuitry. Upstairs in the lab, Kilby and his colleagues could design a hearing aid or a radio amplifier that squeezed unheard-of numbers of components into minute spaces. But down in the factory, those circuits could not be built; there were just too many interconnections too close together for the human hand to make them. “Jack Morton at Bell Labs suggested that electronics was facing a ‘tyranny of numbers,’ ” Kilby recalled, “and that was a perfect term for it because the numbers of parts and connections in some of these new circuits were just too big. The simple fact was that you could not do everything that an engineer would want to do.”

  For Kilby, the recognition of this major new problem was electrifying. Just as he was coming into his own as an engineer, a problem solver, the world of electronics was up against a baffling problem of premier importance. The advent of the transistor offered enormous, world-shaking possibilities, but they would never be realized unless somebody found a way around the problem of numbers. There was a huge and growing gap between design and production. Like everyone else in the industry, Jack Kilby plunged into the search for a way to bridge that gap.

  It was evident, though, that solving the tyranny of numbers, if indeed a solution could be found, was a task that would require large resources—considerably larger than a firm the size of Centralab could muster. “I felt,” Kilby wrote later, “. . . that it would not be possible for very small groups with limited funding to be competitive. I decided to leave the company.” Early in 1958 he sent out his résumé to engineers at a number of larger firms. Because the budding electronics industry was a thoroughly meritocratic universe, the lack of the MIT degree was irrelevant now. Jack’s reputation, and his catalogue of patents, made him a hot commodity in the field. IBM made an offer. Motorola did as well. And there also came a letter from Willis Adcock of Texas Instruments.

  Texas Instruments today, largely because of Jack Kilby, is a global semiconductor giant, one of the world’s leading manufacturers of microelectronic devices. In 1958, though, it was just beginning to make a mark in the electronics business. The company had been born in 1930 as Geophysical Service Incorporated; its business was sending sound waves deep into the earth to find potential oil drilling sites. During World War II the same deep-sounding methods proved useful for locating enemy submarines, and GSI became a defense contractor. Patrick Haggerty, an ambitious executive who joined the firm after the war, expanded the government business to the point where manufacture of electronic instruments became more important than geophysical research. Convinced that great things were in store for his little firm, the visionary Haggerty changed the company’s name to General Instruments—an audacious suggestion that this impudent pup in Dallas could stand with General Electric and the other eastern electronics giants. The Pentagon didn’t like the choice—it had another supplier with a similar name—so Haggerty unhappily fell back on geography: Texas Instruments.

  It was another audacious Haggerty gambit that started TI on its road to dominance. In 1952, when transistors were still exotic, unreliable devices costing $15 or more each, Haggerty hired a Bell Labs physicist named Gordon Teal and ordered him to develop a reliable mass-production transistor that would sell for $2.50. Teal did it. In 1954, Haggerty launched his most famous initiative: he put his cheap, reliable transistors into a consumer product—the pocket radio. The idea was a smash hit in the marketplace. More important, it made the transistor a common household item and Texas Instruments a common name in electronics.

  The first pocket radios, like all transistorized equipment of the day, used transistors made of germanium, a material easy to work with but unsatisfactory for many applications because it could not operate at high temperatures. Another semiconductor material, silicon, could withstand heat. But silicon was almost impossible to work with; the material is brittle and difficult to purify, and components made of silicon tended to shatter on the assembly line. Haggerty, not the type to let impossible manufacturing obstacles stand in his way, ordered Gordon Teal and Willis Adcock, a physical chemist, to devise a silicon transistor. On the boss’s command, the project was pursued under security arrangements that any of the world’s spy agencies would admire. In the end, Haggerty’s ambition and his commitment to secrecy paid off in spectacular fashion. In May 1954, Teal and Adcock attended a technical meeting on manufacturing problems in the transistor business. The holy grail at the time was a transistor made of silicon, but speaker after speaker at the meeting set forth the insuperable problems posed by silicon. Finally, Teal rose to speak. He had listened with interest, he said, to the bleak predictions about silicon’s feasibility. “Our company,” he noted calmly, “now has two types of silicon transistor in production. . . . I just happen to have some here in my coat pocket.” Adcock then appeared, carrying a record player that employed a germanium transistor. As a record played, Teal dunked the transistor in a vat of boiling oil; the sound stopped. Next Teal wired in one of TI’s new silicon transistors. He dumped it into the hot oil; the band played on. The meeting ended in pandemonium, for in those days long before the advent of the cellular phone, Teal’s presentation sparked a mad race to the telephone booths among salesmen and engineers from the major eastern electronics firms. A reporter from Fortune magazine overheard a sales rep from General Electric shouting into the phone: “They’ve got the silicon transistor! In Texas!!” It was not clear which of these facts was the more astounding.

  Texas Instruments was on its way, and TI’s triumphant engineers began producing more and more ambitious designs. Soon enough, though, the tyranny of numbers became evident to the people in Dallas. Willis Adcock was placed in charge of a major research effort to surmount this obstacle to further progress in electronics. One of the first solutions TI worked on was an idea called the Micro-Module. The theory behind it was that all the components of a circuit could be manufactured in the same size and shape, with wiring built right into each component. These identical modules could then be snapped together, like a child’s Lego blocks, to make instant circuits. The concept was important to Texas Instruments, not so much because of its intrinsic merits, but because it was important to the U.S. Army. Each of the military services was pursuing its own solution to the interconnections problem, and the Army keenly desired that its proposal should prevail; if TI could deliver, it would become the darling of all Army contracting officers. Thus when Jack Kilby arrived at Adcock’s lab in May 1958, the Micro-Module was the hottest thing going. Kilby disliked it from the start.

  This feeling was partly an engineer’s intuition; the Micro-Module bore some resemblance to an idea that had flopped at Centralab, and Kilby didn’t think it would work any better in Dallas. To a systematic problem solver like Jack Kilby, though, the real flaw was more basic: the Micro-Module implied the wrong definition of the problem. The real problem posed by the tyranny of numbers was numbers. The Micro-Module did nothing to reduce the huge quantities of individual components in sophisticated circuits. No engineer could work with much enthusiasm on a solution to the wrong problem; Kilby’s heart sank at the thought that he had left a good job and moved his family across the country only to be put to work on a project that was fundamentally off target.

  Texas Instruments then had a mass vacation policy; everybody took off the same few weeks in July. Kilby, who hadn’t been around long enough to earn vacation time, was left alone in the semiconductor lab. He was in a rotten mood. “I felt it likely,” he recalled years later, “that I would be put to work on a proposal for the Micro-Module program when vacation was over—unless I came up with a good idea very quickly.”

  Kilby plunged in with his wide-angle approach, soaking up every fact he could about the problem at hand and the ways Texas Instruments might solve it. Among much else, he took a close, analytical look at his new firm and its operations. The obvious fact that emerged was Texas Instruments’ heavy commitment to silicon. To capitalize on its victory in the race to develop silicon transistors, TI had invested millions of dollars in equipment and techniques to purify silicon and manufacture transistors with it. “If Texas Instruments was going to do something,” Jack explained later, “it probably had to involve silicon.” This conclusion provided Kilby the focus he needed for the narrow, concentrated phase of problem solving. He began to think, and think hard, about silicon. What could you do with silicon?

  Jack Kilby’s answer to that question has come to be known as the monolithic idea. The idea has so changed the world that it is just about impossible today to reconstruct what things were like before he thought of it—and thus it is almost impossible to appreciate how ingenious, and how daring, the answer was. The monolithic idea has become an elementary part of modern science, as fundamental, and as obvious, as J. J. Thomson’s daring suggestion that there were tiny charged particles swirling around inside the atom. In July 1958, though, Kilby’s answer was hardly elementary.

  What could you do with silicon? It was already known in 1958 that the standard semiconductor devices, diodes and transistors, could be made of silicon, if the silicon was doped with the proper impurities to make it conduct electric charges. But if the silicon had no impurities, its electrons would all be bound in place. Charges would barely flow through such a piece of silicon; it would impede a current just as a standard resistor does. Kilby thought about that: A silicon resistor? Why not? A strip of undoped silicon could act as a resistor. It wouldn’t be as good as a standard carbon resistor, but it would work. For that matter, by taking advantage of the peculiarities of the P-N junction, you could also make a capacitor out of silicon. Not much of a capacitor, to be frank about it—its performance wouldn’t equal that of a standard metal-and-porcelain capacitor—but it would work. And come to think of it—this was the idea that would revolutionize electronics—if you could make all parts of a circuit out of one material, you could manufacture all of them, all at once, in a monolithic block of that material.
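The arithmetic behind a silicon resistor is just the standard formula R = ρL/A. A back-of-the-envelope sketch in Python (the strip dimensions here are made-up illustrative values, and the resistivity figures are textbook room-temperature numbers, not anything from Kilby's notebook):

```python
def strip_resistance(resistivity_ohm_cm, length_cm, width_cm, thickness_cm):
    """Resistance of a rectangular strip: R = rho * L / A."""
    cross_section_cm2 = width_cm * thickness_cm  # area the current flows through
    return resistivity_ohm_cm * length_cm / cross_section_cm2

# Intrinsic (undoped) silicon at room temperature is roughly 2.3e5 ohm-cm,
# versus about 1.7e-6 ohm-cm for copper -- some eleven orders of magnitude
# more resistive, which is why a bare strip can stand in for a resistor.
r_silicon = strip_resistance(2.3e5, length_cm=0.5, width_cm=0.05, thickness_cm=0.05)
# In practice, lightly doped silicon (with far lower resistivity) is what
# gives the moderate resistance values a real circuit calls for.
```

At these toy dimensions the undoped strip works out to tens of megohms; doping lets the designer dial the value down to whatever the circuit needs.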

  The more Kilby thought about it, the more appealing this notion became. If all the parts were integrated on a single slice of silicon, you wouldn’t have to wire anything together. Connections could be laid down internally within the semiconductor chip. No matter how complex the circuit was, nobody would have to solder anything together. No wires. No soldering. The numbers barrier would disappear. And without wiring or connections, an awful lot of components could be squeezed into a pretty small chip. On July 24, 1958, Kilby opened his lab notebook and wrote down the monolithic idea: “The following circuit elements could be made on a single slice: resistors, capacitor, distributed capacitor, transistor.” He made rough sketches of how each of the components could be realized by proper arrangements of N-type and P-type semiconductor material.

  Some four decades later, when that sentence from Jack’s old notebook was read out at the Nobel Prize ceremony in Stockholm, the idea that a whole circuit could be built on a single microchip of silicon was so common it appeared in junior high textbooks. In 1958, however, this suggestion was so “nonobvious” as to be astonishing. “Nobody would have made these components out of semiconductor material then,” Kilby has explained. “It didn’t make very good resistors or capacitors, and semiconductor materials were considered incredibly expensive. To make a one-cent carbon resistor from good-quality semiconductor seemed foolish.” Building a resistor out of silicon seemed about as sensible as building a boxcar out of gold; you could probably do it, but why bother? Even Kilby was a little skeptical at first: “You couldn’t be sure that there weren’t some real flaws in the scheme somewhere.” The only way to find out was to build a model of this integrated circuit and give it a test. To do that, Kilby would need the boss’s okay.

  When everybody came back from vacation, eager to get cracking on the Micro-Module, Kilby showed his notebook sketches to Willis Adcock. “Willis was not as high on it as I was,” Kilby recalled later. Adcock was intrigued with the idea but had doubts about its practicality; “it was pretty damn cumbersome,” he said afterward. It was probably worth trying; but on the other hand, Adcock was supposed to be making Micro-Modules to keep the Army happy. To build Kilby’s model, he would have to divert people from that project and put them on the previously untried task of building a complete circuit out of semiconductors. Adcock hesitated. Kilby pushed. Eventually, they made a deal: if Kilby could actually make a working resistor and a working capacitor out of separate pieces of silicon, Adcock would authorize the far more costly effort to construct an integrated circuit on a single semiconductor chip. Kilby painstakingly carved a resistor out of a strip of pure silicon. Then he took a bipolar strip of silicon and wired the P-N junction to make the capacitor. He wired these strange devices into a test circuit, and they worked. Adcock looked it over, and then okayed the attempt to construct a complete circuit on a single chip.

  The design that Adcock chose was a phase-shift oscillator circuit, a classic unit for testing purposes because it involves all four of the standard circuit components. An oscillator is the opposite of a rectifier; it turns direct current into alternating current. If it works, the oscillator transforms a steady, direct current into fluctuating pulses of power that constantly change direction, back and forth, back and forth. The transformation shows up neatly on an oscilloscope, a piece of test equipment that displays electric currents graphically on a television-like screen. If you hook direct current—for example, from a battery—to the oscilloscope, the steady current will show up as a straight line across the screen.

  But if you put a phase-shift oscillator between the battery and the oscilloscope, the oscillating current will show up as a gracefully curving line—a sine wave—undulating across the screen.
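The two traces are easy to sketch numerically. A minimal Python illustration (the voltage, cycle count, and sample count are arbitrary choices for the sketch, not parameters of Kilby's circuit):

```python
import math

def dc_trace(volts, samples):
    """A battery wired straight to the scope: every sample identical -- a flat line."""
    return [volts] * samples

def oscillator_trace(volts, cycles, samples):
    """An ideal oscillator output: a sine wave swinging between +volts and -volts."""
    return [volts * math.sin(2 * math.pi * cycles * i / samples) for i in range(samples)]

flat = dc_trace(1.5, 8)             # 1.5 V at every instant
wave = oscillator_trace(1.5, 1, 8)  # one full cycle: up to +1.5 V, down through 0 to -1.5 V
```

A real phase-shift oscillator generates this waveform with an amplifier whose output is fed back to its input through a resistor-capacitor network that delays the signal by half a cycle; the sketch only mimics the resulting trace.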

  On September 12, 1958, Jack Kilby’s oscillator-on-a-chip, half an inch long and narrower than a toothpick, was finally ready. Jack had glued it to a glass slide so that it would sit flat on the table; it had wires sticking out here and there. Somebody with a sense of history took a picture of the thing, and Jack suddenly felt kind of embarrassed about how crude it looked. A group of Texas Instruments executives gathered in Kilby’s area in the lab to observe this tiny and wholly new species of circuit; Jack was surprised to see Mark Shepherd, one of the company’s most senior executives, among the onlookers. This thing had better work, he thought to himself. Conceptually, of course, Kilby knew it would. He had thought the thing through so often, there couldn’t be a flaw. Or could there? After all, nobody had ever done anything like this before. Kilby was strangely nervous as he hooked up the wires from the battery to his small monolithic circuit, and from the circuit to the oscilloscope. He fiddled with the dials on the oscilloscope. He checked the connections. He looked up at Adcock, who gave him a here-goes-nothin’ shrug. He checked the connections again. He took a deep breath. He pushed the switch. Immediately a bright green snake of light started undulating across the screen in a perfect, unending sine wave. The integrated circuit, the answer to the tyranny of numbers, had worked. The men in the room looked at the sine wave, looked at Kilby, looked at the chip, looked at the sine wave again. Then everybody broke into broad smiles. A new era in electronics had been born.

  4

  LEAP OF INSIGHT

  The little airplane jumped from the boy’s hand and shot off into the blue Iowa sky, the engine purring, the body spiraling perfectly, the plane racing higher and farther away every second, soaring right past the end of town and far out over the cornfields. It was almost a mile away, still performing beautifully, when he lost sight of it for good. “That was my first technological disaster,” Robert Norton Noyce recalled many years later. “Yeah, it worked a lot better than any model I’d ever built before, but it was gone.” Actually, it wasn’t gone. Six months later, during the harvest, a farmer found the toy plane among the cornstalks and guessed that it was probably the work of the minister’s son, who was always messing around with engines and gadgets and models.

  By then, though, the boy had moved on to other things. That lost airplane had prompted him to build a radio control unit for his next model. Radio proved so interesting that Bob Noyce and a buddy put together a pair of crude transceivers to send messages back and forth. Neither boy obtained a radio operator’s license, making their network a federal offense; the crime can safely be reported now because the statute of limitations on their violation has long since expired. The boys got interested in chemistry; soon Bob was mixing his own home-brew explosives out of gun cotton and nitrogen triiodide. Then he found an old washing machine motor and tried to make it drive his bicycle. Then there was something else, and something else after that. “I was a pretty curious kid,” Noyce said later, looking back. “I was always trying to figure out how everything worked.”

  Some things never change. The most striking thing about Robert N. Noyce—the grownup version, that is—was the enormous range of his interests and the breadth of his activities. Until the day he died, he still wanted to know how everything worked. At various stages of his professional career he was a theoretical physicist, an inventor, a corporate chief executive, a venture capitalist, a lobbyist, and eventually, elder statesman and leading spokesman for the American semiconductor industry. His colleagues and competitors called him “the mayor of Silicon Valley,” and the only problem with that title was that his influence reached well beyond Silicon Valley in his last decade or so. He was successful as a scientist and engineer—he won a slew of academic and industrial awards, and surely would have shared the Nobel Prize with Jack Kilby if he had lived long enough. But he was even more successful as an entrepreneur—indeed, he set the mold for that classic turn-of-the-century phenomenon, the high-tech multimillionaire. It was Bob Noyce who first demonstrated to the denizens of Silicon Valley that a clever engineer could turn his technical talent into boundless wealth. By 1990, estimates of his net worth ranged toward the billion-dollar mark. Off the job, he restored old airplanes and helped plan Grinnell College’s future and skied and studied Japanese and took up, with maximum energy and commitment, whatever else his far-flung curiosity fastened upon.

 
