The Chip: How Two Americans Invented the Microchip and Launched a Revolution


by T. R. Reid


  Despite its inauspicious debut at the electronics convention, the monolithic idea, conceived just as the digital computer was growing up, was destined to be a spectacular success. With integrated circuitry, the neat patterns of Boolean logic could be mapped directly onto the surface of a silicon chip; an entire addition circuit would now take up less space and consume less power than a single transistor did in the days of discrete components. With the advent of the chip, the digital computer had finally become as elegant in practice as it was on paper.
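  (For readers who want to see what "mapping Boolean logic onto silicon" amounts to, here is a rough sketch of a one-bit "full adder," the building block of an addition circuit, written in Python rather than wired in silicon. The arrangement of AND, OR, and XOR operations is the standard textbook one, not a diagram of any particular chip.)

```python
# A sketch of the Boolean logic behind an addition circuit (illustrative only).
# On a chip, each of these operations is a handful of transistors.

def full_adder(a: int, b: int, carry_in: int) -> tuple:
    """Add three one-bit inputs; return (sum_bit, carry_out)."""
    sum_bit = a ^ b ^ carry_in                      # XOR gates
    carry_out = (a & b) | (carry_in & (a ^ b))      # AND and OR gates
    return sum_bit, carry_out

def add_4bit(x: int, y: int) -> int:
    """Chain four full adders to add two 4-bit numbers, bit by bit."""
    result, carry = 0, 0
    for i in range(4):
        s, carry = full_adder((x >> i) & 1, (y >> i) & 1, carry)
        result |= s << i
    return result

assert add_4bit(5, 6) == 11
```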

  7

  BLASTING OFF

  The first integrated circuits proved so hard to produce that nearly two years passed after the chip’s public debut before the new device was available for sale. Fairchild was first off the block; its catalogue for spring 1961 trumpeted a new line of six different monolithic circuits which it called “Micrologic elements.” A few weeks later Texas Instruments entered the fray with a similar series of “solid circuits.” As the two companies rather stridently pointed out, the new circuit-on-a-chip was smaller, lighter, faster, more power efficient, and more reliable than any conventional circuit wired together from discrete parts.

  It was also more expensive. A “Micrologic” logic gate circuit, containing three or four transistors and another half dozen diodes and resistors, was initially priced at $120. An equipment manufacturer could wire together a circuit using top-of-the-line transistors for less than that, even after labor costs were figured in. It was as if an automobile company had designed a family station wagon that could go 500 miles per hour—and cost $150,000. Who needed it? “There was the natural reluctance to commit to something new,” Bob Noyce recalled later. “And added to that you had a price that was basically uneconomical. So at first the traditional electronics customers just weren’t buying.”

  This posed a fairly serious problem. Even more than most industries, electronics firms rely heavily on an economic phenomenon known as the learning curve. In the early life of a new product—when manufacturers are still learning how to design and produce the device at a reasonable cost—prices are necessarily high. As sales increase, better production techniques are developed, and prices curve sharply downward. The integrated circuit in early 1961 was stalled at the high end of the curve; there was no commercial market to push it down. As Noyce recalled, the chip seemed to be caught in a classic commercial Catch-22. Until the market picked up, the price would remain high; but as long as prices stayed high, the traditional electronics markets weren’t interested.
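  (To make the learning-curve argument concrete, here is a minimal sketch using the standard textbook model, sometimes called Wright's law, in which each doubling of cumulative output cuts unit cost by a fixed fraction. The 20 percent rate is an assumed figure for illustration, and the $120 starting price is simply the Micrologic example above; neither is historical industry data.)

```python
# Learning-curve sketch (illustrative assumptions, not historical figures).
import math

def unit_cost(first_unit_cost: float, cumulative_units: float,
              cost_drop_per_doubling: float = 0.20) -> float:
    """Cost of the Nth unit when each doubling of output cuts cost by a fixed fraction."""
    exponent = math.log2(1.0 - cost_drop_per_doubling)   # a negative exponent
    return first_unit_cost * cumulative_units ** exponent

# With little volume the price barely moves; at high volume it plunges.
for n in (1, 100, 10_000, 1_000_000):
    print(f"{n:>9,} units -> ${unit_cost(120.0, n):8.2f} per circuit")
```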

  And then, virtually overnight, the President of the United States created a new market.

  In May 1961—a time when, as The New York Times noted, “there was a strong catch-the-Russians mood in Washington”— John F. Kennedy went before a joint session of Congress to propose “an extraordinary challenge.” “I believe we should go to the moon,” the president said. “. . . I believe that this nation should commit itself to achieving the goal, before this decade is out, of landing a man on the moon and returning him safely to earth. No single space project in this period will be more impressive to mankind. . . . And none will be so difficult or expensive to accomplish.”

  A 480,000-mile round trip to the moon was indeed a challenge of extraordinary dimensions for a nation whose greatest space achievement so far had been Alan Shepard’s 15-minute, 302-mile suborbital flight in the spacecraft Freedom 7. (John Glenn, the third American in space, did not make his three-orbit trip until nine months after Kennedy’s speech.) A successful lunar voyage would require major advances in rocketry, metallurgy, communications, and other fields. Among the most difficult problems were those the space experts called G & N—that is, guidance and navigation.

  The trick of steering a fast-moving spaceship from a fast-moving planet through two different atmospheres and two different gravitational fields to a precise landing on a fast-moving satellite would require an endless series of instantly updated calculations—the kind of work only a computer could do. But a computer on a spacecraft would have to be smaller, lighter, faster, more power-efficient, and more reliable than any computer in existence. In short, somebody needed that 500-mph station wagon. And with the prestige of the nation at stake, high prices were no problem. “The space program badly needed the things that an integrated circuit could provide,” Kilby said later. “They needed it so badly they were willing to pay two times or three times the price of a standard circuit to get it.”

  The G & N system was so crucial to the moon shot that the assignment for its development was the first major prime contract awarded after Kennedy’s speech. There was no question that the computer would have to be built from integrated circuits, and Fairchild quickly started receiving large orders for its Micrologic chips. By the time the Eagle had landed at Tranquillity Base on July 20, 1969—meeting the late president’s challenge with five months to spare—the Apollo program had purchased more than a million integrated circuits.

  Two decades later, when the American semiconductor industry was facing an all-out battle with Japanese competitors, U.S. electronics companies complained loudly that Japanese firms had an unfair advantage because much of their development funds were provided by the government in Tokyo. On this point, the American manufacturers lived in glass houses. The government in Washington—specifically, the National Aeronautics and Space Administration and the Defense Department—played a crucial role in the development of the American semiconductor industry. The Apollo project was the most glamorous early application of the chip, but there were numerous other rocket and weapons programs that provided research funds and, more important, large markets when the chip was still too expensive to compete against traditional circuits in civilian applications. A study published in 1977 reported that the government provided just under half of all the research and development money spent by the U.S. electronics industry in the first sixteen years of the chip’s existence. Government sales constituted 100 percent of the market for integrated circuits until 1964, and the federal government remained the largest buyer of chips for several years after that.

  The military had started funding research on new types of electric circuits in the early 1950s, when the tyranny of numbers first emerged. The problems inherent in complex circuits containing large numbers of individual components were particularly severe in defense applications. Such circuits tended to be big and heavy, but the services needed equipment that was light and portable. “The general rule of thumb in a missile was that one extra pound of payload cost $100,000 worth of extra fuel,” Noyce recalled. “The shipping cost of sending up a 50-pound computer was too high even for the Pentagon.” Further, space-age weapons had to be absolutely reliable—a goal that was inordinately difficult to achieve in a circuit with several thousand components and several thousand hand-soldered connections. When the Air Force ordered electronic equipment for the Minuteman I, the first modern intercontinental ballistic missile, specifications called for every single component—not just every radio but every transistor and every resistor in every radio—to have its own individual progress chart on which production, installation, checking, and rechecking could be recorded. Testing, retesting, and re-retesting more than doubled the cost of each electronic part.

  In classic fashion, the three military services went off in three different directions in the search for a solution. The Navy focused on a “thin-film” circuit in which some components could be “printed” on a ceramic base, somewhat reducing the cost and size of the circuit; Jack Kilby worked on this idea for a while during his years in Milwaukee at Centralab. The Army’s line of attack centered on the Micro-Module idea—the Lego block system in which different components could be snapped together to make any sort of circuit. Kilby worked on that one for a few days when he first arrived at Texas Instruments.

  The Air Force, with a growing fleet of missiles that posed the most acute need for small but reliable electronics, came up with the most drastic strategy of all. It decided to jettison anything having to do with conventional circuits or conventional components and start over. The Air Force program was called “molecular electronics” because the scientists thought they could find something in the basic structure of the molecule that would serve the function of traditional resistors, diodes, etc. Bob Noyce brushed up against molecular electronics early in his career. “The idea of it was, well, you lay down a layer of this and a layer of that and maybe it will serve some function,” Noyce said later. “It was absolutely the wrong way to solve anything. It wasn’t built up from understandable elements. It didn’t start with fundamentals because they were rejecting all the fundamentals. It was pretty clearly destined for failure.” The Air Force wasn’t listening. With strong lobbying from the generals, molecular electronics won the ultimate bureaucratic seal of approval—a line item of its own in the federal budget. Congress eventually appropriated some $5 million in research funds. Nothing came of the idea.

  Each service, naturally, was eager to see its own approach prevail. All three services, consequently, were somewhat taken aback when they learned, in the fall of 1958, that a fellow named Kilby at Texas Instruments had worked up a solution to the numbers problem that was neither Army nor Navy nor Air Force.

  The military services learned of Kilby’s new monolithic circuit as soon as the people at Texas Instruments had tested the first chip and found that it worked. “TI had always followed a strategy of getting the Pentagon to help with development projects,” Kilby explained later. “So sometime in the fall of 1958, Willis [Adcock] and I started telling the services what we had.” The Navy wasn’t interested. The Army agreed to provide funding, but only to prove that Kilby’s new integrated circuit was “fully compatible” with the Micro-Module. “Well, it wasn’t a Micro-Module at all,” Kilby recalled. “But that was okay. It gave us some money to work with, and we didn’t care what they called it. If they wanted it green, we’d paint it green.” Hoping to supplement the modest Army grant, Adcock and Kilby spoke to the Air Force. “They weren’t interested,” Kilby said later. “Our circuit had the traditional components, resistors, and the like, and their approach wasn’t going to have any of that traditional jazz.” Despite the initial rejection, Adcock wouldn’t give up. For months he argued his case, and eventually he found a colonel who was starting to lose faith in the cherished notion of molecular electronics. In June 1959 the Air Force agreed to help out, a little bit. Somewhat grudgingly, the service coughed up just over $1 million for developmental work on the chip, a piddling amount for a major new electronics project. (Years later, the Air Force’s public relations wing put out a book on microelectronics: “The development of integrated circuits is, in large part, the story of imaginative and aggressive leadership by the U.S. Air Force.”)

  Events followed a different course at Fairchild, largely because Bob Noyce had different ideas about Pentagon-funded research. Noyce had worked on some defense research and development projects when he was a young engineer at Philco, and the experience left a sour taste that never went away. It wasn’t fair, he thought—it was “almost an insult”—to ask a competent, creative engineer to work under the supervision of an Army officer who had at best a passing familiarity with electronics. The right way for the private sector to carry out research, Noyce felt strongly, was with private money. If this research happened to produce something useful for the military, fine, but Noyce did not want his engineers restricted to military research or bound by the confines of a defense development contract.

  And so Fairchild developed the monolithic idea into a marketable commodity using its own funds. Noyce readily conceded, though, that the company was willing to do so in considerable part because of potential sales to the military market. “The missile program and the space race were heating up,” Noyce said. “What that meant was there was a market for advanced devices at uneconomic prices . . . so there was a lot of motivation to produce this thing.”

  In addition to the Apollo program, several new families of nuclear missiles provided large early markets for integrated-circuit guidance computers. The designers of Minuteman II, the second-generation ICBM, decided in 1962 to switch to the chip. With that decision, which led to $24 million in electronics contracts over the next three years, the integrated circuit took off. Texas Instruments was soon selling 4,000 chips per month to the Minuteman program, and Fairchild, too, landed important Minuteman contracts. Soon thereafter the Navy began buying integrated circuits for its first submarine-launched ballistic missile, the Polaris. By the mid-sixties, chips were routinely called for in specifications for a large variety of military electronic gear—not only G & N computers but also telemetry encoders, infrared trackers, loran receivers, avionics instruments, and much more. NASA’s IMP satellite, launched late in 1963, was the first space vehicle to use integrated electronics, and thereafter chips became the circuits of choice in satellites and other space endeavors. About 500,000 integrated circuits were sold in 1963; sales quadrupled the next year, quadrupled again the year after that, and quadrupled again the year after that.

  The burgeoning government sales not only provided profits for the chip makers but also conferred respectability. “From a marketing standpoint, Apollo and the Minuteman were ideal customers,” Kilby said. “When they decided that they could use these solid circuits, that had quite an impact on a lot of people who bought electronic equipment. Both of those projects were recognized as outstanding engineering operations, and if the integrated circuit was good enough for them, well, that meant it was good enough for a lot of other people.”

  One of the major pastimes among professional economists is an apparently endless debate as to whether military-funded research helps or hurts the civilian economy. As a general matter, there seem to be enough arguments on both sides to keep the debaters fruitfully occupied for years to come. In the specific case of the integrated circuit, however, there is no doubt that the Pentagon’s money produced real benefits for the civilian electronics business—and for civilian consumers. Unlike armored personnel carriers or nuclear cannon or zero-gravity food tubes, the electronic logic gates, radios, etc., that space and military programs use are fairly easily converted to earthbound civilian applications. The first chip sold for the commercial market—used in a Zenith hearing aid that went on sale in 1964—was the same integrated amplifier circuit used in the IMP satellite. For the Minuteman II missile, Texas Instruments had to design and produce twenty-two fairly standard types of circuits in integrated form; every one of those chips was readily adaptable to civilian computers, radio transmitters, and the like. A large number of the most familiar products of the microelectronic revolution, from the busy businessman’s pocket beeper to the Action News Minicam (“film at eleven”), resulted directly from space and military development contracts.

  The government’s willingness to buy chips in quantity at premium prices provided the money the semiconductor firms needed to hone their skills in designing and producing monolithic circuits. With their earnings from defense and space sales, Fairchild, Texas Instruments, and a rapidly growing list of other companies developed elaborate manufacturing facilities and precise new techniques for making chips. As experience taught ways to solve the most common production problems, the cost of making a chip began to fall. By 1964 the initial manufacturing base was in place, and the integrated circuit started flying down the learning curve with the speed of a lunar rocket in reentry. In 1963 the price of an average chip was about $32. A year later the average price was $18.50, a year after that $8.33. By 1971, the tenth anniversary of the chip’s arrival in the marketplace, the average price was $1.27. By the year 2000, a chip with the capacity of those 1971 models would sell for a nickel or less.
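  (A quick arithmetic check on those figures: compounding the fall from about $32 in 1963 to $1.27 in 1971 over eight years implies that the average price dropped by roughly a third every year.)

```python
# Arithmetic check on the price figures quoted above.
prices = {1963: 32.00, 1964: 18.50, 1965: 8.33, 1971: 1.27}

years = 1971 - 1963
annual_change = (prices[1971] / prices[1963]) ** (1 / years) - 1
print(f"Implied average price change: {annual_change:.0%} per year")  # about -33%
```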

  While prices were falling, capability soared. Year after year, buyers of integrated circuits got more product for less money. Manufacturers learned how to cram more and more components onto a single chip. This achievement was partly a matter of design; complex circuits had to be laid out on the tiny flake of silicon so that each individual component could perform its function without interfering with any of the other components squeezed alongside. The chief technical obstacle to high-density chips, though, was production yield. The more components printed on a chip, the greater the chance that one of those components would have a defect. One defective transistor could render the entire integrated circuit worthless. “A single speck of dust is huge compared to the components in a high-density circuit,” Bob Noyce said. “One dust particle will easily kill a whole circuit. So you’ve got to produce the thing in a room that is absolutely free of dust. You’ve got to build in thousands of [connecting] leads that are finer than a human hair, and every one of them has to be free of any defect. Well, how do you build a room that’s free of dust? And how do you print a lead that is essentially perfect? We had to learn over time how to do things like that.” Over time, the industry developed the “negative pressure” fabrication room—with a steady suction taking air, and dust, out of the room. The white nylon “bunny suit” that fab workers wear to prevent contamination has become a symbol of the microchip industry. And the machinery that “prints” circuitry onto CD-size “wafers” of silicon is so complex and so precise that a single photolithography unit costs tens of millions of dollars.
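  (A minimal sketch of the yield arithmetic Noyce is describing: if each component independently escapes defects with some probability, the chip works only when every component does, so yield collapses as the component count grows. The 99.9 percent survival rate below is an assumed number chosen for illustration, not an industry figure.)

```python
# Yield sketch (illustrative assumption: each component survives fabrication
# with probability 0.999, independently of the others).

def chip_yield(per_component_survival: float, components: int) -> float:
    """Probability that a chip with `components` parts has no defective part."""
    return per_component_survival ** components

for n in (10, 100, 1_000, 10_000):
    print(f"{n:>6} components -> {chip_yield(0.999, n):6.1%} of chips usable")
```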

  But as the industry learned how to operate at ever-tinier dimensions, it found itself in a delightful position. A chip containing 10,000 components required no more silicon, and not much more labor, than one with only 5,000 components. It was as if a fast-food stand had found a way to turn out two burgers using the same amount of meat and bread that it had previously used for one. The semiconductor industry found ways to double capacity over and over again. Noyce’s friend and colleague Gordon Moore was asked in 1964, when the most advanced chips contained about 60 components, to predict how far the industry would advance in the next decade. “I did it sort of tongue-in-cheek,” Moore recalled later. “I just noticed that the number of transistors on a chip had doubled for each of the last three years, so I said that rate would continue.” To his dismay, that off-the-cuff prediction was widely quoted and soon came to be known as Moore’s Law. This industrial “law” has developed all sorts of variants through the decades, but in its most common form, Moore’s Law holds that the number of transistors on the most advanced integrated circuit will double every eighteen months or so. To Moore’s astonishment, the law has held true all the way to the twenty-first century.
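  (The arithmetic behind the prediction is simply compound growth. In the sketch below, the starting figure of 60 components in 1964 and the doubling periods are the ones quoted above; the outputs are projections from steady doubling, not actual transistor counts.)

```python
# Moore's Law as compound doubling (projections only, not real chip data).

def projected_count(start_count: float, years_elapsed: float,
                    doubling_period_years: float = 1.5) -> float:
    """Transistor count after `years_elapsed` of steady doubling."""
    return start_count * 2 ** (years_elapsed / doubling_period_years)

# Moore's original back-of-the-envelope: about 60 components in 1964,
# doubling every year (period = 1.0) for the next decade.
print(f"{projected_count(60, 10, doubling_period_years=1.0):,.0f}")  # ~61,440

# At one doubling every eighteen months, 1964 to 2000 spans 24 doublings,
# a growth factor of roughly 17 million.
print(f"{2 ** ((2000 - 1964) / 1.5):,.0f}")  # 16,777,216
```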

 
