The Chip: How Two Americans Invented the Microchip and Launched a Revolution


by T. R. Reid


  “At the time I said it, I had no idea that anybody would expect us to keep doubling [capacity] for even ten more years,” Moore explained nearly four decades later. “If you extrapolated out ten years from then, to 1975, that would mean we’d have 65,000 transistors on a single integrated circuit. It just seemed ridiculous.” By 1975 the industry was producing a new series of memory chips that contained 65,536 transistors. It takes roughly four discrete transistors to store one bit of information, so the 65,000-transistor memory chip stored 16,000 bits. This 16K chip gave way about three years later to the 64K memory chip, with some 258,000 transistors. With only a few hiccups along the way, progress in placing more and more components on a chip has closely followed Moore’s Law ever since. By the first years of the twenty-first century, the state of the art in memory was a 256M chip, which held 256 million bits of information. The 256M circuit contains just over 1 billion transistors, all stacked on a sliver of silicon no bigger than the word “CHIP” on this page.
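
  A minimal sketch of the arithmetic behind that extrapolation, assuming a starting point of 64 components in 1965 (chosen so that ten annual doublings land on the 65,536 figure quoted above) and the rough four-transistors-per-bit rule used in this chapter:

```python
# Back-of-the-envelope version of Moore's 1965 extrapolation.
# The 64-component starting point is an illustrative assumption;
# the four-transistors-per-bit conversion is the rough rule quoted above.

components_1965 = 64          # assumed 1965 chip: 2**6 components
doublings = 1975 - 1965       # one doubling per year, per Moore's original claim

transistors_1975 = components_1965 * 2 ** doublings
bits_1975 = transistors_1975 // 4    # ~4 transistors per stored bit

print(transistors_1975)   # 65536 -- the "ridiculous" 1975 chip
print(bits_1975)          # 16384 -- i.e., the 16K memory chip
```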

  Memory chips tend to be the most crowded of all integrated circuits, because the transistors on those chips can be laid out in neat arrays of identical components. Other types of circuits are a little less heavily populated because their design requires less compact layouts. But these logic chips, too, have roughly followed Moore’s Law, with the number of components doubling at roughly eighteen-month intervals. The Intel 8080 microprocessor that was coming to market about the time Gordon Moore made his prediction contained about 4,000 components. A quarter century later, the 8080’s great-great-great-great-grandson, known as Pentium IV, had ten thousand times as many components—something over 40 million transistors—on a chip roughly the same size.

  Around the time Jack Kilby won the Nobel Prize for the microchip, Gordon Moore took a long look back at the history of the invention and admitted that he is still amazed his own prediction continues to hold. “I still have a tough time believing that we can make these things,” he said. “I’m a person who has been there literally since the beginning, and I know as a technical matter that a billion transistors on a chip is doable. Hell, we’re doing it! But it is still astonishing we have come this far.”

  As fabrication plants turn out millions of chips that contain tens or hundreds of millions of transistors each, Moore—retired from active service at Intel, but still on the board—has spent some of his retirement time calculating just how many transistors are produced in a year. The comparisons he came up with demonstrate mainly how hard it is to imagine the numbers involved. At one point, Moore estimated that the number of transistors was greater than the number of raindrops that fall on California in an average year. Another calculation concluded that the semiconductor industry makes ten transistors per year for every ant on the earth.

  With the steady decline in prices and the steep ascent in capacity since the birth of the microchip, the semiconductor industry has produced the greatest productivity gains in American industrial history. A graph comparing prices and capacity during the first forty years of the chip’s existence makes a nearly perfect X: the price curve angles sharply downward over time, and the capacity curve angles straight up. In the first generation of “solid circuits” back in the early 1960s, the chips were so simple and the prices so high that buyers were paying about $10 per transistor. By the year 2000, $10 would buy two 64-million-bit memory chips, with about half a billion transistors. Clearly a 500-million-fold reduction in price is something special, and consequently it is probably unfair to compare the chip to other industrial products. The temptation is hard to resist, though, and the comparison is frequently made. In a typical version, Gordon Moore suggested what would have happened if the automobile industry had matched the semiconductor business for productivity. “We would cruise comfortably in our cars at 100,000 mph, getting 50,000 miles per gallon of gasoline,” Moore said. “We would find it cheaper to throw away our Rolls-Royce and replace it than to park it downtown for the evening. . . . We could pass it down through several generations without any requirement for repair.”
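
  As a rough check of that price comparison, the same figures can be run through a few lines of arithmetic; the four-transistors-per-bit conversion is the approximation used earlier in this chapter, and "64 million bits" is taken at face value rather than as the binary 64M:

```python
# Back-of-the-envelope check of the price comparison above.
# All figures restate the text; the 4-transistors-per-bit conversion is approximate.

price_per_transistor_1960s = 10.0              # roughly $10 per transistor at first

bits_per_chip_2000 = 64 * 10**6                # one 64-million-bit memory chip
transistors_2000 = 2 * bits_per_chip_2000 * 4  # two chips, ~4 transistors per bit
price_per_transistor_2000 = 10.0 / transistors_2000

reduction = price_per_transistor_1960s / price_per_transistor_2000
print(f"{transistors_2000:,} transistors for $10")    # 512,000,000 -- about half a billion
print(f"reduction of roughly {reduction:,.0f}-fold")  # on the order of 500 million
```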

  Another interested party who found this history hard to believe was Bob Noyce. “Progress has been astonishing, even to those of us who have been intimately engaged in the evolving technology,” he wrote.

  An individual integrated circuit on a chip perhaps a quarter of an inch square now can embrace more electronic elements than the most complex piece of electronic equipment that could be built in 1950. Today’s microcomputer [on a chip], at a cost of $300, has more computing capacity than the first large electronic computer, ENIAC. It is 20 times faster, has a larger memory, is thousands of times more reliable, consumes the power of a light bulb rather than that of a locomotive, occupies 1⁄30,000 the volume and costs 1⁄10,000 as much. It is available by mail order or at your local hobby shop.

  Noyce wrote that in 1977. Naturally, the passage is seriously out of date now. Today’s microcomputer on a chip is vastly more powerful than the model Noyce had in mind, and costs considerably less.

  The dramatic increase in the capacity of a chip also improved circuit performance. Higher-density chips meant less space between components. Less space meant less travel time for signal pulses running from one component to the next, so a smaller circuit was a faster circuit. Similarly, a smaller circuit required less power. Thus the chip makers were offering lower prices, higher capacity, and better performance year after year. In the second half of the 1960s that achievement caught the attention of commercial markets, such as producers of computers and industrial equipment, which had spurned the chip when it first appeared in 1961. By the early 1970s, the government was no longer the leading consumer of chips; that position had been taken over by the computer industry.

  In retrospect, it is easy to see that the integrated circuit was perfectly suited to the digital computer, but the point was less than obvious to computer manufacturers when the chip first came on the market. In the days before the mini- and microcomputer, when computers routinely cost hundreds of thousands of dollars, the development of a new model represented an enormous investment on the manufacturer’s part. Lead times were long; a decision made in 1961 would govern the production of machines that came on the market four years later. As a result, computer design was a conservative science. In the early 1960s no computer builder was willing to take a chance on a completely new type of circuit.

  Thus when IBM brought out a major new line of computers, the System 360, in 1964, the new machine did not use integrated circuits. Still, the System 360 involved a revolutionary concept—it was a family of computers in various sizes and prices that shared the same instruction code, or software, and could thus communicate with one another—which rendered all existing machines obsolete. To fight back, competitors had to come up with something new of their own. They turned to integrated logic circuits. Using the chip, Univac, Burroughs, and RCA turned out machines that were as powerful as the IBM systems but smaller, faster, and cheaper. Brash upstarts like Digital Equipment and Data General entered the market with a new concept—the fully integrated device called the minicomputer. It was about the size of a senior executive’s desk and cost less than $100,000 but matched some of IBM’s big mainframes in computing power. In 1969, IBM bowed to the inevitable and began using chips for all logic circuitry in its computers. Now the chip makers had a market that would dwarf the space and defense business. By 1970, there were more than two dozen American firms turning out integrated circuits; they sold 300 million chips that year. Two years later 600 million chips were sold.

  Logic circuitry, however, represented only one part of the potential computer market. Digital machines need logic gates to manipulate data, but they also need memory units to hold the data. The computers of the 1960s stored data, in the form of binary digits, using an ingeniously simple technique called magnetic core memory. A core memory looks like a tennis net made of fine wires; wherever two wires cross, a small iron wedding ring—the core—is hooked over the intersection. By sending electronic pulses along the right pair of wires, each individual iron core could be magnetized or demagnetized. A magnetized core represented a binary 1; demagnetized, it stood for binary 0. Core memory was fairly bulky; it took about 10 square feet of wire net to store 1,000 bits of information (“bit” is Claude Shannon’s term for a single binary digit, a 1 or a 0). But it was reliable, easy to make, and inexpensive. The wires and the iron cores were dirt cheap, and a complete memory unit needed only a handful of transistors to send out the needed pulses. The most expensive thing about core memory was the labor cost for stringing all those iron rings on the net. This job, done by hand, was eventually farmed out to places like Hong Kong and Mexico, so prices for core memory remained low.

  Some farsighted semiconductor engineers could see the possibility of putting memory onto a chip. The integrated circuit, after all, was a perfect medium for storing binary digits. A chip comprised a large array of switches (transistors), and any switch has memory. The light switch on the wall is a memory unit; it remembers the last thing you did to it, and stays that way—either on or off—until you change the setting. For various technical reasons, semiconductor memories often used more than one transistor to store each binary digit. In one standard memory design, a block of four transistors was used to store each bit. If a signal pulse turned the block on, it stood for 1; if the block of transistors was off, it stood for 0.

  A semiconductor memory chip was ten times smaller than the equivalent core memory unit; since the signal pulses had shorter distances to travel, it was much faster. Through most of the 1960s, however, it was also two or three times more expensive. In 1967 engineers at Fairchild performed the prodigious feat of squeezing 1,024 transistors onto a single integrated circuit. At four transistors per bit, such a circuit could provide storage for 256 bits of information. But a 256-bit memory chip was still more than twice as expensive as a comparable amount of iron core memory. The Fairchild chip was admired in the laboratories but ignored in the market.

  A monograph that appeared in the Proceedings of the Institute of Electrical and Electronics Engineers in 1968 set forth in discouraging detail the economics of the memory business. Just to approach the price of core memory, it said, the semiconductor people would have to come up with a 1,000-bit memory chip. Semiconductor memory would not actually become cheaper until somebody developed a 4,000-bit chip. Four thousand bits on a single chip? Most of the industry looked at those figures and decided that the wisest course would be to forget memory. A pair of engineers at Fairchild—Bob Noyce and his friend Gordon Moore—looked at the same numbers and decided to give it a try.

  By 1968, the men who had formed Fairchild Semiconductor were chafing at the controls imposed by their corporate superiors back east at the Fairchild Camera and Instrument Corporation. Noyce, Moore, and their colleagues knew more about the semiconductor industry, both the technical and the marketing side, than any of the Fairchild directors, but they were constantly forced to follow corporate decrees that seemed downright foolish. During 1967, moreover, the parent company went through a period of turmoil that saw two CEOs hired and fired within six months. To the “Fairchildren” out in California, it seemed obvious that the right man to lead the corporation was their own leader, Bob Noyce. But when this suggestion was passed to the corporate board, the directors could not bring themselves to entrust their established and traditional corporation to a California cowboy. When Noyce was passed over, the Californians gathered and agreed it was time to move on.

  When Noyce, Moore, Andrew Grove, and several others left Fairchild in 1968 to start a new company specializing in semiconductor memories, they were gambling that a memory chip would be easier to make in extremely high densities than the traditional logic chip. A logic circuit, with its assorted gates and pathways, requires a variety of components laid out in complex patterns and a byzantine pattern of leads to connect the parts. A memory chip, in contrast, consisted for the most part of identical transistors lined up in identical blocks, one after another, like blocks of identical tract houses in suburban Levittown. Connections could be provided by a simple network of parallel leads, just like a neat grid of crisscrossing suburban streets. Each block of transistors, like each house in Levittown, could be assigned a unique address. Consequently, the memory circuit would permit random access—that is, the logic circuits could send data to or extract it from any one of 1,000 memory locations without disturbing the other 999.
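
  The layout they were betting on can be pictured with a toy model (an illustration, not the founders' actual circuit design): each one-bit cell sits at the crossing of a numbered row and column, so any single address can be written or read without disturbing its neighbors. The 32-by-32 grid of 1,024 cells is an assumption chosen to match the 1K chip described in the next paragraph:

```python
# Toy model of row/column addressing in a random-access memory.
# The 32 x 32 geometry (1,024 one-bit cells) is an illustrative assumption.

ROWS, COLS = 32, 32
cells = [[0] * COLS for _ in range(ROWS)]   # every cell starts as binary 0

def write_bit(address: int, value: int) -> None:
    row, col = divmod(address, COLS)        # split the address into row and column
    cells[row][col] = value & 1

def read_bit(address: int) -> int:
    row, col = divmod(address, COLS)
    return cells[row][col]

write_bit(513, 1)        # set a single location...
print(read_bit(513))     # 1
print(read_bit(512))     # 0 -- its neighbor is untouched
```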

  The new firm that Noyce, Moore, and the others founded, Intel Corporation, turned out its first high-density memory circuit late in 1968. It held 1,024 bits of data. Using the standard engineering shorthand for 1,000, the letter “K,” this random-access memory chip was called a 1K RAM. Intel rang up a grand total of $2,672 in sales that first year. By 1973, when the 4K RAM came to market, the firm’s sales topped $60 million, and Texas Instruments and several other firms had jumped into the memory business as well.

  Like most other fields of human endeavor, the computer world has its own version of Parkinson’s Law. It is sometimes stated in pure Parkinsonian terms—“Data expands to fill the memory available to hold it”—and sometimes in plainer language—“There’s no such thing as enough memory.” For computer buffs, using one bank of memory is like eating one peanut. The computer business turned out to have a voracious appetite for cheap, fast random-access memory, and the semiconductor business geared up to meet the need. Partly because of the commonly used four-transistor-per-bit storage configuration, memory chips tended to grow by factors of four. A 16K RAM came on the market in 1975. The 64K RAM went on sale five years later. In accordance with Moore’s Law, this growth has continued up to a 256M RAM chip at the start of the twenty-first century.

  In addition to the two types of digital circuits—logic and memory—the late sixties also saw the first significant development of another species of chip, called a linear, or analog, integrated circuit. The linear chips replicated the functions of many traditional electronic circuits—timers, radio transmitters, audio amplifiers, and the like. Such applications put the integrated circuit into a number of noncomputer electronic devices. Some were traditional—the first integrated circuit radio receiver went on the market in 1966—and some wholly new—the cardiac pacemaker, a tiny circuit that gives off small electric pulses at precise intervals, was implanted in a human chest for the first time in 1967. Eventually, integrated electronics were replacing traditional circuitry in everything from elevators to Osterizers.

  Applications for integrated circuits were multiplying rapidly, but not as rapidly as the semiconductor companies were turning out densely integrated new circuits. “We reached a point where we could produce more complexity than we could use,” Gordon Moore said later. Those industry executives who took time to look up from their balance sheets and think about the future could see that supply of electronic circuits was increasing faster than demand. Almost alone among industrial products, moreover, the integrated circuit was a one-time-only sale. There was nothing to break, no moving parts to wear out. Once a chip passed its initial inspection, it would last a lifetime; there was little or no replacement market.

  An exam question that pops up now and then in the nation’s business schools postulates a situation where somebody invents a common product that will never wear out. The famous prototype is the miracle fabric that Alec Guinness invents in the great film The Man in the White Suit—a cloth that never gets dirty, wrinkles, or wears out. In the movie, the new product is at first welcomed with joy; then the dry cleaners, suit makers, and department stores of the world figure out what this breakthrough would do to them. In the end, the miracle fabric is quietly buried. But what if such a product came along in real life—a lifetime light bulb, for example, or permanent razor blade? How should the razor blade industry react? One answer is that the manufacturers should make a quick killing selling the lifetime blade until all conventional blades are replaced—and then go out of business. This answer is not an acceptable one, however, at most business schools, which preach the need for unending growth. The right answer is that the industry should use its ingenuity to create new uses for lifetime razor blades, and cater to a continually expanding market.

  Faced with an ever-expanding supply of a lifetime product, the semiconductor industry at the end of the 1960s picked the right answer. The thing to do was to find new applications and new markets for integrated circuits. Since the chip, up to then, had been sold almost exclusively for government and industrial uses, the obvious new market to shoot for was the largest market of all—the consumer. To maintain its explosive rate of growth, the American semiconductor industry would have to take its revolutionary new product into the American home.

  But how? The only chips that most Americans knew much about were made from potatoes; the very word “semiconductor” was completely alien to the general public. How many homes really needed an interplanetary guidance and navigation system or a $50,000 computer? Extending the microelectronic revolution down to the average consumer loomed as a formidable problem. To solve it, the industry turned to one of its premier problem solvers—Jack Kilby.

 
