The Chip: How Two Americans Invented the Microchip and Launched a Revolution


by T. R. Reid


  Three months later, everyone met again in a lab in Palo Alto to hear Fairchild’s response. There was not a great deal the Fairchild people could say on the basic question, priority of invention. On that point, Kilby was a clear winner. Nonetheless, before the session ended, Noyce’s lawyers had dropped a bombshell.

  During the long months of preliminary backing and filling, Fairchild’s trusted patent attorney, John Ralls, had died. His place was taken by Roger Borovoy, a junior member of Ralls’s firm who had caught Noyce’s eye and eventually won his confidence. A lot of young lawyers might have found Borovoy’s position somewhat daunting. In his first big case he was litigating against Mosher, a titan of the bar, and the facts on the crucial issue were all on Mosher’s side. As Borovoy saw it, however, he was sitting pretty. “Here I was,” he recalled later, clearly savoring the memory, “a punk kid, defending the most important electronics patent in twenty-five years and Mo Mosher opposing me. Fantastic, right?”

  Since he had no reasonable defense to Kilby’s claim of priority, Borovoy decided to go on the offensive. He pored over Kilby’s patent application. What could he attack? Where was the weak spot? And of course, he found one: the flying wire picture. By 1964, when Borovoy took over the case, the industry had largely determined what an integrated circuit would look like; it didn’t look anything like the drawing in Kilby’s patent application. Focusing on that drawing, Borovoy drew up an offensive strategy. If he could discredit Kilby’s application because of its weakness on interconnections, Noyce would be left with the only valid patent for an integrated circuit. Of course, Kilby’s application also contained that last-minute paragraph explaining how, in place of the flying wires, “conducting material such as gold may then be laid down on the insulating [oxide] to make . . . connections.” To win the case, Borovoy knew, he would have to find something wrong with that paragraph.

  Accordingly, when the litigants assembled at Palo Alto to hear the Fairchild testimony, Borovoy brought forward an expert witness—an electrical engineering professor at Stanford—who declared that no one could build an integrated circuit by following the instructions in Kilby’s application. The hand-wired circuit in the picture was obviously wrong. For that matter, the business about laying down gold on an oxide was faulty, too. You can lay down gold on oxide, the expert testified, but “it will not stick.”

  Under Borovoy’s gentle prodding, the expert contrasted Kilby’s language—“laid down on”—with the wording of Noyce’s patent, which said the connection material had to be “adherent to” the oxide layer. “Laid down on” had no clear meaning, the expert said. “Adherent to,” in contrast, was a precise technical term. On that fine distinction Fairchild would have to rest its case.

  A month later, when inventors and lawyers gathered again at a lab in Dallas to hear Texas Instruments’ rebuttal, Mosher produced an expert of his own. This was an engineer from Kilby’s alma mater, the University of Illinois, and he thoroughly disagreed with the Stanford man. Gold will stick to an oxide layer, he said, so there was no practical difference between “laid down on” and “adherent to.” With this testimony, both sides had had their say. All the expert testimony had consumed six more months, but the stage was set for final resolution.

  Actually, it was not quite set. First, there were a few more procedural battles to be fought:

  “Request for Sur-Rebuttal Testimony”

  “Opposition to Request for Sur-Rebuttal Testimony and Conditional Request for Sur-Sur-Rebuttal Testimony”

  “Reply to Request for Suspension of Action on Request for Leave to Take Sur-Rebuttal Testimony and on Conditional Motion to Take Sur-Sur-Rebuttal Testimony”

  Next, the lawyers had to argue the case before the Board of Patent Interferences. In oral argument and in their written briefs, both sides gave most of their attention to the interconnections question. Borovoy’s brief included an oversize copy of the flying wire picture. “Note that this drawing shows no oxide layer and no gold wires ‘laid down on’ any such layer,” the brief said. “In fact, it is readily apparent that the gold wires are anything but ‘laid down.’ ”

  Six months later, however, on February 24, 1967, when the board issued its opinion, it brushed all that aside. After reviewing the experts’ disagreement over “laid down on” and “adherent to,” the board observed that “we are not particularly impressed with that testimony.” As the board saw it, Kilby’s patent application, while not perfect, was clear enough. That left only the question of which inventor was first: “Since Noyce took no testimony to establish any date prior . . . Kilby must prevail.” Eight years after he had filed his patent application, Jack Kilby had been adjudicated the inventor of the integrated circuit. The stage was now set for Texas Instruments to wield its sword against the rest of the electronics industry.

  Actually, it was not quite set. Any American who is unhappy with a federal agency’s decision has the right to appeal, and Fairchild exercised that right. A year was devoted to the preparation of briefs and the filing of motions, and in the fall of 1968, Mosher and Borovoy appeared before the Court of Customs and Patent Appeals to argue all the issues once again. Another year passed. On November 6, 1969, the court issued its opinion. This time, the decision dealt exclusively with the difference between “laid down on” and “adherent to.” The judges had found Roger Borovoy’s argument appetizing—and swallowed it whole. “Kilby has not demonstrated,” the opinion said, “that the term ‘laid down’ had . . . or has since acquired a meaning in electronic or semiconductor arts which necessarily connotes adherence.” In ignoring the difference between the crucial phrases, the appeals court said, the interference board was “clearly in error.” The board’s opinion was reversed. The Borovoy ploy had worked. Once again, Robert Noyce was officially recognized as the inventor of the microchip.

  Now it was Mosher’s turn to appeal. Six months after the court of appeals’ opinion, he filed a brief in the U.S. Supreme Court, asking the justices to review the opinion. Six months later, the Court issued a terse reply to Mosher’s request: “Denied.” Ten years and ten months after Jack Kilby had first applied for his patent, the case of Kilby v. Noyce had come to an end. Noyce had won. Now the stage was set for Fairchild to exploit its patent.

  Actually, it was not set. During the decade that the lawyers had been waging their battle, the integrated circuit had emerged as the most important new product in the history of electronics. The market grew explosively. By the time the last court had issued the last ruling, production of semiconductor chips was a multibillion-dollar industry. As a result, the legal right to this invention had become too important to be left to lawyers.

  And so, in the summer of 1966, before the first opinion was issued, executives from Texas Instruments, Fairchild, and about a dozen other electronics firms had held a summit meeting and cut a deal. TI and Fairchild each conceded that the other had some right to the historic invention. The two companies agreed to grant licenses to each other for integrated circuit production. Any other firm that wanted to enter the market then had to arrange separate licenses with both Texas Instruments and Fairchild. The two firms generally demanded a royalty fee ranging from 2 to 4 percent of the licensee’s profit from chip production. This agreement provided the other firms a means to enter the integrated circuit business; it has provided TI, Fairchild, and Fairchild’s successor companies with hundreds of millions of dollars in royalties over the years.

  Meanwhile, there was yet another court that had to weigh in—the court of professional opinion. That ruling was arguably more important than anything the Supreme Court could say about patent rights to the microchip. Here was an invention, after all, of transcendent importance, and it could clearly be traced to one of two specific men. That was unusual. Most contemporary innovations seem to emerge from vast, faceless research labs or from giant corporate R&D operations. The ideas tend to have a shared parentage that extends over a broad range of individuals. It’s hard to name, for example, the inventor of the cellular phone, or the Internet, or AZT, global positioning satellites, digital cameras, artificial knees, vitamin pills, Viagra. (Actually, Pfizer Pharmaceuticals did file patents for Viagra, naming five specific scientists as the inventors. But Pfizer now says those five were merely representative of the vast research team that developed the penile dysfunction pill.)

  The microchip was different. Two young men had independently hit on the monolithic idea at about the same time. Both worked in corporate settings that made it possible to bring the idea to fruition quickly. Because both had been trained to record and date their ideas in lab notebooks, there was documentary proof of their inventions. So who would get the credit? Which one would go down in history as the Man Who Made the Microchip?

  The question could fairly easily have become an ugly and extended point of contention. In fact, it never did. This salutary result is largely due to the nature of the two inventors. Both were decent, fair-minded people. Both were men who got more pleasure from the sheer joy of inventing than from public acclaim for their inventions. Both Jack Kilby and Bob Noyce were far more comfortable at the lab table, working out some technical problem, than they were at the head table of some gala banquet held in their honor. Accordingly, almost from the beginning, both engineers were generous about recognizing the work of the other guy. Taking a cue from the two inventors, the scientific and engineering communities agreed to agree that Kilby and Noyce deserved joint credit for the monolithic idea.

  Both Kilby and Noyce were awarded the National Medal of Science for overcoming the tyranny of numbers, and both were inducted into the National Inventors Hall of Fame as inventors of the integrated circuit. In the engineering textbooks, Kilby gets credit for the idea of integrating components on a chip, and Noyce for working out a practical way to connect those components. Among their fellow engineers, Kilby and Noyce are referred to as co-inventors of the chip, a term that both men found satisfactory. After Noyce’s death in 1990, Jack Kilby kept getting awards, including the Kyoto Prize, the Japanese version of the Nobel Prize (in 1993), and then the Nobel Prize in Physics (in 2000). On both occasions, Kilby pointed out that “Bob Noyce of Fairchild developed a similar idea, along with a practical means of manufacturing it.”

  So the praise as well as the profit for this groundbreaking idea was to be shared. On the day of Fairchild’s great courtroom victory, consequently, hardly anybody paid any attention. After ten years, tens of thousands of pages, and well over a million dollars in legal fees, the legal labors had brought forth an utterly inconsequential mouse. “Patent Appeals Court Finds for Noyce on IC’s” began the headline over a small story reporting the decision in the trade journal Electronic News. “IC Patent Reversal Won’t Change Much.”

  6

  THE REAL MIRACLE

  The integrated circuit made its debut before electronic society at the New York Coliseum on March 24, 1959. The occasion was the industry’s most important yearly get-together—the annual convention of the Institute of Radio Engineers. Texas Instruments had managed, in the nick of time, to turn out a few chips that had no flying wires, and there was a lavish display at the TI booth featuring the new “solid circuits.” There was also a lavish prediction (which we know today to have been a massive understatement) from TI’s president, who said that Jack Kilby’s invention would prove to be the most important and most lucrative technological development since the silicon transistor. Nonetheless, the new circuit-on-a-chip received a frosty reception.

  “It wasn’t a sensation,” Kilby recalls dryly. There were about 17,000 electronic products on display at the convention (the Coliseum used a million watts of power daily during the gathering), and large numbers of them attracted more attention than the integrated circuit. There were hundreds of reporters on hand, and virtually all of them managed to miss the biggest story of the week. In its special issue on the convention, Electronics magazine, which was supposed to recognize important new developments in the field, offered breathless reports on such innovations as a backward-wave oscillator and a gallium arsenide diode, but made no mention of the integrated circuit. In a wrap-up two weeks later, Electronics devoted a single paragraph to Texas Instruments’ new “match-head size solid-state circuit.”

  “There was a lot of flak at first,” Kilby recalls, and indeed, what little comment the new device received was largely critical. The critics identified three basic problems with the integrated circuit. In the first place, the idea of making resistors and capacitors out of silicon flew in the face of decades of research that had established conclusively that nichrome was the optimum material for making resistors, Mylar for capacitors. Monolithic circuits of silicon would be inherently inferior. In the second place, integrated circuits would be hard to make; one common line of analysis held that 90 percent of each production batch of chips would be faulty. In the third place, the whole concept posed a threat to an important segment of the engineering community. If component manufacturers like Texas Instruments started selling complete circuits to computer manufacturers, the circuit designers at computer firms would become redundant—and unemployed.

  “These objections were difficult to overcome,” Kilby wrote later, “because they were all true.” As a result, the giants of the industry—Sylvania, Westinghouse, and their ilk—carefully kept themselves clear of the business for several years. This untimely burst of caution opened the way for upstarts like Texas Instruments, Fairchild, and a slew of new firms in Silicon Valley to work out the problems and cash in on the revolution. With intensive research, the hungry young companies learned how to design circuits on the chips that circumvented the shortcomings of silicon components; they found new production techniques that overcame the initial manufacturing difficulties. The result has been American industry’s greatest postwar triumph. The integrated circuit, a child of Texas and California, has swept the world and spawned a furiously competitive global market. At the start of the twenty-first century, annual sales of integrated circuits were close to $200 billion; the market for digital devices dependent on the chip was well over $1 trillion per year. Just a few years after its unspectacular coming-out party, the integrated circuit caught the attention of the press and became known as the “miracle chip.” Today, the miraculous has become normal, and the chip is ubiquitous. The average home in any developed country contains thousands of integrated circuits; the average garage has hundreds more. The 24,800 man-made objects presently floating in space are crammed with millions of integrated circuits and would not be up there if they weren’t.

  The integrated circuit was an enormous success because it solved an enormously important problem—the tyranny of numbers. But the success story was also a matter of timing. The chip was born just when the computer was starting to grow up. When Kilby and Noyce made their intellectual breakthrough, the computer was right on the verge of becoming an essential tool for agencies and companies with major number-crunching requirements—banks, insurance companies, the Social Security system, etc. But even in the 1950s, a few visionaries were talking about the concept of a “personal” computer—a computer in every home, or a computer on every wrist. Chips were perfect tools to implement the digital math and logic that computers use, and they were small enough to permit the computer to shrink without losing capacity. The chip and the computer went together like the horse and carriage—or, more aptly, like the oil industry and the auto industry that sprang up together at the start of the twentieth century. “The synergy between a new component and a new application generated an explosive growth for both,” Bob Noyce wrote in a retrospective article two decades after the monolithic idea was born. “The computer was the ideal market . . . a much larger market than could have been provided by the traditional applications of electronics in communications.”

  In traditional circuitry, involving discrete components wired together, resistors and capacitors were cheap, but switching components such as vacuum tubes and transistors were relatively expensive. This situation was nicely suited to the manufacture of radios, television sets, and the like; an ordinary table radio of the 1950s used two or three dozen capacitors and resistors but only two or three transistors. With integrated circuits, the traditional economies were reversed. Resistors and capacitors, which use up power and take up a lot of room on a chip, became expensive; transistors were compact, simple to make, and cheap. That situation is precisely suited for computers and other digital devices, which need large numbers of switches—transistors—and small quantities of other components.

  The innards of computers, calculators, cell phones, digital cameras, etc., consist of chips containing long chains of transistors that switch back and forth to manipulate information. Like a light switch on the wall, these transistors can be either on or off; there’s nothing in between. Since there are only two possible conditions, a computer has to reduce every job, every decision, every computation to the simplest possible terms: on or off, yes or no, stop or go, one or zero. Humans can do the same thing, of course. We do it on Easter morning when the kids look for hidden eggs and their parents provide only two clues: “You’re hot” or “You’re cold.” Eventually, most of the eggs are found, but the process is so tedious that even the kids get fed up with it pretty quickly. Computers, in contrast, use this tedious system all day, every day. They have to. They can’t handle anything else.
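
  (An editorial aside, not part of Reid’s narrative: the reduction he describes can be sketched in a few lines of Python. The snippet below, a minimal illustration assuming nothing beyond the standard language, treats an ordinary number as nothing more than a row of on/off switches, which is all a chip ever stores.)

      # Illustrative sketch: reduce a number to on/off switches and back.
      def to_switches(n, width=8):
          """Reduce an integer to a list of on/off states, most significant bit first."""
          return [bool((n >> i) & 1) for i in reversed(range(width))]

      def from_switches(bits):
          """Recover the integer from the same row of switches."""
          value = 0
          for bit in bits:
              value = (value << 1) | int(bit)
          return value

      print(to_switches(13))                 # [False, False, False, False, True, True, False, True]
      print(from_switches(to_switches(13)))  # 13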

  For all the mystique of “electronic brains” and “artificial intelligence,” digital devices are actually mindless dullards that rely on computational techniques mankind abandoned in Neanderthal days. Digital problem solving involves simple math—far simpler than the stuff humans learn in grade school. A computer approaches every problem like a child counting on his fingers, but the computer counts as if it had only one finger. (The word “digital” comes from the Latin digitus, meaning “a finger.”) The real miracle of the “miracle chip” is that people have devised ways to manipulate this one minimal skill so that machines can carry out complex operations.
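
  (Again an editorial illustration rather than the author’s own: the “one finger” arithmetic Reid describes can be built up explicitly. The Python sketch below assembles multi-digit addition from nothing but single-bit AND, OR, and XOR operations, chained together the way a ripple-carry adder on a chip chains its switching transistors.)

      # Illustrative sketch: a ripple-carry adder built from one-bit logic.
      def full_adder(a, b, carry_in):
          """Add three single bits using only AND, OR, XOR. Returns (sum_bit, carry_out)."""
          sum_bit = a ^ b ^ carry_in
          carry_out = (a & b) | (carry_in & (a ^ b))
          return sum_bit, carry_out

      def add(x, y, width=8):
          """Chain full adders bit by bit; complex arithmetic from one minimal skill."""
          result, carry = 0, 0
          for i in range(width):
              s, carry = full_adder((x >> i) & 1, (y >> i) & 1, carry)
              result |= s << i
          return result

      print(add(13, 29))   # 42, computed one on/off decision at a time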

 
