TOWARD THE END OF 1969, Noyce and Rock began to talk about taking Intel public, going so far as to split the stock four for seven in an attempt to get the price per share in the $20–$30 range most investors preferred. Both men, however, had significant concerns that taking a young company public would mean subjecting it to stockholders’ demands for short-term profits at a time when Intel needed to invest for long-term growth. Noyce had another worry. He thought that the public’s voracious appetite for semiconductor issues would unrealistically inflate the firm’s market cap, and thereby sever any clear connection between the employees’ work and the value of the company. “A company worth $5 million could be bid up on the market by unsophisticated bidders to $50 million,” he explained. If the value of the company did not “follow the real progress of its growth,” Noyce said, “the value of the employees’ stock options would not correspond to their performance.” In other words, he worried that employees would find it demoralizing to get rich from a stock bubble.4
Rock and Noyce managed to arrange a second round of private financing in 1969, and then a third in 1970, at prices that rivaled market rates without introducing the hassles of public ownership. Noyce worked closely with Rock to identify and entice investors sophisticated enough to understand the risks they were taking and the need for patience. He attended meeting after meeting with banks, financiers, current investors wanting to increase their participation, and new investors, most of them again personal acquaintances of Noyce, Rock, or Moore. These second and third rounds of funding brought another $2.2 million into Intel.5
The funding also boosted Noyce’s confidence in the company’s future. He bought his own plane, a single-engine Pegasus, shortly after passing his pilot’s exam in 1969. He approached flying with an uncharacteristic seriousness and deliberation, always thoroughly examining the plane before climbing in the cockpit and not touching alcohol for at least 24 hours before he was scheduled to take off. In the air, too, he was surprisingly careful. “It was an interesting transformation,” recalls Gordon Moore. “Driving with [Noyce] in a car was like taking my life in my hands—he didn’t pay much attention to his driving. But flying in his airplane: boy, he was nothing but business.”6
By this time, almost no trace remained of the Bob Noyce who had turned down a general manager’s job at Fairchild because he feared he might fail. Where Noyce had been almost apologetic about not being a particularly good details manager when he left Fairchild, by the time Intel was two years old, he was nearly defiant. “I don’t get my kicks out of seeing things run at the highest level of efficiency with the greatest degree of control,” he said. “Control immediately means a loss of personal freedom for either the people in the factory or, as far as that goes, for the management. Once you’ve set down the ground rules for return on investment or earnings before taxes, you’re suddenly cut off from some of the choices you could have made…. I guess what I’m saying is that the venture part of management[,] rather than the control part of management[,] is more fun for me.” He added a bit pointedly: “This ‘immature’ management has been much more successful than the mature management that tried to get into the [semiconductor] business.”7
INTEL INTRODUCED ITS FIRST PRODUCT, a 64-bit random access memory (RAM), in May 1969. This was the bipolar device from the group run by Bohn, and its development had proven far easier than Intel could have imagined. The memory was also easy for competitors to build, however, which meant that Texas Instruments and Fairchild introduced their own 64-bit memories at nearly the same time as Intel. As a result, the bipolar RAM generated little more than what one person called a “revenue trickle” for Intel in 1969.8
At the other end of the spectrum was the multichip memory effort. Intel had no problem building the individual memory chips but could not reliably attach them to their ceramic base. Moreover, yields were terrible, power dissipation was high, Intel could not determine how to test the devices efficiently, and it was difficult to imagine ever shrinking the package enough to make it appealing to customers. When Gordon Moore tried to test an almost-finished device for shock resistance by dropping it, nearly every chip popped off the ceramic base and clattered across the ground.9
Clearly the MOS device would make or break the company. If the three-pronged effort was, as Gordon Moore liked to put it, “like Goldilocks,” with the multichip memory too hard to build and the bipolar memory so easy that anyone could do it, the MOS device needed to prove itself “just right”—easy for Intel to build and hard for everyone else.
It certainly was not easy to build. Vadasz and Grove were trying to make MOS transistors that would have silicon, rather than metal, gates. At Fairchild, Grove and Vadasz had been members of a research team following up on intriguing Bell Labs research that indicated that silicon gates might reduce the risk of contamination and improve the yields in MOS devices. The Fairchild research was underway when Grove and Vadasz left the company and resurfaced trying to build silicon-gate MOS devices for Intel.
In March 1969, Fairchild requested that Noyce meet with the company’s general counsel for an informal deposition to determine whether or not Fairchild had grounds to sue Intel for violations of trade secrets related to the silicon gate process, or for corporate raiding. Although the deposition was informal and friendly—both of the Fairchild attorneys were social friends of the Noyces, and one later came to work for Intel—the stakes were high. At Fairchild, Les Hogan wanted to discourage future spinouts, and suing a startup as prominent as Intel would both prove he was serious and slow down a company that, while no threat at present, might become one later. For his part, Noyce insisted that Intel’s silicon gate work represented not stolen intellectual property, but “evolutionary improvements” on efforts “discussed by researchers in other locations, but … never really brought into production.”10
Fairchild never filed a suit against Intel. An intellectual property case would have been hard to prove, since Fairchild research had not gone much beyond confirming the Bell Labs findings, and since no lab books, process manuals, or masks were missing from Fairchild. Moreover, researchers at several companies other than Fairchild and Intel were also pursuing silicon gate research. A raiding suit offered Fairchild even less hope of success. The company had hired more than 60 Motorola employees devoted to Hogan in the past year, which meant Fairchild was itself the defendant in a raiding suit brought by Motorola. The Fairchild attorneys would have been hard-pressed to explain why they were in any position to charge another firm with questionable practices on this front.
Moreover, as Fairchild counsel Roger Borovoy, who thought they might have had a case, explains with a sigh, neither he nor anyone else at Fairchild relished the prospect of suing Noyce: “We just said, ‘The hell with it.’ There was no way Sherman Fairchild, who was still active, would sue Bob Noyce…. All the up Sherman ever had was from Bob Noyce. Bob Noyce made Fairchild. So why screw around with this [talk of a suit] any more?”11
Grove, Vadasz, and the MOS team in the Intel lab found the notion that they might be significantly benefiting from Fairchild’s silicon gate research laughable. Recall the difference between lab work and production. Fairchild’s work had never left the lab. There were no transistors rolling off the lines with silicon gates at Fairchild and certainly no integrated circuits. The MOS team that went to Intel did indeed owe all their knowledge about silicon gates to their experiences at Fairchild, but it took them a year to translate this familiarity with a theory into a reality etched in silicon. An Intel progress report from November 1968 claimed that the process was “off and limping.” Recalls one employee, “It was a little bit like peeling an onion. Every time we would fix a problem, we’d uncover another one. I was afraid the last layer was going to be nothing. For all we knew, the silicon gate process was no good.” About this same time, Grove wrote of the MOS effort: “Results: one day ho, the next day hum.”12
Intel produced its first working silicon-gate MOS memory in March 1969, just days after Noyce’s deposition. The MOS team called the company into the cafeteria to share several bottles of champagne. Noyce, however, did not attend the celebration. He was in a hospital in Aspen, where he had broken his leg in five places when he fell while skiing. When Moore called to tell him about the working MOS circuit, Noyce called it “the best news that I’ve ever gotten” and was immediately swamped with guilt for not having been at Intel when the milestone was met.13
NOYCE ALWAYS MAINTAINED that the “genesis” for any successful thinking about semiconductor products “must not be ‘we have this product, how do we get rid of it?’ but ‘this is a critical product.’” The technology itself interested Noyce less than the need for the technology, a distinction that he explained thus: “A company must go out and find what the customer wants. Where is the need? Where is the opening? … The need is not for, say, half a million ¼-inch drill bits. The need is that there are ten million ¼-inch holes that need to be drilled.” Even if a customer thinks he needs a drill bit, it was Intel’s job, in Noyce’s estimation, to recognize that the real need was to make a hole, and then to find the best way to make it.14
The most storied example of Intel’s commitment to this approach is the company’s development of the microprocessor—the so-called computer on a chip—in 1969 and 1970. The legends surrounding the microprocessor are many, and the reality is especially hard to pinpoint because almost none of the original documentation—drawings, progress reports, contracts, communications among people at Intel or between Intel and the companies that used the microprocessor—survives. In other words, any account depends a great deal on the memories of the participants.15
A dependence on human recall, particularly of events more than 35 years old, is always risky. It is particularly so in this case because the stakes are so high. The microprocessor is one of the most important inventions of the twentieth century. Every computer and piece of “smart electronics” on the planet depends on microprocessor technology, as do many things not considered particularly intelligent, such as internal combustion engines and automobile brakes. Microprocessors are a multi-billion-dollar industry. The devices have also made Intel the world’s dominant semiconductor company, which makes credit for the invention a particularly appealing prize within the company.
The combination of high stakes and little original information has led to a not-unexpected result: in the same way that everyone of a certain age seems to recall voting for Kennedy in the 1960 presidential election even though Nixon almost won that contest, nearly everyone involved with Intel’s work on the microprocessor remembers himself as playing a crucial role in developing and promoting the device. In fact, the company abandoned its development for several months, considered it unimportant enough to assign the rights to someone else, and then, after securing those rights back to Intel, almost did not market the microprocessor at all.
And then, of course, there are dozens of people who never worked at Intel who can make legitimate claims on the invention of the microprocessor. Bill Davidow, who oversaw microprocessor marketing at Intel and who himself had some strong technical ideas on the subject, has said, only partly in jest, that there are 500 inventors of the microprocessor. Fairchild, IBM, Signetics, Four-Phase, and RCA were also working on microprocessor-like devices at the same time Intel was tackling the project. Intel filed for its first patent in 1973, but a small company called Microcomputer had filed for a patent on a general logic device in 1970, and Texas Instruments applied for a patent on a microprocessor-like device in 1971.16
In the same way that ideas about interconnecting components were “in the air” for years before Noyce and Kilby independently demonstrated their integrated circuits, so too were ideas about a general-purpose logic device “in the air” for years before anyone at Intel began working on what would come to be called microprocessors. “This is a funny deal with the microprocessor,” explains Gordon Moore. “There was no real invention [in a technical sense]. The breakthrough was a recognition that it was finally possible to do what everyone had been saying we would some day be able to do.”17
In the midst of all this confusion and uncertainty, one fact emerges with surprising clarity—Bob Noyce was absolutely essential to the microprocessor’s development and success at Intel. He encouraged its development; he lobbied for its introduction; he dreamed of its future importance; he promoted it tirelessly within the company and to customers. The operations manager (Grove), the key inventor (Hoff), and the board chair (Rock) have each independently said, in one way or another, that “the microprocessor would not have happened at Intel if it had not been for Bob.”18
INTEL’S MICROPROCESSOR STORY opens in the spring of 1969, around the time that Moore called Noyce in Aspen to tell him that the MOS team had a working silicon-gate memory. A manager from a Japanese calculator company called Busicom, which was planning to build a family of high-performance calculators, contacted either Bob Graham or Noyce to ask if Intel, which had a small business building custom chips designed by customers, would like to manufacture the chip set that would run the calculator. Calculator companies around the world were seeking out semiconductor companies to build the chips for their machines, and Noyce said that Intel was nearly the only manufacturer left that had not already agreed to work with a calculator company. It made sense for Intel, young and unknown, and Busicom, ten years older but still not well established, to work together.19
Tadashi Sasaki, a senior manager with the Japanese electronics giant Sharp, explains that it was he, not serendipity, who brought Intel and Busicom together. Sasaki says he had long felt great gratitude to Fairchild and Bob Noyce because the planar and integrated circuit research published by Fairchild had contributed to Sasaki’s own professional success. Sasaki had also been intrigued by an idea one of his researchers had proposed in 1968—that it would one day be possible to build an entire calculator on a single chip. Sasaki says that Noyce and Graham visited him at the end of 1968 trying to drum up business for Intel, but that Sharp’s existing contracts made it impossible to give even a small order to Intel. Sasaki says he then tried to help Noyce by arranging a dinner that included Sasaki, Noyce, Graham, and Yoshio Kojima, president of Busicom and a university classmate of Sasaki’s. Sasaki also funneled roughly 40 million yen to Busicom, with the stipulation that it be used to fund a contract with Intel to build a calculator on a chip.20
Noyce’s datebooks do not note any meeting with Sasaki or Sharp, but it seems likely that he did travel to Japan around this time. Intel established a sales office in the country in 1969, and Noyce’s prestige among the Japanese made him the logical person to have facilitated the process. Several people who traveled with Noyce to Japan have commented on how greatly he was admired there. Roger Borovoy described Noyce as “a god” to the Japanese. Ed Gelbach, Intel’s second marketing vice president, recalls being “in awe” of Noyce’s influence in the young Japanese electronics industry. “[Japanese executives] would come up to him and say, ‘we designed this [Intel chip into our product] just because of you, Dr. Noyce.’ In Japan, [Noyce] had single-handedly the most significant impact on getting Intel parts designed in.” There is thus no reason to doubt Sasaki’s feelings about Noyce. His encouraging Busicom to contact Intel also seems plausible. “Busicom kind of appeared out of the blue,” Moore recalls.21
It is certain, however, that Busicom did not request that Intel build a calculator on a single chip. In fact, where the standard calculator used about six chips, each with 600 to 1,000 transistors, Busicom, which was designing a particularly complex calculator, wanted a set of a dozen specialized chips with 3,000 to 5,000 transistors each. Busicom planned to send a team of engineers to Intel to design the chips on-site and would pay Intel $100,000 to manufacture its calculator chip sets. Busicom expected to pay Intel about $50 for each set manufactured and promised to buy at least 60,000 of them. Intel agreed to this arrangement.22
Three Busicom engineers arrived in California towards the end of June, and by the first week of July, they were a fixture in the Intel building. Noyce asked Ted Hoff, Intel’s resident computer expert, to serve as the official liaison to the Busicom team. No one, including Noyce, expected that the Busicom project would require much attention from Hoff. The idea was simply for the Busicom team to have someone specific to whom they could turn with questions or requests for assistance.
“I had no design responsibilities for the project, but soon I was sticking my nose where it didn’t belong,” recalls Hoff. “Normally you wouldn’t do that, but [Intel] was a start-up company, and a lot of us had hopes for its financial success, so I didn’t want to let major effort go into something disastrous.”23
In short order, Hoff, “kind of shocked at how complex this was,” became convinced that it would be impossible to build the chips at the agreed-upon price. And the more he thought about it, the more strongly he believed that he knew a better way to build the calculator Busicom wanted.24
Busicom was requesting a number of logic chips—the chips that manipulate data rather than just storing it—each of which could do precisely one thing: one chip performed calculations, another controlled the printing, a third handled the display, and so on. Hoff thought that instead of using specialized logic chips, Intel could build a single, general-purpose logic chip that in effect would be a rudimentary computer programmed to act like a calculator. The secret would be to simplify the instruction set for this chip, which came to be called a microprocessor, by offloading as many of the instructions as possible to a memory chip—and memory chips, of course, were Intel’s specialty.25
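To make the distinction concrete, here is a minimal, purely illustrative sketch in Python of the general idea Hoff was pursuing: a single fetch-and-execute loop that reads its instructions from memory, so that changing the program, not the hardware, changes what the device does. The instruction names, the single accumulator, and the toy program are hypothetical inventions for this sketch and bear no relation to the actual design of Intel’s chips.

```python
# Illustrative sketch only: a general-purpose "processor" as a fetch-decode-
# execute loop. The hardware (this function) stays fixed; the behavior comes
# from the program stored in memory. Instruction names here are hypothetical.

def run(program, memory):
    acc = 0  # a single accumulator register
    pc = 0   # program counter: index of the next instruction to fetch
    while pc < len(program):
        op, arg = program[pc]  # fetch the next instruction from memory
        pc += 1
        if op == "LOAD":       # acc <- memory[arg]
            acc = memory[arg]
        elif op == "ADD":      # acc <- acc + memory[arg]
            acc += memory[arg]
        elif op == "STORE":    # memory[arg] <- acc
            memory[arg] = acc
        elif op == "PRINT":    # hand the accumulator to an output device
            print(acc)
    return memory

# The "calculator" is nothing but a program in memory. A different program
# would make the same loop behave like a printer or display controller.
add_two_numbers = [
    ("LOAD", 0),    # fetch the first operand
    ("ADD", 1),     # add the second operand
    ("STORE", 2),   # save the sum back to memory
    ("PRINT", 2),   # display the result (arg unused by PRINT)
]

run(add_two_numbers, {0: 2, 1: 3, 2: 0})  # prints 5
```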
To Hoff, his ideas seemed irrefutably right. But when he tried to convert the Busicom team to his vision, they showed no interest. “The detail was not so good,” recalled Masatoshi Shima, a member of the Busicom team. Shima cited the plan’s “lack of system concept, lack of decimal operations, lack of interface to the keyboard, lack of real-time control, and so on.”26