The Chip: How Two Americans Invented the Microchip and Launched a Revolution


by T. R. Reid


  The results were dramatic. In 1961 the six Japanese manufacturers combined sold less than one half of one percent of the black-and-white television sets purchased in the United States. They moved up to 2 percent of the market in 1962 and 13 percent in 1965; thereafter, the percentage kept going up, year after year.

  American manufacturers shrugged off this challenge. By the mid-sixties, they were concentrating on an even hotter new product—color television, in which U.S. technology led the world by a large margin. But the color TV market turned into a rerun of the monochrome story. The Japanese aggressively purchased patent licenses in the West. They entered the fray a few years behind the American firms but made up for the late start with their familiar combination of extensive variety, low prices, and high quality. By 1976, Japan was the world’s leading producer of color television sets and had 35 percent of the U.S. market.

  At this juncture, some U.S. firms simply gave up. Admiral quit the business. Motorola sold its television operation to Matsushita. Others fought back, both in the market and in Washington. U.S. manufacturers charged that some Japanese competitors were “dumping” TV sets in the United States at predatory prices; under the law, this could have resulted in multimillion-dollar fines for the Japanese firms, but diplomatic considerations prevented that result. Instead, the governments of Japan and the United States in 1977 negotiated an “orderly marketing agreement”—the diplomatic term for an import quota. Those yearly agreements preserved almost 80 percent of the U.S. market for domestic makers (the term “domestic,” in this context, included Japanese firms building sets in the United States). In the end, it was diplomacy, not technology or business acumen, that saved the United States a piece of the action in a market it had pioneered.

  The television saga would be an unsettling one from the American point of view even if it were the sole instance of such an abrupt industrial defeat. But in consumer electronics, as in automobiles and steel, the history just related was the rule, not the exception. Throughout the final decades of the twentieth century, American firms were overtaken by Japanese manufacturers offering a broadly appealing range of products that combined premium quality with competitive prices.

  The Commerce Department, displaying the universal bureaucratic yen for numbers and tables, issued a study in 1980 that reduced the whole sorry state of affairs to a single chart:

  In each case, the product was conceived and developed in the United States; in each case, the Japanese (sometimes followed by producers from other nations, such as Taiwan, South Korea, and, more recently, China) borrowed the technology and then swept into the American market.

  The American semiconductor industry, during the first booming decade after Jack Kilby and Bob Noyce hit upon the monolithic idea, could safely look upon these developments in consumer electronics as irrelevant. In chips, the American pioneers had an enormous technological lead over foreign competitors everywhere, European or Asian. And however daunting the Japanese might be in consumer electronic goods, the chip was a different animal. The Japanese had proven adept at moving into relatively stable technological fields, where the design of the product was settled, and producing large volumes of goods at reasonable prices. But the integrated circuit business, in its first quarter century, was never stable. Improvements came so rapidly that there was no such thing, really, as a settled design; prices fell so rapidly along the learning curve that undercutting from foreign competitors did not loom as a serious threat.

  Still, there were signs during the 1960s and early 1970s that the Japanese had a collective eye on the semiconductor business. The big Japanese electronics firms—outfits like Fujitsu, Nippon Electric Company (NEC), Hitachi, and Sony—began buying licenses for every U.S. semiconductor patent they could find; between 1964 and 1970, royalty payments from Japan on U.S. semiconductor patents rose by a factor of ten, from $2.6 million to $25 million per year. For some U.S. firms, particularly Fairchild, this became an important source of no-risk income. “American firms have generally been very cooperative,” a Brookings Institution economist, John Tilton, reported. “With few exceptions, they have been willing to license Japanese as well as other foreign firms and aid them in assimilating new semiconductor technologies, even though in the process they are helping establish potential rivals.”

  There was only one major U.S. firm that refused to cooperate: Texas Instruments. TI spurned royalty payments and set a more ambitious price on its semiconductor patents: no Japanese firm could use them unless the Japanese government permitted TI to set up manufacturing operations in Japan. This demand posed a serious dilemma in Tokyo. The Japanese were determined to exclude most foreign competition in high-tech fields from their country. But in order to produce integrated circuits, the Japanese firms would need access to both the Noyce patent from Fairchild and the Kilby patent from Texas Instruments. Fairchild was willing to sell its technology, for a royalty of 4.5 cents on every dollar the Japanese makers earned on chips. But the Texans hung tough. In 1968, after several years of offers and counteroffers, the Dallas firm was finally permitted—in return for a license to the Kilby patent—to open a Japanese plant. For more than a decade thereafter, TI was the only U.S. semiconductor producer with any significant sales in Japan. Not until the late 1980s, when the Reagan administration finally slapped an import ban on Japanese memory chips, did Intel and other U.S. firms manage to open plants and design facilities in Japan.

  Whether or not the American firms were aware of it, both the vigorous pursuit of U.S. patent rights and the exclusion of U.S. competition were part of a grand strategy devised by the Japanese government to win that nation global preeminence in chips. The semiconductor business was one of several high-tech industries targeted by the Japanese Ministry of International Trade and Industry (MITI) for intensive development. In its ten-year plan, or “vision,” for the 1980s, MITI concluded that microelectronics was perfect for the island nation, because it required large quantities of human resources, such as advanced engineering talent and diligent workers, which Japan has in abundance, but only small amounts of energy and natural resources, which Japan lacks.

  Officially, MITI’s decisions about what was good for Japanese industry constituted nothing more than “administrative guidance,” which companies were legally free to ignore. Some did. In the 1950s, for example, someone at MITI had the brilliant insight that Europeans and Americans would never shell out their money for cars bearing names like Honda or Toyota. MITI issued “guidance” telling a group of automakers to get together and design a single “people’s car” that would represent the entire Japanese industry around the world. Honda, Toyota, Nissan, etc., rejected this idea, with stunning worldwide results. When MITI worked with its electronics industry in the sixties to plan Japan’s foray into the low-priced end of the U.S. television market, one manufacturer, Sony, spurned the consensus view and found a lucrative niche of its own, as a prestige upper-bracket label.

  As a rule, though, MITI’s policies tend to become industrial practice. They certainly did in semiconductors. Just as MITI planned it, Japanese electronics firms acquired the U.S. technology required to get started in the manufacture of integrated circuits in the mid-sixties. Just as MITI planned it, these firms were able to rely on sales to domestic Japanese computer and telecommunications firms—free of competition from any U.S. firm except Texas Instruments—to provide the financial cushion necessary for an assault on the world market for chips. And just as MITI proposed, the five largest electronics firms banded together in the early 1970s for a cooperative research endeavor—funded partly by the government and partly by the companies—to develop manufacturing techniques for very-large-scale integrated (VLSI) chips.

  For all this effort, however, the Japanese electronics industry still lagged far behind American firms in most areas of microelectronics. The one product in which the Japanese took any significant market share was the random-access memory, or RAM, chip. The memory circuit, pioneered by Robert Noyce and Gordon Moore at Intel in 1968, was nicely suited to the Japanese strengths: a simple and relatively stable product that was consumed in huge quantities by computer manufacturers around the world. Even in the RAM market, though, U.S. dominance was unchallenged until the middle of the 1970s. Then the American industry gave Japan its chance.

  During the prolonged recession following the 1973 oil embargo, American semiconductor firms reacted the way American manufacturers normally react to recessions: they laid off workers, closed plants, and generally hunkered down to await an upturn. In 1976, when the economy came roaring back, there was an enormous burst of demand from computer firms for what was then the most advanced memory chip: the 16K RAM, capable of storing some 16,000 bits of information. The U.S. firms could not rebuild fast enough to meet the need. Their customers went shopping for an alternate source of RAM chips—and found it in Japan.

  Following standard Japanese industrial practice, the big Japanese electronics firms had maintained their work force and their production capacity during the recession. They absorbed the costs of full employment during slack times on the theory that the investment in keeping a trained, loyal work force on the job would pay off when things turned up. They were, accordingly, in the catbird seat when the market for 16K RAMs took off in the mid-1970s. Silicon Valley’s inability to meet demand gave Japan a golden opportunity to show the world what it could do, and Japanese firms leaped at the chance. They began dispatching high-quality, competitively priced chips around the world. By 1980 the Japanese had 42 percent of the world market in memory chips, a market that American firms had once owned.

  More important, Japanese firms had put themselves in perfect position to compete when the next generation of memory chips, the 64K RAM, was developed in the late 1970s. This time Japanese firms had no technological lag to worry about, and this time they had established markets everywhere for their memory chips. After two years of nip-and-tuck competition, Japanese firms finally eclipsed the Americans. By 1983 they controlled well over half the world market for 64K RAMs.

  The Japanese success in this one product line prompted all manner of distressed breast-beating in the United States—far more than the situation actually warranted. RAM chips constitute a high-volume part of the semiconductor market but not a particularly important or remunerative one. In dollar terms RAM chips probably represent somewhere between 5 and 10 percent of annual semiconductor sales. Americans have maintained supremacy in virtually every other type of integrated circuit. For Silicon Valley to worry about Japanese or Korean or Taiwanese sales of memory chips is like General Motors losing sleep over a smaller company that has done well in the spark plug business.

  But many Americans were worried. The Japanese inroads in chips, a congressional committee reported, “indicate the potential for an irreversible loss of world leadership by U.S. firms in the innovation and diffusion of semiconductor technology.” The Silicon Valley firms were sufficiently alarmed to form a trade group, the Semiconductor Industry Association, specifically to fight off the foreign challenge. “The television people woke up when the Japanese had 20 percent of the market, and went to the government when the Japanese had 40 percent,” the group’s executive director told The New York Times. “That’s a little late.”

  The Semiconductor Industry Association began turning out a steady flow of studies and brochures and advertisements, along with petitions to various government commissions and agencies, asserting that the competition from Japan was unfair. The SIA cited MITI’s determination to keep U.S. companies out of the Japanese market. It complained about Japanese government contributions—something over $100 million—to the very-large-scale integration research and development venture. It pointed out that the Japanese firms, partly because of that nation’s industrial structure and partly because of support from MITI, received loans from Japanese banks at much lower interest rates than any electronics firms could hope for in the United States.

  “I just don’t want to pretend I’m in a fair fight. I’m not,” wrote Jerry Sanders, chairman of Advanced Micro Devices, in a statement that crystallized the SIA position. “Do you know how the Japanese got the dynamic RAM business? They bought it. (If I had their deal, I’d have bought it too.) They pay 6 percent, maybe 7 percent, for capital. I pay 18 percent on a good day. . . . They start every product development cycle with hundreds of millions of dollars of free R&D every year, paid for by their government. Good for them. But then their parts arrive here in a flood.”

  There was, however, another point of view—a view held, among other places, at Texas Instruments, which never found reason to become a member of the Semiconductor Industry Association. The dissenters pointed out that despite Japan’s efforts to exclude foreign competitors, American firms had always had a larger share of the Japanese semiconductor market than the Japanese had gained in the United States. They argued that the American semiconductor industry, launched on a wave of government financing and still receiving tens of millions of dollars annually from the Pentagon for research and development, was hardly in a position to carp at MITI’s grants to Japanese firms. The SIA’s American critics also recalled that the Silicon Valley firms, by selling their patents and by cutting capacity just before the 1976 boom, were themselves partly responsible for the Japanese success. “Those fellows on the West Coast sort of have schizophrenia,” Fred Bucy, the president of Texas Instruments, said after the SIA was founded. “They had the same leverage as we did. . . . But they were very shortsighted in the way they handled the patent situation.”

  There was one other factor, too, fueling the Japanese success in RAM chips, but it was something that American trade groups were not eager to talk about. It was a familiar factor to Americans who had observed the Japanese success in television, in cameras, in automobiles—the quality factor.

  In the middle 1970s, when the Japanese electronics firms first managed to break into the American semiconductor market, U.S. companies buying imported memory circuits for installation in their computers began to notice something interesting: Japanese chips were better. There was no discernible difference in performance, because all memory chips are built to work the same way. But there was a marked difference in reliability. Chips made in Japan were less likely to fail than the American product.

  At first, the Japanese edge in quality was the dirty little secret of Silicon Valley. Hardly anybody talked about it, and when the subject did come up, American manufacturers heatedly denied that Japanese firms were turning out more reliable chips. That changed one morning in March 1980 when an American computer executive named Richard W. Anderson stood up at an industry meeting in Washington, D.C., and delivered a paper that came to be known as “The Anderson Bombshell.”

  Anderson was a division manager at Hewlett-Packard, a giant California-based manufacturer of electronic instruments and computers that is one of the world’s biggest consumers of integrated circuits. It was he who had decided, rather reluctantly, to start buying Japanese memory chips for Hewlett-Packard computers, and at the Washington meeting he told his story.

  We first introduced semiconductor memory in our computers in 1974. We got all our memory from United States suppliers. Then in 1977 the 16K, or 16 thousand-bit, RAM began to make its appearance. The first introductions that I was familiar with came from U.S. suppliers, and we hurried to implement this design into our product line. . . .

  However, some months after introduction, the U.S. suppliers that we had been working with found themselves unable to meet our quantity demands, either due to yield or capacity problems, and this left us between the proverbial rock and a hard spot. So, after much anguish, we decided to talk to a Japanese company who had been calling on us telling us of their memory for some time. And I would like to state at the outset we took a very cautious approach because we remembered well the impressions from post–World War II Japanese products; namely, that they were cheap, low cost, and low quality. And so our engineers went through a very rigorous qualification program; and we were pleasantly surprised to find they qualified.

  Anderson went on to say that, over time, he bought more and more chips from the Japanese firm. Although the fact was not immediately obvious, he said, Hewlett-Packard gradually began to realize that there was a significant difference in the Japanese memory circuits. “We had fewer failures in incoming inspection; we had fewer failures during production cycle; we saw fewer failures of products in customers’ hands. . . . Not only was the quality good, but [it] was actually superior to what had been our experience with the domestic suppliers.

  “Then came 1979,” Anderson went on, “and a real market crunch hit the memory suppliers, particularly the U.S. manufacturers . . . and we found ourselves in short supply. So we went back to Japan and qualified two more Japanese suppliers for the product line that I’m responsible for. And again the same experience: excellent quality.” Eventually, Anderson added, Hewlett-Packard compiled performance records on some 300,000 memory chips, of which half came from the Japanese suppliers and half from American makers. The final standings showed that all three Japanese firms were delivering higher quality goods than the best American manufacturer. “So that’s a remarkable, and I would think to American suppliers, perhaps a frightening set of statistics,” Anderson said.

  Frightening it certainly was. The message that foreign competitors were outperforming the United States in semiconductors, the symbol of American technical preeminence, was a slap in the face that could not be ignored. The Anderson Bombshell, widely reported and corroborated by some other computer firms, made it impossible for American firms to deny any longer that there was a quality difference. Instead, they set out to learn how the Japanese had attained it. “U.S. Microelectronics Firms Study Japan for Secrets of Quality and Productivity,” read a headline in The Wall Street Journal in 1981. The spate of books that appeared in the early 1980s extolling Japanese management practices became required reading in the semiconductor business. The American Electronics Association did a booming trade in seminars on Japanese quality control. Companies dispatched fact-finding teams to Tokyo to uncover the Japanese quality secret.

 
