The Chip: How Two Americans Invented the Microchip and Launched a Revolution
The story of the microprocessor begins in Tokyo, but the scene shifts rapidly to Silicon Valley. In 1969 a Japanese business-machine manufacturer, Busicom, was planning a new family of desktop printer calculators but could find no engineers in Japan capable of designing the complex set of integrated circuits the machines would require. Busicom sought help—from Bob Noyce, who was still putting together his new company, Intel. The Japanese signed a contract with Intel calling for the design and production of twelve interlinked chips for the new line of machines. Busicom sent a team of engineers to Intel to oversee the work. Noyce, meanwhile, handed the problem to a one-man team—Marcian E. “Ted” Hoff, a thirty-four-year-old Ph.D. who had been lured away from a teaching job at Stanford by the prospect of broader horizons in industry. Although Hoff was an expert on microcircuits, his real ambitions were somewhat larger. He had always wanted to design his own computer.
When the Busicom engineers showed Hoff their tentative plans for the twelve chips they needed, the American was appalled. The arrangement was outrageously complex—some of the simplest functions would require sending the same number into and out of two or three different memory registers—and could not possibly be implemented at an acceptable price. Even worse, in Hoff’s eyes, the design was inelegant. It was downright wasteful to put dozens of man-years into designing a set of specialized circuits that could be used in only one small group of machines.
This last concern was important to Noyce as well. By the end of the sixties, Noyce was worried about the rapid proliferation of different integrated circuits, each designed for its own special purpose. Every customer who wanted a chip for his product was demanding a custom-designed chip just for that product. “If this continued,” Noyce and Hoff wrote later, “the number of circuits needed would proliferate beyond the number of circuit designers. At the same time, the relative usage of each circuit would fall. . . . Increased design cost and diminished usage would prevent manufacturers from amortizing costs over a large user population and would cut off the advantages of the learning curve.”
Looking ahead, Noyce saw that the solution to proliferation of special-purpose integrated circuits would be the development of general-purpose chips that could be manufactured in huge quantities and adapted (“programmed”) for specific applications. Hoff had been intrigued by this concept and was frankly looking for an opportunity to give it a try. When the Busicom assignment landed in his lap, he grabbed the chance. Scrapping Busicom’s ideas, the designer came up with a strikingly new design for the Japanese: a general-purpose processor circuit that could be programmed for a variety of jobs, including the performance of arithmetic in Busicom’s machines. As Hoff pointed out, this approach would permit much simpler circuitry than the Japanese firm had suggested. Indeed, by the summer of 1971, Hoff was able to put all the logic circuitry of a calculator’s central processor unit, or CPU, on a single chip. The CPU could be coupled with one chip for memory, one for storage registers, and one to hold the program; the entire family of calculators would require only four integrated circuits. In his job as a circuit designer, Hoff had, in fact, fulfilled his personal dream: he had designed his own general-purpose computer.
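Hoff's central insight lends itself to a small illustration. The Python sketch below is a toy model, not Intel's design; the one-register machine and its instruction names are invented here. What it shares with Hoff's approach is the principle that one fixed, general-purpose circuit can take on many different jobs simply by being handed a different program.

    # Illustrative only: a toy "general-purpose processor" in the spirit of
    # Hoff's idea. One fixed piece of hardware, many behaviors, selected by
    # the program stored alongside it. Instruction names are invented.

    def run(program, data):
        """Execute a list of (opcode, operand) pairs against one register."""
        acc = 0                      # the processor's single working register
        for opcode, operand in program:
            if opcode == "LOAD":     # copy a value from "memory" into the register
                acc = data[operand]
            elif opcode == "ADD":    # add a memory value to the register
                acc += data[operand]
            elif opcode == "STORE":  # write the register back to "memory"
                data[operand] = acc
        return data

    # The same "chip" becomes an adder or a doubler depending on its program.
    adder = [("LOAD", 0), ("ADD", 1), ("STORE", 2)]
    doubler = [("LOAD", 0), ("ADD", 0), ("STORE", 2)]
    print(run(adder, [3, 4, 0]))     # [3, 4, 7]
    print(run(doubler, [3, 4, 0]))   # [3, 4, 6]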
By the summer of 1971, though, the calculator industry was in the throes of great change. The introduction of Jack Kilby’s $150 handheld calculator that spring had completely changed the rules; companies like Busicom with their heavyweight $1,000 machines were in big trouble. Accordingly, Busicom told Intel it could no longer pay the price originally agreed upon for the new chips. Negotiations ensued. Busicom got its lower price, but gave up something in return that turned out to be almost priceless: its exclusive right to the chips. Intel was now free to sell Hoff’s general-purpose CPU-on-a-chip to anybody.
But would anybody buy it? That question spurred furious debate at Intel. The marketing people could see no value in a one-chip CPU. At best, a few minicomputer firms might buy a few thousand of the chips each year; that wouldn’t even pay for the advertising. Some directors were worried that the new circuit was too far afield from Intel’s real business. Intel, after all, was a circuit maker; Hoff’s new chip was a single circuit, all right, but it really amounted to a complete system—almost a whole computer. There was strong pressure, Noyce and Hoff wrote later, to drop the whole thing.
Intel had recently hired a new marketing manager, Ed Gelbach, and he arrived at the company in the midst of this controversy. As it happened, Gelbach had started in the semiconductor business at Texas Instruments; like everyone else at TI, he was steeped in Patrick Haggerty’s view of the world. Gelbach realized immediately that Intel had reversed the course of the industry by producing a general-purpose chip. “General purpose,” Gelbach saw, was just another way of saying “pervasive.” The real markets for the new device, he said, would be completely new markets. With this one-chip central processor—known today as a microprocessor—the integrated circuit could “insert intelligence into many products for the first time.”
And so Intel’s new “4004” integrated circuit went on sale for $200 late in 1971. With mild hyperbole, Intel advertised the device as a “computer on a chip.” Gradually, as people realized that it really could work just about anywhere, the microprocessor started showing up just about everywhere. A typical application was the world’s first “smart” traffic light. It could tell, through sound and light sensors, when rush hour was starting, peaking, or running down; the tiny CPU would alter the timing of red and green in response to conditions to maximize traffic flow. Soon there was a smart elevator, a smart butcher’s scale, a smart gas pump, a smart cattle feeder, a smart intravenous needle, and a bewildering array of other “smart” devices. A microprocessor in a K2 ski would react to vibrations and stiffen the ski laterally, reducing bounce on the run. A microprocessor in a tennis racket could sense where the ball had hit the racket and instantly adjust string tension to make that very point the racket’s “sweet spot” for that one shot.
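As a rough sketch of what such a "smart" traffic light is doing, the Python fragment below picks a green-light duration from a measured traffic volume. The thresholds and timings are invented for illustration; they are not taken from any real controller.

    # Toy model of a sensor-driven traffic light; all numbers are invented.
    def green_seconds(cars_per_minute):
        """Choose a green-light duration based on measured traffic volume."""
        if cars_per_minute > 30:     # rush hour building or peaking
            return 60
        elif cars_per_minute > 10:   # ordinary daytime traffic
            return 40
        else:                        # late-night trickle
            return 20

    for volume in (5, 18, 45):       # simulated readings from the sensors
        print(volume, "cars/min ->", green_seconds(volume), "seconds of green")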
Texas Instruments, of course, was hardly pleased to let its arch-rival steal a march on such an important new battleground. Working on a contract for a customer who wanted circuitry for a “smart” data terminal—a keyboard-screen combination that could communicate with large computers far away—a TI engineer named Gary Boone developed a slightly different version of a single-chip processor unit. Boone’s version, called the TMS 1000, received the first patent awarded for a microprocessor. Then, at the end of 1971, Boone and another Texas Instruments engineer, Michael Cochran, produced the first prototype of an integrated circuit that actually was a computer on a chip. The single monolithic circuit contained all four basic parts of a computer: input circuits, memory, a central processor that could manipulate data, and output circuits. A year later, Intel came out with a second-generation microprocessor; since it had roughly twice the capacity of the original 4004 chip, the new device was called the 8008. The introductory price was $200. The 8008 evolved into the 8080, the 8086, and then a series of progressively more powerful processor chips that powered progressively more powerful personal computers: the 80286, 80386, 80486, Pentium, Pentium Pro, Pentium III, and Pentium 4. The Pentium 4 chip operated at a speed about five hundred times faster than the 8008; the price was about the same.
It was the marriage of the microprocessor and a group of devices called transducers that finally brought microelectronics into every home, school, and business. A transducer is an energy translator; it converts one form of energy into another. The mouthpiece of a telephone is a transducer, changing your voice into electrical pulses that travel through the wire. The keyboard on a calculator converts physical pressure from a finger into pulses that the central processor can understand. Other sensors can turn sound, heat, light, moisture, and chemical stimuli into electronic impulses. This information can be sent to a microprocessor that decides, according to preprogrammed directions, how to react to changes in its environment.
A heat-sensitive transducer can tell whether a car’s engine is burning fuel at peak efficiency; if it is not, the transducer sends a pulse to logic gates in a microprocessor that adjust the carburetor to get the optimum mixture of fuel and air. A light-sensitive transducer—the familiar electric eye—at the checkout stand reads the Universal Product Code on a carton of milk and sends a stream of binary pulses to a microprocessor inside the cash register. The central processor queries memory to find out the price assigned to that specific product code today, adds that price to the total bill, and waits patiently (this has all taken three thousandths of a second) for the transducer to read the next product code. A moisture sensor and a heat sensor inside the clothes dryer constantly measure the wet clothes and adjust the machinery so that the laundry will be finished in the shortest possible time.
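That checkout sequence maps almost directly onto a few lines of code. The Python sketch below is a simplified illustration, with invented product codes and prices rather than anything from a real point-of-sale system: scan a code, query "memory" for today's price, and add it to the bill.

    # Simplified cash-register logic; product codes and prices are invented.
    prices = {"070852993": 1.89,      # carton of milk
              "041196403": 2.49}      # loaf of bread

    def ring_up(scanned_codes):
        """Total a stream of Universal Product Codes read by the electric eye."""
        total = 0.0
        for code in scanned_codes:
            total += prices[code]     # look up today's price in memory
        return round(total, 2)

    print(ring_up(["070852993", "041196403"]))   # 4.38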
Microelectronics is at work inside the human body. A microprocessor that controls a speech-synthesizing chip can be connected to a palm-size keyboard that permits the mute to speak. Now under development is a chip that may be able to turn sound into impulses the brain can understand—not just an electronic hearing aid, but an electronic ear that can replace a faulty organic version. Other experiments suggest the possibility of an implantable seeing-eye chip for the blind—a light-sensitive transducer connected to a microprocessor that sends intelligible impulses to the brain.
Among the countless new applications that people dreamed up for the computer-on-a-chip was, of all things, a computer—a completely new computer designed, not for big corporations or mighty bureaucracies, but rather for ordinary people. The personal computer got its start in the January 1975 issue of Popular Electronics magazine, a journal widely read among ham radio buffs and electronics hobbyists. The cover of that issue trumpeted a “Project Breakthrough! World’s First Minicomputer Kit to Rival Commercial Models.” Inside, the reader found plans for a homemade “microcomputer” in which the Intel 8080 microprocessor replaced hundreds of individual logic chips found in the standard office computer of the day. The Popular Electronics kit was strictly bare-bones, but it gave anybody who was handy with a soldering iron the chance to have a computer—for a total investment of about $800. At a time when the smallest available commercial model sold for some $30,000, that was indeed a breakthrough. Readers sent in orders by the thousands. The other electronics magazines started offering computer kits of their own. Within a year thousands of Americans were tinkering with their own microprocessor-based personal computers.
The personal computer community in those pioneering days was a sort of national cooperative, with each new computer user eagerly sharing techniques and programs with everybody else. It was a community that lived by the old Leninist maxim, “From each according to his ability, to each according to his need.” If you needed a program that would make your homebrew computer compute square roots, and if I had the ability to write a program to do just that, I would proudly share my handiwork with you—for free. But among the first to start programming the Popular Electronics 8080-based computer was a Harvard undergraduate who had a different idea. In 1975 he wrote the first genuinely useful program for 8080-based PCs, a simple version of the BASIC programming language. And then he did an amazing thing: he charged money for it. For this, the young programmer was attacked and vilified by many of his fellow buffs. And yet there were people willing to pay $50 for the BASIC program. Sales grew so fast that the undergraduate dropped out of college, to the despair of his parents, and started a tiny business selling software for the new breed of “microcomputers.” Bill Gates named his company Microsoft.
These early computer buffs—“addicts” might be a more descriptive term—began forming clubs where they could get together for endless debates about the best approach to bit-mapped graphics or the proper interface for a floppy disk or the relative merits of the 8080, 6502, and Z80 microprocessors. At one such organization in Silicon Valley, a group called the Homebrew Computer Club, two young computer-philes, Steven Jobs and Stephen Wozniak, convinced themselves that there had to be a larger market for personal computers than the relatively small world of electronics tinkerers. The gimmick, they decided, was to design a machine that was pretty to look at and simple to use. The important thing was that the personal computer could not be intimidating; even the name of the machine would have to sound congenial. Eventually, Jobs settled on the friendliest word he could think of—“apple.” It had nothing to do with computers or electronics, but then that was the whole point. The two started a computer company called Apple.
There was a time—when computers were huge, impossibly expensive, and daunting even to experts—when the sociological savants regularly warned that ordinary people could become pawns in the hands of the few corporate and governmental Big Brothers that could afford and understand computers. This centralization of power in the hands of the computer’s controllers was a basic precept of Orwell’s 1984. But by the time the real 1984 rolled around, the mass distribution of microelectronics had spawned a massive decentralization of computing power. In the real 1984, millions of ordinary people could match the governmental or corporate computer bit for bit. In the real 1984, the stereotypical computer user had become a Little Brother seated at the keyboard to write his seventh-grade science report.
Patrick Haggerty, the visionary who had predicted that the chip would become “pervasive,” had been proven right. By the twenty-first century, microelectronics did pervade nearly every aspect of society, replacing traditional means of control in familiar devices and creating forms of human activity that had not existed before. By shrinking from the room-size ENIAC to the pinhead-size microprocessor, the computer had imploded into the basic fabric of daily life.
Haggerty lived until 1980, long enough to see his prediction starting to come true, but not to determine what the impact would be. His successors struggled to grapple with that issue. Prominent among those who were fascinated with the effect of microelectronics on human society was one of the patriarchs, Robert Noyce. “Clearly, a world with hundreds of millions of computers is going to be a different world,” he said near the end of his life. “But what will come of it? Who can use all that intelligence? What will you use it for? That’s a question technology can’t answer.”
9
DIM-I
A small but noteworthy segment of American industry, dead of competitive causes, was formally laid to rest in Washington, D.C., on a summer day in 1976. The rite of interment was a fittingly sad ceremony at the Smithsonian Institution. Keuffel & Esser Company, the venerable manufacturer of precision instruments, presented the museum with the last Keuffel & Esser slide rule, together with the milling machine the company had used to turn out millions of slide rules over the years for students, scientists, architects, and engineers. Well over three hundred years old—the rule was invented by the seventeenth-century British scholar William Oughtred, whose other great contribution to mathematics was the first use of the symbol “×” for multiplication—the slide rule had become a martyr to microelectronic progress.
“Progress,” in this case, meant Jack Kilby’s handheld calculator. In the five years after the small electronic calculator first hit the market, K & E, the largest and most famous slide rule maker in the world, had watched sales fall from about 20,000 rules per month to barely 1,000 each year. Toward the end, one of the major sources of slide rule sales was nostalgia, as museums, collectors, and photographers bought up the relics on the theory that the slide rule would soon disappear. By the mid-seventies, the handwriting was on the screen, so to speak, and even Keuffel & Esser was selling its own brand of electronic calculator. “Calculator usage is now 100 percent here,” an MIT professor told The New York Times in 1976, and that statement was essentially the obituary of the slide rule.
K & E’s last slide rule was eventually deposited in a bin on a storage shelf at the Smithsonian’s National Museum of American History. Someday it may be dusted off and put on display; at present, a special appointment must be made to view the relic. The curator says almost nobody ever bothers. Still, the slide rule lives on, in the affectionate memory (and frequently, amid the clutter in the desk drawers) of a whole generation of scientists and engineers, Jack Kilby among them. When word came in the fall of 2000 that Jack had won the Nobel Prize, the photographers who showed up at his office insisted that Kilby pose with his old slide rule, roughly the equivalent of asking Henry Ford to pose on horseback. Kilby and other engineers of his vintage recall the slide rule today with the same fond regard that an old golfer might have for a hickory-shafted mashie niblick or an auto buff reserves for the original 1964 Ford Mustang. In a requiem for the slide rule published in Technology Review, Professor Henry Petroski recalled that the Keuffel & Esser Log-Log Duplex Decitrig he bought as an undergraduate in the 1950s became his most valuable possession. “That silent computational partner [was] my constant companion throughout college and my early engineering career.”
From a practical viewpoint, though, the competition between the slide rule and the calculator was completely one-sided from the beginning. The slide rule was essentially a complicated ruler. In the most common form, it was a ten-inch-long rectangle made of ivory, wood, or plastic with three different numerical scales marked along it—one on the top edge, a different one on the bottom edge, and another in the middle. The middle section could slide right and left between the top and bottom. While a ruler is marked off in equal increments—1 inch, 2 inches, 3 inches, with each whole number exactly an inch apart—a slide rule was calibrated in the logarithms of the whole numbers. A “logarithm” is a tiny bit of mathematical magic: the logarithm of a product equals the sum of the logarithms, so adding two lengths on logarithmic scales multiplies the numbers they represent. By sliding a piece of plastic marked in logarithms, you can multiply, divide, square, cube, or find the square root of any number. So the slide rule, for three centuries, served as a simple calculator.
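Here, purely for illustration, is that trick written out in a few lines of Python (nothing a slide rule user ever typed, of course):

    import math

    # Multiply 3 by 4 the slide rule's way: add the logarithms, then turn
    # the sum back into an ordinary number.
    a, b = 3, 4
    length = math.log10(a) + math.log10(b)   # "sliding" one log scale along another
    print(10 ** length)                      # 12.000000000000002, i.e., 3 x 4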