Orwell's Revenge

by Peter Huber


  Coax and microwaves would transform more than the telephone industry. John Walson, Sr., of Mahanoy City, Pennsylvania, recognized that coaxial cable was the perfect medium for connecting homes in rural areas to a large master antenna. He began work on his first “community antenna” television network in 1948. Others took up the idea. Antennas were placed on hilltops, on tall buildings, and on masts. Distant signals were picked up and piped to viewers over coaxial cable. Before long, antenna operators began using microwave systems to beam in television signals to the master antennas from still farther afield. Thus, almost without design, cable television was invented. By 1955 there were 400 such systems in operation, serving 150,000 subscribers.

  Still other Bell Labs scientists were leading research in yet another sphere. The technology of telephone exchanges had languished for some years after the first electromechanical switches began to be used. (As late as 1951, operators were still being used to connect almost 40 percent of domestic long-distance calls.) In 1936, when Orwell was publishing his third novel, Keep the Aspidistra Flying, Bell Labs’ director of research first discussed with physicist William Shockley the possibility of creating electronic telephone exchanges. Electronic switching, however, required a better amplifier than vacuum tube technology could provide. While each triode vacuum tube was capable of operating as a switch in a telephone exchange, an exchange needed thousands of such switches; tubes used too much power, and generated too much heat, to be packed together in the numbers required. Shockley and his Bell Labs colleagues Walter Brattain and John Bardeen set off in search of something better.

  They found it in 1947, as Orwell was completing the first draft of 1984. What they found was the transistor. A Nobel prize followed in 1956—the same year as the Hush-A-Phone ruling, the same year that IBM agreed for the second time to let others into the punch card business.

  The transistor, like the vacuum tube it displaced, was a switch, but one that was compact and energy-efficient. Switches are the heart of a telephone exchange, for it is by opening and closing an appropriate set of switches that a single continuous line is created between Romeo in San Francisco and his Juliet in New York City. Switches are also the heart of a computer: by shifting on and off like beads moving on an abacus, switches can keep track of numbers, and numbers can keep track of everything. The first-generation computers in 1956 were still monstrous devices built around huge racks of vacuum tubes.
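
  A toy sketch, in Python and with invented names, of the idea in that last image: a bank of on/off switches keeping track of a number.

    # Purely illustrative: a row of on/off "switches" (bits), most significant first.
    def switches_to_number(switches):
        """Read a list of booleans as the integer the switch settings represent."""
        value = 0
        for on in switches:
            value = value * 2 + (1 if on else 0)
        return value

    print(switches_to_number([False, False, False, False, False, True, False, True]))  # 5
    print(switches_to_number([True] * 8))                                              # 255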

  The new transistor soon came to the notice of Jack Kilby, an engineer who had been designing compact systems for hearing-aid companies. In 1958, Kilby moved to the Dallas headquarters of Texas Instruments and had a brainstorm. Transistors were being made on silicon by then. Why not make resistors and capacitors too on the same medium, and thus manufacture entire circuits all at once, in one process, on one substrate? Why not, in other words, manufacture an “integrated circuit”? Robert Noyce, one of the founders of Fairchild Semiconductor, soon radically improved on Kilby’s design. In 1968, Noyce and a colleague set up their own new company, Intel. Intel would eventually become master of the microprocessor, the computer on a chip, which—like the audion before it—would fundamentally transform all of telephony, computing, and broadcast.

  • • •

  When microwaves, satellites, and developments in computers, radios, modems, fax machines, telephone handsets, and all the other varied progeny of the transistor began to make new competition feasible in the 1950s, the competitors arrived, first in ones and twos, then in legions, demanding permission to provide equipment and services around the periphery of the Bell empire.

  At first Bell responded along the familiar Hush-A-Phone lines. By the 1960s, however, the pressure from the market had grown too intense for the FCC to ignore. Slowly, grudgingly, the FCC retreated from the Hush-A-Phone mind-set and began authorizing all forms of electronic terminal equipment on private premises. Beginning with its Carterfone ruling in 1968 and ending in the late 1970s, the FCC eliminated all “foreign attachment” prohibitions from Bell’s tariffs. Standard interfaces between customer equipment and the network were established. With the FCC’s belated acquiescence, the market had won a historic victory over the monopoly. The way had been opened for a complete line of competitive products that would interconnect with the network on customer premises. To put the matter in Orwell’s terms, it was now official Ministry policy that Bell would create, maintain, and support “loose ends and forgotten corners” on its network. The network now had something that it had not had before: jacks that any humble citizen could plug into, or disconnect from, without a by-your-leave from Bell or the FCC.

  Virtually everything that was to follow in the dismantling of the Bell monopoly was a replay of Carterfone, a process of creating new “loose ends” on the network, new interfaces for the market. It required two more decades of regulatory and antitrust handwringing, but the rules permitting foreign attachments to the network created the market for enhanced services as well. If customers could connect their own telephones and answering machines to the network, private entrepreneurs could connect their own electronic publishing, data processing, voice mail, or dial-a-porn services too. All of these services simply involved connecting new equipment or new people to the existing wires.

  Competing long-distance services developed in exactly the same manner. In the 1940s, long-distance service was provided exclusively over wires, and the same basic economics that seemed to preclude competition in local service applied equally to long-distance service. The development of microwave and satellite technologies radically changed that picture, making competition both practical and inevitable.

  Initially, the pressure for competition came from large businesses, which sought to build microwave links solely to satisfy their own private communications needs. Then, in 1963, a small startup firm, Microwave Communications, applied to the FCC to construct a microwave line between St. Louis and Chicago. MCI told the FCC it would offer business customers “interplant and interoffice communications with unique and special characteristics.” In fact, what MCI had in mind was head-to-head competition against Bell’s long-distance operations.

  Other MCIs came clamoring at the FCC’s door, and the commission came under intense pressure to establish general conditions for entry by new long-distance carriers. In 1980 it formally adopted an open entry policy for all interstate services. It was Carterfone again, but this time on what engineers call the “trunk side” (as opposed to the “line side”) of the local exchange.

  Then, with almost no warning, a new generation of radio services burst on to the scene. When it first allocated frequencies for land mobile services in 1949, the FCC granted separate blocks to telephone companies and to “miscellaneous” or “limited” common carriers. The commission consistently maintained this procompetitive policy thereafter. When it began to issue cellular telephone licenses in the early 1980s, the FCC allocated two licenses for every service area, prohibited any licensee from owning a significant interest in both licenses, and thereafter encouraged the development of other radio technologies capable of providing directly competitive services. Most important, it required all landline telephone companies to provide unaffiliated mobile concerns with interconnection equal in type, quality, and price to that enjoyed by affiliates. Thus, a third set of loose ends to the network was created, this time at the interface between the traditional and still dominant landline telephone company and the new, much more competitive radio carriers.

  Developing at the same time was an eclectic array of new telecommunications exchanges and devices. Before the advent of the transistor, both computers and telephone exchanges had required large, cumbersome, costly, custom-configured, labor-intensive centers. With the new electronics, much more powerful telephone switches and computers could be built into more compact and reliable units—minicomputers and “private branch exchanges,” which, as small, privately operated telephone exchanges, are telephony’s equivalent to the desktop computer.

  Larger institutions—hospitals, universities, corporate headquarters, and so on—had once relied on a few centralized mainframes to do their computing, and on “Centrex” services handled through public telephone exchanges, even for internal telephone calls. Now these same functions could be—and rapidly were—located in stand-alone units on private premises. Competing manufacturers of small, private exchanges and minicomputers proliferated. By the late 1970s, even Bell was systematically downgrading Centrex service and migrating its larger customers to private exchanges.

  This dispersion of electronic intelligence created a host of new centers, held in private hands, capable of communicating by wire, and in need of connections to do so. As had happened almost a century earlier with the rise of the telephone itself, the new talking boxes created new demand. What was critically different about the new-generation local exchanges, whether true private exchanges or communicating computers, was that they were owned and controlled not by a small number of quasi-governmental, monopoly telephone companies but by a larger number of private, competitive institutions. For the most part, these private owners welcomed competitive bidding for their telecommunications needs. The telephone had created the original demand for a telephone network almost a century before. Now a new generation of transistor-based electronic equipment in private hands was creating demand for the kinds of competing long-distance services that MCI proposed to offer.

  At the same time, the transistor was also fulfilling its original mission, which was to transform the public telephone exchange: a new generation of electronic switches was deployed in the 1960s and 1970s. These switches were far more efficient, powerful, and flexible than the old switches they replaced. They could support levels of interconnection—and thus offer customers a variety of choices—that would have been prohibitively slow, complex, and unreliable in the days when switching was accomplished by human operators or electromechanical devices. As MCI built up its business in the 1970s, the company resolved to carry competition back up the network—to compete not just in connecting private computers and switches but also between the public exchanges operated by the Bell and other public telephone companies. The capabilities of the new electronic switches made that aspiration quite realistic; as every telephone user knows today, such switches can be programmed with databases to route traffic automatically, Hatfield’s to Bell, McCoy’s to MCI, effortlessly and invisibly whenever either places a long-distance call.
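
  A minimal sketch of that kind of lookup, in Python and with invented line numbers and carrier names, meant only to illustrate the idea of a presubscription table inside an electronic switch.

    # Illustrative only: a toy carrier-selection table of the sort such a switch
    # might consult; every name and number here is invented.
    PRESUBSCRIBED_CARRIER = {
        "555-0100": "Bell",  # Hatfield's line
        "555-0199": "MCI",   # McCoy's line
    }

    def route_long_distance(calling_line, called_number):
        """Hand the call to the long-distance carrier the caller has chosen."""
        carrier = PRESUBSCRIBED_CARRIER.get(calling_line, "Bell")  # fall back to a default carrier
        return f"Hand off call to {called_number} via {carrier}"

    print(route_long_distance("555-0100", "212-555-0142"))  # routed via Bell
    print(route_long_distance("555-0199", "212-555-0142"))  # routed via MCI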

  The new array of players assembled around loose ends of the Bell network demanded interconnection on truly equal terms. When Bell declined to provide it, the newcomers responded with a blizzard of FCC petitions and private antitrust suits. Viewed in historical context, the federal government’s antitrust suit that produced the final breakup of the Bell System was little more than a footnote to what had already unfolded in the market and at the FCC. By late 1981, AT&T was ready to throw in the towel. It cut a deal with the federal antitrust prosecutors. The final breakup was scheduled for January 1, 1984.

  • • •

  For a time it appeared that Big Blue’s hegemony would collapse long before Bell’s.

  Computers based on vacuum tubes rather than transistors had begun to displace tabulating machines in government and defense agencies during World War II. Two University of Pennsylvania scientists, J. Presper Eckert and John W. Mauchly, were the leading pioneers in the new field. Together they built ENIAC (Electronic Numerical Integrator and Computer), generally recognized as the first electronic computer.

  Watson saw the ENIAC but failed to recognize its importance. In 1951, however, AT&T licensed the basic transistor patents to other companies. Philco, RCA, and General Electric quickly developed computers that were much more advanced than IBM’s. In 1950, Remington Rand acquired the company founded by Eckert and Mauchly, and the following year it introduced the UNIVAC. IBM nonetheless continued to gain in the market, on the strength of its sales force and established business base. Before long, IBM had grasped the power of the new electronic technology and mounted a crash effort to recapture its technological lead. By the 1960s, there were eight major manufacturers of mainframe computers, but IBM was so dominant once again that the group came to be called Snow White and the Seven Dwarfs.

  When IBM announced its new System/360 on April 7, 1964, it did so simultaneously in sixty-three U.S. cities and fourteen foreign countries. During the first two years of System/360, 9,013 computers (three times original projections) were ordered. By 1967, IBM 360 installations accounted for an estimated 80 percent of all new computer capacity in the world and approximately 70 percent of new computer installations in the major markets of Britain, France, Germany, Italy, and the United States. IBM’s revenues mushroomed, from $1.7 billion to $7.5 billion.

  The same economics that had secured IBM’s market dominance in the era of punch cards and tabulating machines had apparently come into play again. Once a customer had committed to IBM, a switch to another vendor entailed prohibitive new investment in applications, training, and software. With the largest base of customers, IBM also offered the largest library of application programs. At the same time, IBM did all it could to freeze out the competition. The strategy was the same as Bell’s. Computers then, as now, consisted of central processors, storage devices (disk drives, tapes, computer cards), and input-output devices (screens, printers, keyboards, and so on). IBM rigidly adhered to a policy of closed, proprietary architectures, a policy readily enforced when all its machines were supplied (like Bell’s) only under lease. Would-be competitors were eager to sell “plug-compatible” peripherals—card readers, printers, disk drives, monitors, and so on—that would hook into the IBM machines. IBM was determined that they wouldn’t. There were to be no loose ends on the IBM mainframe—none at all.

  Outside IBM, there was much disagreement about precisely what should be done. In retrospect the debates seem absurd, but they were perfectly serious at the time. Many pundits still believed in Grosch’s law, according to which the efficiency and power of an electronic computer would increase steadily with its size. You would always get more total computing power at less cost, it was thought, by building one larger computer rather than two smaller ones.
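
  A small worked example of the arithmetic behind that belief, in Python, assuming the common statement of Grosch's law that computing power grows as the square of a machine's cost (the scaling constant here is invented).

    # Grosch's law, as commonly stated: power is proportional to the square of cost.
    def grosch_power(cost, k=1.0):
        """Computing power under Grosch's law; k is an arbitrary scaling constant."""
        return k * cost ** 2

    budget = 10.0                             # spend it on one machine, or split it in two
    one_big = grosch_power(budget)            # 100 units of power
    two_small = 2 * grosch_power(budget / 2)  # 2 * 25 = 50 units of power
    print(one_big, two_small)                 # on this reasoning, the single large machine always wins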

  The implication was as obvious as it was ominous: computing was destined to end up in one or two machines, or perhaps a very small cluster of machines, located in a few, huge, central buildings, buildings that—every Orwellian expected—were bound to tower vast and white above the grimy landscape, enormous pyramidal structures of glittering white concrete, soaring up, terrace after terrace, three hundred meters into the air.

  One possible solution would have been to unleash Bell and IBM to compete head to head. Bell, after all, had invented the transistor, the key to all electronic computers of the day; moreover, Bell was already manufacturing a lot of very powerful computers for its own uses. But as the government lawyers saw it, Bell was already too big and powerful for anyone’s good, and in other arenas the entire government strategy had been to quarantine Bell from entering new markets like computing.

  So instead of letting another established firm compete against IBM, the government resolved for a time to have IBM compete against itself. The objective: break up IBM. This required a mammoth antitrust suit. The following thirteen years of litigation would represent the slowest, most expensive, paper-clogged, and useless antitrust lawsuit ever undertaken by the federal government—an operation as monstrous and inefficient in its own way as the computer monopolist that it targeted. The suit came to be known as the Antitrust Division’s Vietnam.

  The agreement to break up AT&T was announced on January 8, 1982, the same day that the federal government agreed to dismiss its case against IBM. One of the eight fragments of the old Bell System— the surviving AT&T—was to be freed from all antitrust quarantines and so permitted to enter the computer business. Intel was already over a decade old. Apple was growing fast. And IBM had just introduced a brand-new machine, based on an Intel microprocessor. Big Blue’s new machine—its “personal computer”—was small and beige.
  • • •

  The small beige machine was made possible by a single device: the integrated circuit, the microprocessor, the computer on a chip. The integrated circuit continued the transistor’s restructuring of telephony but accelerated the pace of change a thousand-fold.

  Intel, alongside other chip developers like Motorola and Texas Instruments, had taken a familiar device, the transistor, and made it smaller. Transistors were shrunk from the size of a fingernail to the size of a hair, to the size of a microbe and smaller. The power of the microprocessor grew as fast as its components shrank.

  The economics of producing electronic equipment shifted dramatically. Designing a single, advanced microprocessor may require a billion-dollar investment. Thereafter, any number of copies can be stamped out at very little cost. The technology thus triggered an efflorescence of new desktop and office systems, as well as consumer electronics. All depended on the same fundamental component: the transistor. All operated digitally. All could be mass produced at little cost once the electronics for the first unit had been designed.
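
  A toy calculation, in Python and with invented figures, of how a large one-time design cost melts away per chip as volume grows.

    # Invented figures, purely to illustrate fixed-cost amortization.
    design_cost = 1_000_000_000   # one-time investment to design the chip
    marginal_cost = 5.0           # cost to stamp out each additional copy

    for volume in (10_000, 1_000_000, 100_000_000):
        per_unit = design_cost / volume + marginal_cost
        print(f"{volume:>11,} chips -> ${per_unit:,.2f} per chip")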

  The result has been a radical technological transformation, characterized by two seemingly contradictory trends: fragmentation and convergence.

  The first major trend today continues to be one of fragmentation. The once-centralized network is becoming decentralized. “Terminals”—dumb end points to the network—are giving way to “seminals”—nodes of equal power that can process, switch, store, and retrieve information with the agility that was once lodged exclusively in a few fortified centers’ massive switches and mainframe computers. Residences and offices across the country are rapidly being equipped with a new generation of telephones: computers, facsimiles, electronic burglar alarms and meter readers, remote medical monitoring systems, and, soon, high-definition digital televisions. VCRs and videotapes are now, by a wide margin, the dominant medium for distributing movies. The “picturephone” that the Bell System unsuccessfully attempted to market in the 1960s is already owned by millions of Americans. It is called a video camera.

 
