by Tim Wu
Licklider came to believe that the computer would realize its deepest potential in linking man and machine. He was interested in all forms of technologically augmented human life—what science fiction writers call cyborgs, and what Sigmund Freud meant when he described man as a “prosthetic god.”*
The basic story of the Internet’s early development has been told many times, but our specific concern is to understand what was the same and what was different about this network as compared with radio, television, and the telephone system. Licklider and other early Internet founders believed that they were building an information network like none other. Some of its innovations, like packet switching, were obviously radical even in their day. Yet as we have seen time and time again, one generation’s radical innovation is the next generation’s unyielding dinosaur.
In this chapter, we begin the pursuit of a central question: Was the Internet truly different, a real revolution? We don’t yet know the answer. But here, at its origins, we can gain the first inklings of what might account for that sense of singularity. The evidence boils down to this: the computer and the Internet attempted to give individuals a degree of control, of decision-making power, unprecedented in a communications system. These were systems whose priority was human augmentation rather than the system itself. The aim, accordingly, was to create a decentralized network, and one that would stay that way.
THE NETWORK AND THE COMPUTER
To understand how far any notion of the Internet in the 1960s might be from our present experience, consider how far the machines it was meant to link were from any we would call by the same name today. Computers were fearsome creatures, the size of rooms, jealously guarded by companies and government agencies. Their main function was mass-produced arithmetic—“data processing.” The archetype was the IBM AN/FSQ-7, the largest computer in human history, an electronic version of the Flying Fortress. As the scholar of media Howard Rheingold describes it, “the computers weighed three hundred tons, took up twenty thousand feet of floor space, and were delivered in eighteen large vans apiece. Ultimately, the air force bought fifty-six of them.”3
There could be no Internet as we know it without a concept of the computer as something beyond an adding machine—this had to come first. The philosophies of the Internet and the computer are so intertwined that it is difficult to discuss just one of the two. They are in the same relationship as the telephone and its wires, or the film industry and its theaters: one could not advance without the other.
In 1960 Licklider wrote his famous paper, “Man-Computer Symbiosis.” Until then, if the dominant real-life vision of computing was IBM’s giant abacus, the prevailing imaginative alternative was the cliché of 1950s science fiction and some of the day’s more outlandish computer science speculation. It was an autonomous machine whose spirit would endure in the robot character of Lost in Space and the droids of Star Wars. Theorists of Artificial Intelligence foresaw intelligent machines that could walk, talk, and help us with household tasks like washing dishes or greeting guests. This vision didn’t suffer the problem we’ve identified as technological shortsightedness. On the contrary, it was just too far out.
Licklider’s idea was different. “The hope,” he wrote, “is that, in not too many years, human brains and computing machines will be coupled together very tightly, and that the resulting partnership will think as no human brain has ever thought.…” The idea is one we take for granted now: computers would be used by humans in the process of thinking, as analytic aids rather than as calculators (the status quo) or as surrogates (the stuff of fantasy).4
The idea wasn’t Licklider’s alone. As with other conceptual leaps we’ve described, several individuals made it at about the same time. Ten years before Licklider wrote his paper, for instance, a young engineer named Douglas Engelbart was pondering what he might do with his life. He was recently married yet felt himself lost, an idealist in search of a meaningful contribution. One evening in 1950 he was struck with a powerful vision: a general purpose machine that might augment human intelligence and help humans negotiate life’s complexities. John Markoff, who has documented Engelbart’s life carefully, describes the vision in some detail: Engelbart “saw himself sitting in front of a large computer screen full of different symbols. He would create a workstation for organizing all of the information and communications needed for any given project.”5
Engelbart’s ideas were similar to Licklider’s, if a bit further along in their development. But neither was as yet close to describing how one might practically wed human and computer capacities. Eventually Engelbart’s work caught Licklider’s attention, and with that, ARPA funding flowed to Engelbart to create the “Augmentation Research Center” at the Stanford Research Institute in Menlo Park, California. His immediate objective was finding better ways to connect the human brain to the power of a computer—what we now call “interfaces.”
It’s easy to forget that computers once took all of their questions and delivered all of their answers in numerical form. The basic ideas of a screen, a keyboard, and, most famously, a mouse are owed to Engelbart, who was the first to model those concepts, however crudely. He invented what would be called the “personal computer paradigm,” and even if there is much more to the history of the PC than what he did, the degree to which our present matches his drawing-board vision of 1950 is a little unnerving—every day billions at home or at work sit down in front of something that is essentially what he imagined that evening.*
Today, not only that interface but also that notion of what a computer is for—the vision Engelbart shared with Licklider—reigns supreme. Nearly every program we use is a type of thinking aid—whether the task is to remember things (an address book), to organize prose (a word processor), or to keep track of friends (social networking software). This purpose of personal computing would go hand in glove with the idea of computer network communication. Both were radical technology; and fittingly, both grew out of a kind of counterculture.*
AT&T AND THE INTERNET
If, in the 1960s, computing was dominated by the giant data processing mainframe, the first (and last) word in communications and networking was still AT&T. AT&T owned the physical long distance lines that connected cities. If you wanted to send information from place to place, it was to AT&T you turned. Thus, improving communications meant improving the Bell system. Toward this end, a man named Paul Baran would spend years of his life trying to persuade AT&T to adopt the networking technologies that would ultimately underlie the Internet.6
In the early 1960s, Baran, a researcher at the RAND Corporation, was thinking about how America could survive a nuclear attack. His goal, as he wrote at the time, was to “do all those things necessary to permit the survivors of the holocaust to shuck their ashes and reconstruct the economy.” Chief among his concerns were communications systems. Having concluded that AT&T’s long distance system was vulnerable to a Soviet strike, Baran came up with an ingenious means to harden the system. The idea was to try to turn the telephone infrastructure, a point-to-point system, into a highly redundant network—that is, one with various paths between any two points, so that if one route were taken out, the others would survive.† Baran’s inspiration was the human brain, which can sometimes recover from damage by reassigning lost functions to neural paths still intact. In order for his approach to work, Baran envisioned breaking up every message into tiny pieces, which would be sent over the network by any path available at a given moment. Today we call Baran’s concept “packet networking,” and it is the basis of almost every information network in the world.
Circuit switching and packet routing
These diagrams distinguish Baran’s idea from AT&T’s. On the AT&T network, a centralized switch picks a single route (a “circuit”) between two points, A, B, or C. On the Baran network, the packets of information can travel between any two points in multiple ways. There are, as pictured, three different ways between A and B, for instance.
The key, however, is understanding how these different types of networks embody different systems of decision making. The AT&T system on the left is centralized, or hierarchical: the switch at the center decides how A will reach B. Baran’s system, by contrast, features multiple decision makers of equal weight. Each “router” must help decide how information should get from A to B, and as you can see, there are three different paths. Hence, in the same way that Licklider’s computer was meant to empower the individual, Baran’s packet networks contemplated a network of equals.
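Baran’s scheme lends itself to a few lines of modern code. The sketch below is purely illustrative, not anything Baran designed or wrote: the five-node topology, the node names, and the find_path and send helpers are all invented for this example. A message is broken into numbered packets, each packet takes whatever route happens to be available, and sequence numbers let the destination reassemble the original, even after two of the three routers have been knocked out.

```python
# A minimal sketch of packet networking over a redundant mesh.
# The topology and helper functions are invented for illustration.
import random
from collections import deque

# Three distinct routes connect A and B, as in the diagram.
NETWORK = {
    "A":  ["R1", "R2", "R3"],
    "R1": ["A", "B"],
    "R2": ["A", "B"],
    "R3": ["A", "B"],
    "B":  ["R1", "R2", "R3"],
}

def find_path(src, dst, down=frozenset()):
    """Breadth-first search for any surviving route, skipping failed routers."""
    queue = deque([[src]])
    while queue:
        path = queue.popleft()
        node = path[-1]
        if node == dst:
            return path
        # Shuffle the neighbors: any available path is a good path.
        for nxt in random.sample(NETWORK[node], len(NETWORK[node])):
            if nxt not in path and nxt not in down:
                queue.append(path + [nxt])
    return None  # no route survives

def send(message, src="A", dst="B", down=frozenset()):
    # Break the message into numbered packets; each finds its own way.
    packets = list(enumerate(message))
    random.shuffle(packets)  # packets may travel, and arrive, out of order
    arrived = [(seq, char) for seq, char in packets
               if find_path(src, dst, down) is not None]
    # Reassembly: sequence numbers restore the original order.
    return "".join(char for _, char in sorted(arrived))

print(send("shuck the ashes"))                     # all routers up
print(send("shuck the ashes", down={"R1", "R2"}))  # survives two failures
```

Note what is missing here: a central switch. Each router decides only the next hop, and that dispersal of decision making is exactly what distinguished Baran’s network from AT&T’s.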
Perhaps it is a philosophical impulse that helps explain AT&T’s lack of enthusiasm for Baran’s ideas. As Katie Hafner and Matthew Lyon write in Where Wizards Stay Up Late, packet networking struck AT&T officials as “preposterous.” “Their attitude,” Baran said, “was that they knew everything and nobody outside the Bell System knew anything. So here some idiot comes along and talks about something being very simple, who obviously doesn’t know how the system works.” AT&T even went to the trouble of hosting a series of seminars to explain to Baran and others how the Bell system operated, and why a packet network was impossible, which is to suggest that there is more to their demurral than just the usual myopia. Ideologically, AT&T was committed to a network of defined circuits, or reserved paths, controlled by a single entity. Based on the principle that any available path was a good path, the packet concept admitted, however theoretically, the possibility of a network with multiple owners—an open network. And such a notion was anathema to AT&T’s “ONE COMPANY, ONE SYSTEM, UNIVERSAL SERVICE.”
Baran would spend four years at RAND trying to persuade AT&T to build the world’s first packet network, which he saw as simply an advance, not a threat. Yet even with the Air Force offering to pay for an experimental network, AT&T would not be budged. Baran would have to look elsewhere to try out his ideas.
COMMUNICATIONS
“In a few years,” Licklider wrote in 1968, “men will be able to communicate more effectively through a machine than face to face.” If we owe the computer’s interface more to Engelbart’s vision, we owe its status as communications instrument par excellence more to Licklider’s. It was his conviction that one day, the computer would displace the telephone as the dominant tool for human interaction. He was the first to see the great coming rivalry between the telephone and the computer.
In his 1968 paper “The Computer as a Communication Device,” Licklider and a fellow scientist, Robert Taylor, made the following prediction: “We believe that we are entering a technological age in which we will be able to interact with the richness of living information—not merely in the passive way that we have become accustomed to using books and libraries, but as active participants in an ongoing process, bringing something to it through our interaction with it, and not simply receiving something from it by our connection to it.”
It is an astonishingly prescient comment, though it might have amounted to far less had Licklider not been appointed, by the Kennedy administration, to direct ARPA funding at the Pentagon in 1962. That position allowed him to direct capital toward individuals whose work he believed could make his great multiaccess network a reality—Engelbart, as we have seen, and also most of the other fabled fathers of the Internet.
Here, then, in the story of its origins, is the case that the Internet was different, that fundamentally different and indeed radical ideas were in play. But if computers had the potential to revolutionize communications, the challenges of putting those ideas into effect remained vast. Computer communications required the development of a common language, an effort we shall follow in chapter 15, and some way to reach the masses. Both of those problems would take decades to solve—and so not until the 1990s would the seed bear fruit.
* Freud wrote, “Man has, as it were, become a kind of prosthetic god. When he puts on all his auxiliary organs he is truly magnificent; but those organs have not grown on to him and they still give him much trouble at times.” In the 1960s, Licklider imagined a great universal network by which the minds of all humanity might be linked via computers. This strange idea was the basis of what we now call the Internet.
* Any who doubt the prescience of his vision are invited to watch a video of the model PC that Engelbart demonstrated in 1968. Some of the components are a bit off—the keyboard, for instance, is not QWERTY—but there is no mistaking the basic form of the computer that would not become commonplace before the arrival of the Apple Macintosh and the first browsers in the mid-eighties and early nineties.
* Concurrent with Engelbart’s design efforts was his participation as a subject in trials to evaluate the effects of LSD on human creativity.
† This, by the way, is the source of the commonplace that the Internet was designed to survive a nuclear attack.
CHAPTER 13
Nixon’s Cable
In the late 1960s, Ralph Lee Smith was at home one afternoon in New York’s Greenwich Village when the telephone rang. It was an editor at The New York Times Magazine, well known to Smith, a freelance writer and frequent contributor to all the leading magazines in town. Familiar with Smith’s progressive social criticism, including his ably researched books The Health Hucksters (an exposé of food and drug advertising) and At Your Own Risk: The Case Against Chiropractic, the editor wanted to suggest a subject: “cable television.” Smith had never heard of it—in fact, he didn’t even own a TV. But he thanked the editor for thinking of him.
Deciding the subject was worth a sniff, Smith began to talk to engineers, futurists, and government officials, and he became tremendously excited. All who spoke to him described the coming technology as having near-utopian promise for social liberation. Cable, they believed, might well prove more revolutionary than the printing press. With the capacity in theory to bring an unlimited number of channels of information into the home, it had the potential to heal American politics, revive local communities, and offer every American direct access to the world’s knowledge and wisdom: “a communications center of a breadth and flexibility to influence every aspect of private and community life.”1
Smith became a believer. The idea of a technology that might democratize information resonated with the values of late-1960s New York, a folk music hotspot where only recently the city government had managed to vote down such imperious designs as Robert Moses’s Lower Manhattan Expressway. Smith wrote a manifesto called The Wired Nation, which won awards as a magazine article and later a book. Smith thus suddenly found himself at the vanguard of a visionary—and today mostly forgotten—movement to promote cable television as a technological savior of liberal values.2
By the 1940s the major media industries had all assumed their stable, apparently invincible forms; they seemed to be permanent fixtures of the American landscape, like the Democratic Party or Mount Rushmore. NBC and CBS ruled broadcasting. AT&T ran the telephone system. The Hollywood studios controlled film. Each monopoly or oligopoly had been blessed by the government in one way or another. And within two decades each would lie in the ruins of its former self.
The empires of AT&T and the Hollywood studios would be broken up by court orders. But broadcasting’s fate would be different. The stations, and ultimately the networks, would be natural victims of the Cycle. Cable was the first disruptive innovation since the war, and one that would shred the prevailing power structure of television. Ralph Lee Smith was thus the 1970s avatar of what to us is a familiar figure: the idealist who helps to usher a closed, established industry into a wide-open, expansive phase.
What few people know is that Ralph Smith’s arguably most important ally in this power-to-the-people crusade was President Richard Nixon. In the 1960s, cable was a technology serving small towns and remote localities, barred by federal law from expansion. It seemed doomed to being but the handmaid of broadcasting. Indeed, in another version of history, the cable networks would have emerged only as offshoots of NBC, CBS, and ABC, as has been the fate of cable in other major economies, among them Japan and Germany. But the Nixon administration had a different vision for cable. Nixon’s young head of communications policy, Clay Whitehead, ran the Cabinet Committee on Cable, which foresaw a life for the medium as a highly deregulated common carrier. And it was Nixon’s FCC that would launch the reforms to set cable free, for reasons somewhat more complicated than the general advancement of freedom.
CABLE IMPRISONED
In the late 1960s, cable had a distinctive identity. It was a scrappy industry of small-town entrepreneurs in perpetual trouble with the law, something akin to the file sharing sites of the early twenty-first century—a band of outsiders, certainly; outlaws, maybe.3
In the gleaming media metropolis, cable was the dive bar. It attracted shady characters, and its function was, one might say, parasitical. Cable founders were offering something that was hardly new or bold. The concept was called Community Antenna Television, a system to capture and retransmit TV to places that the broadcast signal didn’t reach. As with broadcast radio in the 1910s, the origins of cable television are obscure, because it was the work of amateurs.
In the late 1940s or so, men like John Walson, the owner of an appliance store in the Pennsylvania mountains, began erecting giant antennas to “catch” the weak signals and then transmit them over wires to paying customers. As with the farmer’s telephone in the first years of the twentieth century, cable was a do-it-yourself business for anyone with will and wires. It was a genesis and a business model that would stamp the industry ever after as pugnacious and cut-rate, in sharp contrast to the affected regal bearing of NBC (the Peacock Network) and CBS (the Tiffany Network), to the ultra-establishmentarian self-importance of the Bell system, or to the glamour of classic Hollywood.