by Andrew Keen
Along with its remarkable prescience, what is so striking about “As We May Think” is its unadulterated technological optimism. In contrast with Norbert Wiener, who later became an outspoken critic of government investment in scientific and particularly military research and who worried about the impact of digital computers upon jobs,14 Vannevar Bush believed that government investment in science represented an unambiguously progressive force. In July 1945, Bush also wrote an influential paper for President Roosevelt entitled “Science, The Endless Frontier,”15 in which he argued that what he called “the public welfare,” particularly in the context of “full employment” and the role of science in generating jobs, would be improved by government investment in technological research. “One of our hopes is that after the war there will be full employment,” Bush wrote to the president. “To reach that goal, the full creative and productive energies of the American people must be released.”
“As We May Think” reflects this same rather naïve optimism about the economics of the information society. Vannevar Bush insists that everyone—particularly trained professionals like physicians, lawyers, historians, chemists, and a new blogger-style profession he dubbed “trail blazers”—would benefit from the Memex’s automated organization of content. The particular paradox of his essay is that while Bush prophesied a radically new technological future, he didn’t imagine that the economics of this information society would be much different from those of his own day. Yes, he acknowledged, compression would reduce the cost of the microfilm version of the Encyclopaedia Britannica to a nickel. But people would still pay for content, he assumed, and this would be beneficial to Britannica’s publishers and writers.
The third member of the MIT trinity of Net forebears was J. C. R. Licklider. A generation younger than Bush and Wiener, Licklider came to MIT in 1950, where he was heavily influenced by Norbert Wiener’s work on cybernetics and by Wiener’s legendary Tuesday night dinners at a Chinese restaurant in Cambridge, which brought together an eclectic group of scientists and technologists. Licklider fitted comfortably into this unconventional crowd. Trained as a psychologist, mathematician, and physicist, he had earned a doctorate in psychoacoustics and headed up the human engineering group at MIT’s Lincoln Laboratory, a facility that specialized in air defense research. He worked closely with the SAGE (Semi-Automatic Ground Environment) computer system, an Air Force–sponsored network of twenty-three control and radar stations designed to track Russian nuclear bombers. Weighing more than 250 tons and featuring 55,000 vacuum tubes, the SAGE system was the culmination of six years of development, 7,000 man-years of computer programming, and $61 billion in funding. It was, quite literally, a network of machines that one walked into.16
Licklider had become obsessed with computers after a chance encounter at MIT in the mid-1950s with a young researcher named Wesley Clark, who was working on the TX-2, one of Lincoln Laboratory’s new state-of-the-art digital computers. While the TX-2 contained only 64,000 bytes of storage (that’s roughly a million times smaller than my current 64-gigabyte iPhone 5S), it was nonetheless one of the very earliest computers to feature a video screen and enable interactive graphics work. Licklider’s fascination with the TX-2 led him to an obsession with the potential of computing and, like Marshall McLuhan, to the belief that electronic media “would save humanity.”17
Licklider articulated his vision of the future in his now-classic 1960 paper, “Man-Computer Symbiosis.” “The hope is that in not too many years, human brains and computing machines will be coupled . . . tightly,” he wrote, “and that the resulting partnership will think as no human brain has ever thought and process data in a way not approached by the information-handling machines we know today.”18
Just as Norbert Wiener saw computers as more than calculating devices able to solve differential equations and Vannevar Bush believed they could effectively organize information, Licklider recognized that these new thinking machines were, first and foremost, communications devices. A division of labor between men and computers, he argued, could save us time, refine our democracy, and improve our decision making.
In 1958, Licklider left MIT. He first worked at a Cambridge, Massachusetts–based consulting group called Bolt, Beranek and Newman (BBN). Then, in 1962, he moved to Washington, D.C., where he took charge of both the command and control and the behavioral sciences divisions of the Advanced Research Projects Agency (ARPA), a civilian group established by President Dwight Eisenhower in early 1958 to aggregate the best scientific talent for the public good. At ARPA, where he controlled a government budget of $10 million and came to head up the Information Processing Techniques Office, Licklider’s goal was the development of new programs that used computers as more than simply calculating machines. He gave ARPA contracts to the most advanced computer centers at universities like MIT, Stanford, Berkeley, and UCLA, and established an inner circle of computer scientists that a colleague dubbed “Lick’s Priesthood” and Licklider himself imagined as “The Intergalactic Computer Network.”19
There was, however, one problem with an intergalactic network. Digital computers—those big brains that Licklider called “information-handling machines”—could only handle their own information. Even state-of-the-art devices like the TX-2 had no means of communicating with other computers. In 1962, computers still did not have a common language. Programmers could share individual computers through “time-sharing,” which allowed them to work concurrently on a single machine. But every computer spoke its own language and ran software and protocols unintelligible to other machines.
But J. C. R. Licklider’s Intergalactic Computer Network was about to become a reality. The peace that Vannevar Bush welcomed in July 1945 had never really materialized. America had instead quickly become embroiled in a new war—the Cold War. And it was this grand geostrategic conflict with the Soviet Union that created the man-computer symbiosis that gave birth to the Internet.
From Sputnik to the ARPANET
On Friday, October 4, 1957, the Soviet Union launched its Sputnik satellite into earth’s orbit. The Sputnik Crisis, as President Eisenhower dubbed this historic Soviet victory in the space race, shook America’s self-confidence to the core. America’s faith in its military, its science, its technology, its political system, even its fundamental values was severely undermined by the crisis. “Never before had so small and so harmless an object created such consternation,” observed Daniel Boorstin in The Americans, writing about the loss of national self-confidence and self-belief that the crisis triggered.20
But along with all the doom and gloom, Sputnik also sparked a renaissance in American science, with the government’s research and development budget rising from $5 billion in 1958 to more than $13 billion annually between 1959 and 1964.21 ARPA, for example, with its initial $520 million investment and $2 billion budget plan, was created by President Eisenhower in the immediate aftermath of the crisis as a way of identifying and investing in scientific innovation.
But rather than innovation, the story of the Internet begins with fear. If the Soviets could launch such an advanced technology as Sputnik into space, then what was to stop them from launching nuclear missiles at the United States? This paranoid fear of military apocalypse, “the specter of wholesale destruction,” as Eisenhower put it, so brilliantly satirized in Stanley Kubrick’s 1964 movie, Dr. Strangelove, dominated American public life after the Sputnik launch. “Hysterical prophecies of Soviet domination and the destruction of democracy were common,” noted Katie Hafner and Matthew Lyon in Where Wizards Stay Up Late, their lucid history of the Internet’s origins. “Sputnik was proof of Russia’s ability to launch intercontinental ballistic missiles, said the pessimists, and it was just a matter of time before the Soviets would threaten the United States.”22
The Cold War was at its chilliest in the late fifties and early sixties. In 1960, the Soviets shot down an American U-2 surveillance plane over the Urals. On August 13, 1961, the Berlin Wall, the Cold War’s most graphic image of the division between East and West, was constructed overnight by the German Democratic Republic’s communist regime. In 1962, the Cuban Missile Crisis sparked a terrifying contest of nuclear brinksmanship between Kennedy and Khrushchev. Nuclear war, once unthinkable, was being reimagined as a logistical challenge by game theorists at military research institutes like the RAND Corporation, the Santa Monica, California–based think tank set up by the US Air Force in 1948 to “provide intellectual muscle”23 for American nuclear planners.
By the late 1950s, as the United States developed hair-trigger nuclear arsenals that could be launched in a matter of minutes, it was becoming clear that one of the weakest links in the American military system lay with its long-distance communications network. Kubrick’s Dr. Strangelove had parodied a nuclear-armed America where the telephones didn’t work, but the vulnerability of its communications system to military attack wasn’t really a laughing matter.
As Paul Baran, a young computer consultant at RAND, recognized, America’s analog long-distance telephone and telegraph system would be one of the first targets of a Soviet nuclear attack. It was a contradiction worthy of Joseph Heller’s great World War II novel Catch-22. In the event of a nuclear attack on America, the key response would have to come from the president through the country’s communications system. Yet such a response would be impossible, Baran realized, because the communications system itself would be one of the first casualties of any Soviet attack.
The real issue, for Baran, was making America’s long-distance communications network invulnerable to a Soviet nuclear attack. And so he set about building what he called “more survivable networks.” It certainly was an audacious challenge. In 1959, the thirty-year-old, Polish-born Baran—who had only just started as a consultant at RAND, having dropped out of UCLA’s doctoral program in electrical engineering after he couldn’t find a parking spot one day on its Los Angeles campus24—set out to rebuild the entire long-distance American communications network.
This strange story has an even stranger ending. Not only did Baran succeed in building a brilliantly original blueprint for this survivable network, but he also accidentally, along the way, invented the Internet. “The phrase ‘father of the Internet’ has become so debased with over-use as to be almost meaningless,” notes John Naughton, “but nobody has a stronger claim to it than Paul Baran.”25
Baran wasn’t alone at RAND in recognizing the vulnerability of the nation’s long-distance network. The conventional RAND approach to rebuilding this network was to invest in a traditional top-down hardware solution. A 1960 RAND report, for example, suggested that a nuclear-resistant buried cable network would cost $2.4 billion. But Baran was, quite literally, speaking another language from the other analysts at RAND. “Many of the things I thought possible would tend to sound like utter nonsense, or impractical, depending on the generosity of spirit in those brought up in an earlier world,”26 he acknowledged. His vision was to use digital computer technology to build a communications network that would be invulnerable to Soviet nuclear attack. “Computers were key,” Hafner and Lyon write about Baran’s breakthrough. “Independently of Licklider and others in computing’s avant-garde, Baran saw well beyond mainstream computing, to the future of digital technologies and the symbiosis between humans and machines.”27
Digital technologies transform all types of information into a series of ones and zeros, thus enabling computer devices to store and replicate information with perfect accuracy. In the context of communications, digitally encoded information is much less liable to degrade than analog data. Baran’s computer-to-computer solution, which he viewed as a “public utility,”28 was to build a digital network that would radically change the shape and identity of the preexisting analog system. Based on what he called “user-to-user rather than . . . center-to-center operation,”29 this network would be survivable in a nuclear attack because it wouldn’t have a heart. Rather than being built around a central communication switch, it would be what he called a “distributed network” with many nodes, each connected to its neighbor. Baran’s grand design, articulated in his 1964 paper “On Distributed Communications,” prefigures the chaotic map that Jonas Lindvist would later design for Ericsson’s office. It would have no heart, no hierarchy, no central dot.
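A minimal sketch, using invented node names and links rather than anything from Baran’s actual study, helps show why such a topology survives damage: destroy the hub of a centralized, star-shaped network and nothing can reach anything else, but destroy any single node of a distributed mesh and the surviving nodes can still find a path to one another.

```python
# Illustrative sketch only: invented node names and links, not Baran's design.
def survives(nodes, links, destroyed):
    """True if every surviving node can still reach every other one after a
    single node is destroyed (links touching that node are lost as well)."""
    alive = set(nodes) - {destroyed}
    usable = [link for link in links if destroyed not in link]
    start = next(iter(alive))
    reachable, frontier = {start}, [start]
    while frontier:
        here = frontier.pop()
        for a, b in usable:
            if here in (a, b):
                other = b if here == a else a
                if other not in reachable:
                    reachable.add(other)
                    frontier.append(other)
    return reachable == alive

nodes = ["A", "B", "C", "D", "E"]

# Centralized: every node hangs off a single switch, "A".
star = [("A", "B"), ("A", "C"), ("A", "D"), ("A", "E")]

# Distributed: each node is connected to its neighbors, with no central dot.
mesh = [("A", "B"), ("B", "C"), ("C", "D"), ("D", "E"),
        ("E", "A"), ("A", "C"), ("B", "D")]

print(survives(nodes, star, destroyed="A"))  # False: losing the hub kills the network
print(survives(nodes, mesh, destroyed="A"))  # True: traffic routes around the damage
```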
The second revolutionary aspect of Baran’s survivable system was its method for communicating information from computer to computer. Rather than sending a single complete message, Baran’s new system broke the content into many digital pieces, flooding the network with what he called “message blocks,” which would travel arbitrarily across its many nodes and be reassembled by the receiving computer into readable form. The technique was later christened “packet switching” by Donald Davies, a government-funded information scientist at Britain’s National Physical Laboratory who had serendipitously been working on a remarkably similar set of ideas. The technology was driven by a process Baran called “hot potato routing,” which rapidly passed packets of information from node to node, protecting the message from interception by spies.
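The mechanics are easy to sketch in miniature. In the toy example below (the packet size and field layout are illustrative assumptions, not Baran’s or Davies’s actual specifications), a message is broken into numbered blocks, the blocks arrive in whatever order the network happens to deliver them, and the receiving end sorts them back into readable form.

```python
import random

# Toy illustration of message blocks; packet size and fields are assumptions.
def to_packets(message, size=8):
    """Break a message into (sequence_number, payload) blocks."""
    return [(i, message[start:start + size])
            for i, start in enumerate(range(0, len(message), size))]

def reassemble(packets):
    """Put the blocks back in order, however they happened to arrive."""
    return "".join(payload for _, payload in sorted(packets))

packets = to_packets("A distributed network has no heart and no hierarchy.")
random.shuffle(packets)     # blocks take arbitrary routes and arrive out of order
print(reassemble(packets))  # the receiving computer restores the readable message
```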
“We shape our tools and thereafter our tools shape us,” McLuhan said. And, in a sense, the fate of Baran’s grand idea on computer-to-computer communication that he developed in the early 1960s mirrored the technology itself. For a few years, bits and pieces of his ideas pinged around the computer science community. And then, in the midsixties, they were reassembled back at ARPA.
J. C. R. Licklider, who never stayed in a job more than a few years, was long gone, but his idea of “the Intergalactic Computer Network” remained attractive to Bob Taylor, an ex–NASA computer scientist who was now in charge of ARPA’s Information Processing Techniques Office. As more and more scientists around America came to rely on computers for their research, Taylor recognized a growing need for these computers to be able to communicate with one another. Taylor’s concerns were more prosaic than fear of an imminent Russian nuclear attack. He believed that computer-to-computer communication would cut costs and increase efficiency within the scientific community.
At the time, computers weren’t small and they weren’t cheap. And so one day in 1966, Taylor pitched the ARPA director, Charles Herzfeld, on the idea of connecting them.
“Why not try tying them all together?” he said.
“Is it going to be hard to do?” Herzfeld asked.
“Oh no. We already know how to do it,” Taylor promised.
“Great idea,” Herzfeld said. “Get it going. You’ve got a million dollars more in your budget right now. Go.”30
And Taylor did indeed get it going. He assembled a team of engineers including Paul Baran and Wesley Clark, the researcher who had gotten J. C. R. Licklider hooked on the TX-2 computer back in the fifties. Relying on Baran’s distributed packet-switching technology, the team drew up a plan for a trial network of four sites—UCLA, Stanford Research Institute (SRI), the University of Utah, and the University of California, Santa Barbara. They were linked together by devices called Interface Message Processors (IMPs), the forerunners of what we now call routers: those little boxes with blinking lights that connect up the networked devices in our homes. In December 1968, Licklider’s old Boston consultancy BBN won the contract to build the network. By October 1969 the network, which became known as ARPANET and was hosted on refrigerator-sized, 900-pound Honeywell computers, was ready to go live.
The first computer-to-computer message was sent from UCLA to SRI on October 29, 1969. The UCLA programmer was trying to type “login,” but the SRI computer crashed after he had managed only “log.” For the first, but certainly not for the last time, an electronic message sent from one computer to another was a miscommunication.
The launch of ARPANET didn’t have the same dramatic impact as the Sputnik launch twelve years earlier. By the late sixties, American attention had shifted to transformational issues like the Vietnam War, the sexual revolution, and Black Power. So, in late 1969, nobody—with the exception of a few unfashionable geeks in the military-industrial complex—cared much about two 900-pound computers miscommunicating with each other.
But the achievement of Bob Taylor and his engineering team cannot be overstated. More than Sputnik and the wasteful space race, the successful building of ARPANET would change the world. It was one of the smartest million dollars ever invested. Had that money come from venture capitalists, it would have returned many billions of dollars to its original investors.
The Internet
In September 1994, Bob Taylor’s team reassembled in a Boston hotel to celebrate the twenty-fifth anniversary of ARPANET. By then, those two original nodes at UCLA and SRI had grown to over a million computers hosting Internet content, and there was significant media interest in the event. At one point, an Associated Press reporter innocently asked Taylor and Robert Kahn, another of the original ARPANET team members, about the history of the Internet. What was the critical moment in its creation, this reporter wanted to know.
Kahn lectured the reporter on the difference between ARPANET and the Internet and suggested that it was something called “TCP/IP” that represented the “true beginnings of the Internet.”
“Not true,” Taylor interrupted, insisting that the “Internet’s roots” lay with the ARPANET.31
Both Taylor and Kahn are, in a sense, correct. The Internet would never have been built without ARPANET. Growing from its four original IMPs in 1969, the ARPANET reached 29 by 1972, 57 by 1975, and 213 by 1981, before the National Science Foundation Network (NSFNET), launched in 1985, replaced it as the Internet’s backbone; the ARPANET itself was finally decommissioned in 1990. But the problem was that ARPANET’s success led to the creation of other packet-switching networks—such as the commercial TELENET, the French CYCLADES, the radio-based PRNET, and the satellite network SATNET—which complicated internetworked communication. So Kahn was right. ARPANET wasn’t the Internet. And he was right, too, about TCP/IP, the two protocols that finally realized Licklider’s dream of an intergalactic computer network.
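At its simplest, the idea behind TCP/IP was to give all of those dissimilar networks a common datagram to carry, with gateways unwrapping one network’s framing and rewrapping the datagram for the next. The sketch below is a loose illustration of that layering idea; the frame and field names are invented for the example and bear no relation to the real protocol formats.

```python
# Loose illustration of internetworking; names and structures are invented.
def make_datagram(source, destination, payload):
    """The common currency every network agrees to carry (roughly IP's role)."""
    return {"src": source, "dst": destination, "data": payload}

def send_over(network_name, datagram):
    """Wrap the datagram in whatever framing this particular network uses."""
    return {"network": network_name, "frame": datagram}

def gateway(frame, next_network):
    """Unwrap the datagram from one network and hand it on to the next."""
    return send_over(next_network, frame["frame"])

# The same datagram crosses a radio network, a satellite link, and the ARPANET
# unchanged, which is what lets dissimilar networks behave as a single internet.
dgram = make_datagram("ucla-host", "sri-host", "LOGIN")
hop = send_over("PRNET", dgram)
hop = gateway(hop, "SATNET")
hop = gateway(hop, "ARPANET")
print(hop["frame"] == dgram)  # True: the end-to-end datagram arrives untouched
```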