And none of the resources or results was easily shared. If the scientists doing graphics in Salt Lake City wanted to use the programs developed by the people at Lincoln Lab, they had to fly to Boston. Still more frustrating, if after a trip to Boston people in Utah wanted to start a similar project on their own machine, they would need to spend considerable time and money duplicating what they had just seen. In those days, software programs were one-of-a-kind, like original works of art, and not easily transferred from one machine to another. Taylor was convinced of the technical feasibility of sharing such resources over a computer network, though it had never been done.
Beyond cost-cutting, Taylor’s idea revealed something very profound. A machine’s ability to amplify human intellectual power was precisely what Licklider had had in mind while writing his paper on human-machine symbiosis six years earlier. Of course, Licklider’s ideas about time-sharing were already bearing fruit at universities all over the country. But the networking idea marked a significant departure from time-sharing. In a resource-sharing network, many machines would serve many different users, and a researcher interested in using, say, a particular graphics program on a machine two thousand miles away would simply log on to that machine. The idea of one computer reaching out to tap resources inside another, as peers in a collaborative organization, represented the most advanced conception yet to emerge from Licklider’s vision.
Taylor had the money, and he had Herzfeld’s support, but he needed a program manager who could oversee the design and construction of such a network, someone who not only knew Licklider’s ideas but believed in them. This person had to be a first-rate computer scientist, comfortable with a wide range of technical issues.
How it was to be achieved didn’t concern Taylor greatly, as long as the network was reliable and fast. Those were his priorities. Interactive computing meant you’d get a quick response from a computer, so in the modern computing environment it made sense that a network also should be highly responsive. And to be useful, it had to be working anytime you needed it. Whoever designed such a network needed to be an expert in telecommunications systems as well. It wasn’t an easy combination to find. But Taylor already had someone in mind: a shy, deep-thinking young computer scientist from the Lincoln Labs breeding ground named Larry Roberts.
In early 1966, Roberts was at Lincoln working on graphics. But he had also done quite a lot of work in communications. He had just completed one of the most relevant proof-of-principle experiments in networking to date, hooking together two computers a continent apart. Taylor had funded Roberts’s experiment. It had been successful enough to build Taylor’s confidence and to convince both him and Herzfeld that a slightly more intricate network was feasible. And Roberts’s knowledge of computers went deep. The son of Yale chemists, Roberts had attended MIT and received his introduction to computers on the TX-0. Although it was the first transistorized digital computer, the TX-0 was limited (subtraction was not in its repertoire; it could subtract only by adding a negative number). Using the TX-0, Roberts taught himself the basics of computer design and operation. Roberts, in fact, had written the entire operating system for its successor, the TX-2 computer at Lincoln, which Wes Clark (who built the TX-0 with Ken Olsen) had fatefully shown off to Licklider. When Clark left Lincoln in 1964, the job of overseeing the TX-2 had fallen to Roberts.
Taylor didn’t know Roberts very well. No one, it seemed, knew Roberts very well. He was as reserved in his manner as Taylor was open in his. The people with whom Roberts worked most closely knew almost nothing about his personal life. What was known about him was that in addition to computing and telecommunications expertise, he had a knack for management. Roberts’s style was simple, direct, unambiguous, and terribly effective.
Roberts had a reputation for being something of a genius. At twenty-eight, he had done more in the field of computing than many scientists were to achieve in a lifetime. Blessed with incredible stamina, he worked inordinately late hours. He was also a quick study: More than a few people had had the experience of explaining to Roberts something they had been working on intensively for years, and finding that within a few minutes he had grasped it, turned it around in his head a couple of times, and offered trenchant comments of his own. Roberts reminded Taylor of Licklider a little—but without Lick’s sense of humor.
Roberts was also known for his nearly obsessive ability to immerse himself in a challenge, pouring intense powers of concentration into a problem. A colleague once recalled the time Roberts took a speed-reading course. He quickly doubled his already rapid reading rate, but he didn’t stop there. He delved into the professional literature of speed-reading and kept pushing himself until he was reading at the phenomenal rate of about thirty thousand words a minute with 10 percent “selective comprehension,” as Roberts described it. After a few months, Roberts’s limiting factor was no longer his eyes or his brain but the speed at which he could turn the pages. “He’d pick up a paperback and be through with it in ten minutes,” the colleague observed. “It was typical Larry.”
Taylor called Roberts and told him he’d like to come to Boston to see him. A few days later Taylor was sitting in Roberts’s office at Lincoln Lab, telling him about the experiment he had in mind. As Taylor talked, Roberts murmured a nasal “hmm-hmm” as if to say, “please go on.” Taylor outlined not just the project but a job offer. Roberts would be hired as program director for the experimental network, with the understanding that he would be next in line for the IPTO directorship. Taylor made it clear that this project had the full support of ARPA’s director and that Roberts would be given ample latitude to design and build the network however he saw fit. Taylor waited for an answer. “I’ll think about it,” Roberts said flatly.
Taylor read this as Roberts’s polite way of saying no, and he left Boston discouraged. Under any other circumstances, he’d have simply crossed Roberts off the list and called his second choice. But he didn’t have a second choice. Not only did Roberts have the necessary technical understanding, but Taylor knew he would listen to Licklider and Wes Clark, both of whom were supporting Taylor’s idea.
A few weeks later Taylor made a second trip to Lincoln. This time Roberts was more forthcoming. He told Taylor politely but unequivocally that he was enjoying his work at Lincoln and had no desire to become a Washington bureaucrat.
Disconsolate, Taylor went to Cambridge to visit Lick, who was now back at MIT ensconced in a research effort on time-sharing called Project MAC. They discussed who else might be well suited to the job. Lick suggested a few people, but Taylor rejected them. He wanted Roberts. From then on, every two months or so, during visits to ARPA’s other Boston-area contractors, Taylor called on Roberts to try to persuade him to change his mind.
It had been nearly a year since Taylor’s twenty-minute conversation with Herzfeld, and the networking idea was foundering for lack of a program manager. One day in late 1966, Taylor returned to the ARPA director’s office.
“Isn’t it true that ARPA is giving Lincoln at least fifty-one percent of its funding?” Taylor asked his boss.
“Yes, it is,” Herzfeld responded, slightly puzzled.
Taylor then explained the difficulty he was having getting the engineer he wanted to run the networking program.
“Who is it?” Herzfeld asked.
Taylor told him. Then he asked his boss another question. Would Herzfeld call the director of Lincoln Lab and ask him to call Roberts in and tell him that it would be in his own best interest—and in Lincoln’s best interest—to agree to take the Washington job?
Herzfeld picked up his telephone and dialed Lincoln Lab. He got the director on the line and said just what Taylor had asked him to say. It was a short conversation but, from what Taylor could tell, Herzfeld encountered no resistance. Herzfeld hung up, smiled at Taylor, and said, “Well, okay. We’ll see what happens.” Two weeks later, Roberts accepted the job.
Larry Roberts was twenty-nine years old when he walked into the Pentagon as ARPA’s newest draftee. He fit in quickly, and his dislike of idle time soon became legendary. Within a few weeks, he had the place—one of the world’s largest, most labyrinthine buildings—memorized. Getting around the building was complicated by the fact that certain hallways were blocked off as classified areas. Roberts obtained a stopwatch and began timing various routes to his frequent destinations. “Larry’s Route” soon became commonly known as the fastest distance between any two Pentagon points.
Even before his first day at ARPA, Roberts had a rudimentary outline of the computer network figured out. Then, and for years afterward as the project grew, Roberts drew meticulous network diagrams, sketching out where the data lines should go and the number of hops between nodes. On tracing paper and quadrille pad, he created hundreds of conceptual and logical sketches.
(Later, after the project was under way, Roberts would arrange with Howard Frank, an expert in the field of network topology, to carry out computer-based analyses on how to lay out the network most cost-effectively. Still, for years Roberts had the network’s layout, and the technical particulars that defined it, sharply pictured inside his head.)
A lot was already known about how to build complicated communications networks to carry voice, sound, and other more elemental signals. AT&T, of course, had absolute hegemony when it came to the telephone network. But the systematic conveyance of information predated Ma Bell by at least a few thousand years. Messenger systems date at least as far back as the reign of Egyptian King Sesostris I, almost four thousand years ago. The first relay system, where a message was passed from one guard station to the next, came about in 650 B.C. For hundreds of years thereafter, invention was driven by the necessity for greater speed as the transmission of messages from one place to another progressed through pigeons, shouters, coded flags, mirrors, lanterns, torches, and beacons. Then, in 1793, the first tidings were exchanged using semaphores—pivoting vanes on a tower that resembled a person holding signal flags in outstretched arms.
By the mid-1800s telegraph networks were relying on electricity, and Western Union Telegraph Company had begun blanketing the United States with a network of wires for transmitting messages in the form of electric pulses. The telegraph was a classic early example of what is called a “store-and-forward network.” Because of electrical losses, the signals had to be switched forward through a sequence of relay stations. At first, messages arriving at switching centers were transcribed by hand and forwarded via Morse code to the next station. Later, arriving messages were stored automatically on typed paper ribbons until an operator could retype the message for the next leg. By 1903, arriving messages were encoded on a snippet of paper tape as a series of small holes, and the torn tape was hung on a hook. Tapes were taken in turn from the hooks by clerks and fed through a tape reader that automatically forwarded them by Morse code.
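The store-and-forward principle behind that telegraph practice is simple enough to sketch in modern terms: each relay holds an arriving message in a queue until it can be passed along to the next station on the route. The Python sketch below is purely illustrative; the station names and queueing details are assumptions made for the example, not a description of any historical telegraph office.

```python
from collections import deque

class RelayStation:
    """A store-and-forward relay: messages wait in a queue until forwarded."""
    def __init__(self, name, next_station=None):
        self.name = name
        self.next_station = next_station
        self.queue = deque()  # the "hook" of waiting paper tapes

    def receive(self, message):
        self.queue.append(message)  # store the arriving message

    def forward_all(self):
        while self.queue:
            message = self.queue.popleft()
            if self.next_station is not None:
                self.next_station.receive(message)  # forward to the next leg
            else:
                print(f"{self.name} delivered: {message}")

# A three-hop chain; the city names are invented for the illustration.
sf = RelayStation("San Francisco")
chicago = RelayStation("Chicago", next_station=sf)
ny = RelayStation("New York", next_station=chicago)

ny.receive("ARRIVING TUESDAY STOP")
ny.forward_all()       # New York forwards to Chicago
chicago.forward_all()  # Chicago forwards to San Francisco
sf.forward_all()       # San Francisco delivers the message
```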
By the middle of the twentieth century, after the telephone had supplanted the telegraph as the primary means of communication, the American Telephone and Telegraph Company held a complete—albeit strictly regulated—monopoly on long-distance communications within the United States. The company was tenacious about its stronghold on both telephone service and the equipment that made such service possible. Attachment of foreign (non-Bell) equipment to Bell lines was forbidden on the grounds that foreign devices could damage the entire telephone system. Everything added to the system had to work with existing equipment. In the early 1950s a company began manufacturing a device called a Hush-A-Phone, a plastic mouthpiece cover designed to permit a caller to speak into a telephone without being overheard. AT&T succeeded in having the Federal Communications Commission ban the device after presenting expert witnesses who described how the Hush-A-Phone damaged the telephone system by reducing telephone quality. In another example of AT&T’s zeal, the company sued an undertaker in the Midwest who was giving out free plastic phone-book covers. AT&T argued that a plastic phone-book cover obscured the advertisement on the cover of the Yellow Pages and reduced the value of the paid advertising, revenues that helped reduce the cost of telephone service.
There was almost no way to bring radical new technology into the Bell System to coexist with the old. It wasn’t until 1968, when the FCC permitted the use of the Carterfone—a device for connecting private two-way radios with the telephone system—that AT&T’s unrelenting grip on the nation’s telecommunications system loosened. Not surprisingly, then, in the early 1960s, when ARPA began exploring an entirely new way of transmitting information, AT&T wanted no part of it.
Coincidental Inventions
Just as living creatures evolve through mutation and natural selection, so do ideas in science and their applications in technology. Evolution in science, as in nature, is normally a gradual sequence of changes, but occasionally it makes a revolutionary leap that breaks with the established course of development. At such moments, new ideas can emerge simultaneously but independently. And so they did when the time was ripe for inventing a new way of transmitting information.
In the early 1960s, before Larry Roberts had even set to work creating a new computer network, two other researchers, Paul Baran and Donald Davies—completely unknown to each other and working continents apart toward different goals—arrived at virtually the same revolutionary idea for a new kind of communications network. The realization of their concepts came to be known as packet-switching.
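The core of the idea can be suggested with a toy example: rather than sending a message as one continuous stream, the sender chops it into small, self-contained blocks, each labeled with enough information to be forwarded independently and reassembled at the destination. The sketch below is only an illustration of that principle; the field names and packet size are assumptions for the example, not Baran’s or Davies’s actual designs.

```python
import random

def packetize(message: str, packet_size: int = 8) -> list[dict]:
    """Split a message into small, independently deliverable packets."""
    chunks = [message[i:i + packet_size] for i in range(0, len(message), packet_size)]
    return [
        {"seq": i, "total": len(chunks), "payload": chunk}
        for i, chunk in enumerate(chunks)
    ]

def reassemble(packets: list[dict]) -> str:
    """Restore the original message; packets may arrive in any order."""
    ordered = sorted(packets, key=lambda p: p["seq"])
    assert len(ordered) == ordered[0]["total"], "missing packets"
    return "".join(p["payload"] for p in ordered)

packets = packetize("A new kind of communications network")
random.shuffle(packets)     # simulate packets taking different routes
print(reassemble(packets))  # -> "A new kind of communications network"
```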
Paul Baran was a good-humored immigrant from Eastern Europe. He was born in 1926, in what was then Poland. His parents sought refuge in the United States two years later, following a lengthy wait for immigration papers. The family arrived in Boston, where Paul’s father went to work in a shoe factory, and later settled in Philadelphia, where he opened a small grocery store. As a boy, Paul delivered groceries for his dad using a small red wagon. Once when he was five, he asked his mother if they were rich or poor. “We’re poor,” she responded. Later he asked his father the same question. “We’re rich,” the older Baran replied, providing his son with the first of many such existential conundrums in his life.
Paul eventually attended school two streetcar hops from home at Drexel Institute of Technology, which later became Drexel University. He was put off by the school’s heavy-handed emphasis in those days on rapid numerical problem solving: Two trivial arithmetic errors on a test (racing against a clock), and you failed, regardless of whether or not you fundamentally understood the problems. At the time, Drexel was trying to create a reputation for itself as a tough, no-nonsense place and took pride in its high dropout rate. Drexel instructors told their budding engineers that employers wanted only those who could calculate quickly and correctly. To his dismay, Baran saw many bright, imaginative friends forced out by the school’s “macho attitude” toward math. But he stuck it out, and in 1949 earned a degree in electrical engineering.
Jobs were scarce, so he took the first offer that came, from the Eckert-Mauchly Computer Corporation. In the relatively mundane capacity of technician, he tested radio-tube and germanium-diode parts for the first commercial computer, the UNIVAC. Baran soon married, and he and his wife moved to Los Angeles, where he took a job at Hughes Aircraft working on radar data-processing systems. He took night classes at UCLA on computers and transistors, and in 1959 he received a master’s degree in engineering.
Baran left Hughes in late 1959 to join the computer science department in the mathematics division at the RAND Corporation while continuing to take classes at UCLA. Baran was ambivalent, but his advisor at UCLA, Jerry Estrin, urged him to continue his studies toward a doctorate. Soon a heavy travel schedule was forcing him to miss classes. But it was finally divine intervention, he said, that sparked his decision to abandon the doctoral work. “I was driving one day to UCLA from RAND and couldn’t find a single parking spot in all of UCLA nor the entire adjacent town of Westwood,” Baran recalled. “At that instant I concluded that it was God’s will that I should discontinue school. Why else would He have found it necessary to fill up all the parking lots at that exact instant?”
Soon after Baran had arrived at RAND, he developed an interest in the survivability of communications systems under nuclear attack. He was motivated primarily by the hovering tensions of the cold war, not the engineering challenges involved. Both the United States and the Soviet Union were in the process of building hair-trigger nuclear ballistic missile arsenals. By 1960, the escalating arms race had hung the threat of Doomsday, of nuclear annihilation, over daily life in both countries.
Baran knew, as did all who understood nuclear weapons and communications technology, that the early command and control systems for missile launch were dangerously fragile. For military leaders, the “command” part of the equation meant having all the weapons, people, and machines of the modern military at their disposal and being able “to get them to do what you want them to do,” as one analyst explained. “Control” meant just the opposite—“getting them not to do what you don’t want them to.” The threat of one country or the other having its command systems destroyed in an attack and being left unable to launch a defensive or retaliatory strike gave rise to what Baran described as “a dangerous temptation for either party to misunderstand the actions of the other and fire first.”
As the strategists at RAND saw it, the communications systems supporting strategic weapons had to be able to survive an attack so that the country’s retaliatory capability could still function. At the time, the nation’s long-distance communications networks were indeed extremely vulnerable and unable to withstand a nuclear attack. Yet the president’s ability to call for, or call off, the launch of American missiles (termed “minimal essential communication”) relied heavily on those vulnerable systems. So Baran felt that working on the problem of building a more stable communications infrastructure, a tougher and more robust network, was the most important work he could be doing.