by John Markoff
Amid this cultural turmoil, Charlie Rosen began building the world’s first real robot as a platform for conducting artificial intelligence experiments. Rosen was a Canadian-born applied physicist who was thinking about a wide range of problems related to computing, including sensors, new kinds of semiconductors, and artificial intelligence. He was something of a renaissance man: he had coauthored one of the early textbooks on transistors and had developed an early interest in neural nets—computer circuits that showed promise in recognizing patterns, “learning” by simulating the behavior of biological neurons.
As a result, Stanford Research Institute became one of the two or three centers of research on neural nets and perceptrons, efforts to mimic human forms of biological learning. Rosen was a nonstop fount of ideas, continually challenging his engineers to consider remarkably far-out experiments. Peter Hart, a young Stanford electrical engineer who had done research on simple pattern recognizers, remembered frequent encounters with Rosen. “Hey, Pete,” Rosen would say while pressing his face up to the young scientist’s, close enough that Hart could see Rosen’s quivering bushy eyebrows while Rosen poked his finger into Hart’s chest. “I’ve got an idea.” That idea might be an outlandish scheme for recognizing speech: a system that projected spoken words into a shallow tank of water about three meters long, using underwater speakers and a video camera to record the standing wave patterns created by the sound.
After describing each new project, Rosen would stare at his young protégé and shout, “What are you scared of?” He was one of the early “rainmakers” at SRI, taking regular trips to Washington, D.C., to interest the Pentagon in funding projects. It was Rosen who was instrumental in persuading the military to fund Doug Engelbart for his original idea of augmenting humans with computers. Rosen also wrote and sold the proposal to develop a mobile “automaton” as a test bed for early neural networks and other AI programs. At one meeting with some Pentagon generals he was asked if this automaton could carry a gun. “How many do you need?” was his response. “I think it should easily be able to handle two or three.”
It took the researchers a while to come up with a name for the project. “We worked for a month trying to find a good name for it, ranging from Greek names to whatnot, and then one of us said, ‘Hey, it shakes like hell and moves around, let’s just call it Shakey,’”6 Hart recalled.
Eventually Rosen would become a major recipient of funding from the Defense Advanced Research Projects Agency at the Pentagon, but before that he stumbled across another source of funding, also inside the military. He managed to get an audience with one of the few prominent women in the Pentagon, mathematician Ruth Davis. When Rosen told her he wanted to build an intelligent machine, she exclaimed, “You mean it could be a sentry? Could you use it to replace a soldier?” Rosen confided that he didn’t think robot soldiers would be on the scene anytime soon, but he wanted to start testing prerequisite ideas about machine vision, planning, problem-solving, and understanding human language. Davis was enthusiastic about the idea and became an early funder of the project.
Shakey mattered because it was one of just a handful of major artificial intelligence projects begun in the 1960s, and it set off an explosion of early work in AI that would reverberate for decades. Today Shakey’s original DNA can be found in everything from the Kiva warehouse robot and Google’s autonomous car to Apple’s Siri intelligent assistant. Not only did it serve to train an early generation of researchers, but it would be their first point of engagement with technical and moral challenges that continue to frame the limits and potential of AI and robotics today.
Many people believed Shakey was a portent for the future of AI. In November 1970 Life magazine hyped the machine as something far more than it actually was. The story appeared alongside a cover story about a coed college dormitory, ads for a car with four-wheel drive, and a Sony eleven-inch television. Reporter Brad Darrach’s first-person account took great liberties with Shakey’s capabilities in an effort to engage with the coming-of-age complexities of the machine era. He quoted a researcher at the nearby Stanford Artificial Intelligence Laboratory as acknowledging that the field had so far not been able to endow machines with complex emotional reactions such as human orgasms, but the overall theme of the piece was a reflection of the optimism that was then widespread in the robotics community.
The SRI researchers, including Rosen and his lieutenants, Peter Hart and Bert Raphael, were dismayed by a description claiming that Shakey was able to roll freely through the research laboratory’s hallways at a faster-than-human-walking clip, pausing only to peer in doorways while it reasoned in a humanlike way about the world around it. According to Raphael, the description was particularly galling because the robot had not even been operational when Darrach visited. It had been taken down while it was being moved to a new control computer.7
Marvin Minsky, the MIT AI pioneer, was especially incensed and wrote a long rebuttal accusing Darrach of fabricating quotes. Minsky had been quoted as saying that the human brain was just a computer made out of “meat.” He was most upset, however, at being linked to an assertion: “In from three to eight years we will have a machine with the general intelligence of an average human being. I mean a machine that will be able to read Shakespeare, grease a car, play office politics, tell a joke, have a fight. At that point the machine will begin to educate itself with fantastic speed. In a few months it will be at genius level and a few months after that its powers will be incalculable.”8 In hindsight, Darrach’s alarms seem almost quaint. Whether clueless or willfully deceptive, his broader point was simply that sooner or later (and he clearly wanted the reader to believe it was sooner) society would have to decide how it would live with its cybernetic offspring.
Indeed, despite his frustration with the inaccurate popularization of Shakey, two years later, in a paper presented at a technical computing and robotics conference in Boston, Rosen would echo Darrach’s underlying theme. He rejected the idea of a completely automated, “lights-out factory” in the “next generation,” largely because of the social and economic chaos that would ensue. Instead, he predicted that by the end of the 1970s the arrival of factory and service robots (under the supervision of humans) would eliminate repetitive tasks and drudgery. Their arrival would be accompanied by a new wave of technological unemployment, he argued, and it was incumbent upon society to begin rethinking issues such as the length of the workweek, retirement age, and lifetime service.9
For more than five years, SRI researchers worked to design a machine in what was nominally an exercise in pure artificial intelligence. Beneath the veneer of science, however, the Pentagon was funding the project with the notion that it might one day lead to a military robot capable of tracking the enemy without risking the lives of U.S. or allied soldiers. Shakey was not only the touchstone for much of modern AI research and for the projects that led to the modern augmentation community; it was also the original forerunner of the military drones that now patrol the skies over Afghanistan, Iraq, Syria, and elsewhere.
Shakey exemplified the westward migration of computing and early artificial intelligence research during the 1960s. Although Douglas Engelbart, whose project was just down the hall, was a West Coast native, many others were migrants. Artificial intelligence as a field of study was originally rooted in a 1956 Dartmouth College summer workshop, where John McCarthy was a young mathematics professor. McCarthy had been born in Boston in 1927 to an Irish Catholic father and a Lithuanian Jewish mother, both active members of the U.S. Communist Party. His parents were intensely intellectual, and his mother was committed to the idea that her children could pursue any interests they chose. At twelve McCarthy encountered Eric Temple Bell’s Men of Mathematics, a book that helped determine the careers of many of the best and brightest of the era, including the scientists Freeman Dyson and Stanislaw Ulam. Viewed as a high school math prodigy, McCarthy applied only to Caltech, where Bell was a professor, something he later decided had been an act of “arrogance.” On his application he described his plans in a single sentence: “I intend to be a professor of mathematics.” Bell’s book had given him a realistic view of what that path would entail. McCarthy had decided that mathematicians were rewarded principally for the quality of their research, and he was taken with the idea of the self-made intellectual.
At Caltech he was an ambitious student, jumping straight to advanced calculus while simultaneously taking a range of other courses, including aeronautical engineering. He was drafted relatively late in the war, so his army career was more about serving as a cog in the bureaucracy than about combat. Stationed close to home at Fort MacArthur in the port city of San Pedro, California, he worked as a clerk, preparing discharges and then promotions for soldiers leaving the military. He made his way to Princeton for graduate school and promptly paid a visit to John von Neumann, the applied mathematician and physicist who would become instrumental in defining the basic design of the modern computer.
At this point the notion of “artificial intelligence” was fermenting in McCarthy’s mind, but the coinage had not yet come to him. That wouldn’t happen for another half decade, in conjunction with the summer 1956 Dartmouth conference. He had first come to the concept in graduate school, when he attended the Hixon Symposium on Cerebral Mechanisms in Behavior at Caltech.10 At that point there were no programmable computers, but the idea was in the air. Alan Turing, for example, had written about the possibility the previous year, to receptive audiences on both sides of the Atlantic. McCarthy was thinking about intelligence as a mathematical abstraction rather than, along the lines of Turing, something to be realized by building an actual machine. It was an “automaton” notion of creating human intelligence, but not of the kind of software cellular automata that von Neumann would later pursue. McCarthy focused instead on an abstract notion of intelligence that was capable of interacting with the environment. When he told von Neumann about it, the scientist exclaimed, “Write it up!” McCarthy thought about the idea a lot but never published anything, and years later he would express regret at his inaction. Although his thesis at Princeton would focus on differential equations, he also developed an interest in logic, and a major contribution to the field of artificial intelligence would later come from his application of mathematical logic to commonsense reasoning. He arrived at Princeton a year after Marvin Minsky and discovered that they were both already thinking about the idea of artificial intelligence. At the time, however, there were no computers to allow them to work with the ideas, and so the concept would remain an abstraction.
As a graduate student, McCarthy was a contemporary of John Forbes Nash, the mathematician and Nobel laureate who would later be celebrated in Sylvia Nasar’s 1998 biography, A Beautiful Mind. The Princeton graduate students made a habit of playing practical jokes on each other. McCarthy, for example, fell victim to a collapsing bed. He found that another graduate student was a double agent in their games, plotting with McCarthy against Nash while at the same time plotting with Nash against McCarthy. Game theory was in fashion at the time and Nash later received his Nobel Prize in economics for contributions to that field.
During the summer of 1952 both McCarthy and Minsky were hired as research assistants by the mathematician and electrical engineer Claude Shannon at Bell Labs. Shannon, known as the father of “information theory,” had created a simple chess-playing machine in 1950, and there was early interest in programs that simulated biological growth, known as “automata,” of which John Conway’s 1970 Game of Life would become the most famous.
Minsky was largely distracted by his impending wedding, but McCarthy made the most of his time at Bell Labs, working with Shannon on a collection of mathematical papers that was named, at Shannon’s insistence, Automata Studies.11 The word “automata” was a source of frustration for McCarthy because it shifted the focus of the submitted papers away from the more concrete artificial intelligence ideas and toward more esoteric mathematics.
Four years later he settled the issue when he launched the new field that now, six decades later, is transforming the world. He backed the term “artificial intelligence” as a means of “nail[ing] the idea to the mast”12 and focusing the Dartmouth summer project. One unintended consequence was that the term implied the idea of replacing the human mind with a machine, and that would contribute to the split between the artificial intelligence and intelligence augmentation researchers. The christening of the field, however, happened in 1956 during the event that McCarthy was instrumental in organizing: the Dartmouth Summer Research Project on Artificial Intelligence, which was underwritten with funding from the Rockefeller Foundation. As a branding exercise it would prove a momentous event. Other candidate names for the new discipline included cybernetics, automata studies, complex information processing, and machine intelligence.13
McCarthy wanted to avoid the term “cybernetics” because he thought of Norbert Wiener, who had coined it, as something of a bombastic bore and had no desire to argue with him. He also wanted to avoid the term “automata” because it seemed remote from the subject of intelligence. There was still another dimension inherent in the choice of the term “artificial intelligence.” Many years later, in a book review taking issue with the academic concept known as the “social construction of technology,” McCarthy took pains to distance artificial intelligence from its human-centered roots. It wasn’t about human behavior, he insisted.14
The Dartmouth conference proposal, he would recall years later, had made no reference to the study of human behavior, “because [he] didn’t consider it relevant.”15 Artificial intelligence, he argued, was not concerned with human behavior except as a possible hint about how to perform humanlike tasks. The only Dartmouth participants who focused on the study of human behavior were Allen Newell and Herbert Simon, the Carnegie Institute researchers who had already won acclaim for ingeniously bridging the social and cognitive sciences. Years later the approach propounded by the original Dartmouth conference members would become identified with the acronym GOFAI, or “Good Old-Fashioned Artificial Intelligence,” an approach centered on achieving human-level intelligence through logic and the problem-solving rules of thumb known as heuristics.
IBM, by the 1950s already the world’s largest computer maker, had initially been involved in the planning for the summer conference. Both McCarthy and Minsky had spent the summer of 1955 in the IBM laboratory that had developed the IBM 701, a vacuum tube mainframe computer of which only nineteen were made. In the wake of the conference, several IBM researchers did important early artificial intelligence research, but in 1959 the computer maker pulled the plug on its AI work. There is evidence that the giant computer maker was fearful that its machines would be linked to technologies that destroyed jobs.16 At the time the company’s chief executive, Thomas J. Watson Jr., was involved in national policy discussions over the role and consequences of computers in automation and did not want his company to be associated with the wholesale destruction of jobs. McCarthy would later call the act “a fit of stupidity” and a “coup.”17
During those early years McCarthy and Minsky remained largely inseparable (Minsky’s future wife even brought McCarthy along when she took Minsky home to meet her parents), even though their ideas about how to pursue AI increasingly diverged. Minsky’s graduate studies had focused on the creation of neural nets, and as his work progressed he would increasingly place the roots of intelligence in human experience. In contrast, McCarthy looked throughout his career for formal mathematical-logical ways to model the human mind.
Yet despite these differences, early on the field remained remarkably collegial and in the hands of researchers with privileged access to the jealously guarded room-sized computers of the era. As McCarthy recalled it, the MIT Artificial Intelligence Laboratory came into being in 1958, after both he and Minsky had joined the university faculty. One day McCarthy met Minsky in a hallway and said to him, “I think we should have an AI project.” Minsky responded that he thought that was a good idea. Just then the two men saw Jerome Wiesner, then head of the Research Laboratory of Electronics, walking toward them.
McCarthy piped up, “Marvin and I want to have an AI project.”
“What do you want?” Wiesner responded.
Thinking quickly on his feet, McCarthy said, “We’d like a room, a secretary, a keypunch, and two programmers.”
To which Wiesner replied, “And how about six graduate students?”
Their timing would prove to be perfect. MIT had just received a large government grant “to be excellent,” but no one really knew what “excellent” meant. The grant supported six mathematics graduate students at the time, but Wiesner had no idea what they would do. So for Wiesner, McCarthy and Minsky were a serendipitous solution.18
The funding came through in the spring of 1958, in the immediate wake of the Soviet Sputnik satellite. U.S. federal research dollars were just starting to flow to universities in large amounts. It was widely believed that generous support of science would pay off for the U.S. military, and that year President Eisenhower formed the Advanced Research Projects Agency, ARPA, to guard against future technological surprises.
The fortuitous encounter among the three men had an almost unfathomable impact on the world. A number of the “six graduate students” were connected with the MIT Model Railroad Club, an unorthodox group of future engineers drawn to computing as if by a magnet. Their club ethos would lead directly to what became the “hacker culture,” which held as its most prized value the free sharing of information.19 McCarthy would help spread the hacker ethic when he left MIT in 1962 and set up a rival laboratory at Stanford University. Ultimately the original hacker culture would also foment the free/open-source software, Creative Commons, and network neutrality movements. While still at MIT, McCarthy, in his quest for a more efficient way to conduct artificial intelligence research, had invented computer time-sharing as well as the Lisp programming language. He had an early notion that his AI, once it was perfected, would be interactive, and that it would be logical to design it on a computing system shared by multiple users rather than one that required users to sign up to use the computer one at a time.