by John Markoff
When MIT decided to conduct a survey on the wisdom of building a time-sharing system instead of immediately building what McCarthy had proposed, he chose to head west. Asking university faculty and staff what they thought of computer time-sharing would be like surveying ditchdiggers about the value of a steam shovel, he would later grouse.20
He was thoroughly converted to the West Coast counterculture. Although he had long since left the Communist Party, he was still on the Left and would soon be attracted to the anti-establishment community around Stanford University. He took to wearing a headband to pair with his long hair and became an active participant in the Free University that sprang up on the Midpeninsula around Stanford. Only when the Soviet Union crushed the Prague Spring in Czechoslovakia in 1968 did he experience his final disillusionment with socialism. Not long afterward, while he was arguing over the wisdom of nonviolence during a Free University meeting, one of the radicals threatened to kill him, and McCarthy ricocheted permanently to the Right. Soon after, he registered as a Republican.
At the same time his career blossomed. Being a Stanford professor was a hunting license for funding, and on his way to Stanford McCarthy turned to his friend J. C. R. Licklider, a former MIT psychologist who had headed ARPA’s Information Processing Techniques Office since 1962. Licklider had collaborated with McCarthy on an early paper on time-sharing, and after McCarthy moved to Stanford he funded an ambitious time-sharing program at MIT. McCarthy would later say that he never would have left if he had known that Licklider would push time-sharing ideas so heavily.
On the West Coast, McCarthy found few bureaucratic barriers and quickly built an artificial intelligence lab at Stanford to rival the one at MIT. He was able to secure a computer from Digital Equipment Corporation and found space in the hills behind campus in the D.C. Power Laboratory, a building and parcel of land donated to Stanford by GTE after the company canceled its plan for a West Coast research lab.
The Stanford Artificial Intelligence Laboratory, known as SAIL, quickly became a California haven for the same hacker sensibility that had first emerged at MIT. Smart young computer hackers like Steve “Slug” Russell and Whitfield Diffie followed McCarthy west, and during the next decade and a half a startling array of hardware engineers and software designers would flow through the laboratory, which maintained its countercultural vibe even as McCarthy became politically more conservative. Both Steve Jobs and Steve Wozniak would hold on to sentimental memories of their teenage visits to the Stanford laboratory in the hills. SAIL would become a prism through which a stunning group of young technologists, as well as full-blown industries, would emerge.
Early work in machine vision and robotics began at SAIL, and the laboratory was indisputably the birthplace of speech recognition. McCarthy gave Raj Reddy his thesis topic on speech understanding, and Reddy went on to become the seminal researcher in the field. Mobile robots, paralleling Shakey at Stanford Research Institute, would be pursued at SAIL by researchers like Hans Moravec and later Rodney Brooks, both of whom became pioneering robotics researchers at Carnegie Mellon and MIT, respectively.
It proved to be the first golden era of AI, with research on natural language understanding, computer music, expert systems, and video games like Spacewar. Kenneth Colby, a psychiatrist, even worked on a refined version of Eliza, the online conversation system originally developed by Joseph Weizenbaum at MIT. Colby’s simulated person, known as “Parry,” exhibited a distinctly paranoid personality. Reddy, who had previous computing experience on an early IBM computer known as the 650, remembered that the company had charged $1,000 an hour for access to the machine. Now he found he “owned” a computer that was a hundred times faster for half of each day—from eight o’clock in the evening until eight the next morning. “I thought I had died and gone to heaven,” he said.21
McCarthy’s laboratory spawned an array of subfields, and one of the most powerful early on was knowledge engineering, pioneered by computer scientist Ed Feigenbaum. Begun in 1965, his first project, Dendral, was a highly influential early effort in software expert systems, programs intended to capture and organize human knowledge; it was initially designed to help chemists identify unknown organic molecules. Dendral was a cooperative project among computer scientists Feigenbaum and Bruce Buchanan and two superstars from other academic fields—Joshua Lederberg, a molecular biologist, and Carl Djerassi, a chemist famed for inventing the birth control pill—to automate the problem-solving strategies of an expert human organic chemist.
Buchanan would recall that Lederberg had a NASA contract related to the possibility of life on Mars, and that mass spectrometry would be an essential tool in looking for such life: “That was, in fact, the whole Dendral project laid out with a very specific application, namely, to go to Mars, scoop up samples, look for evidence of organic compounds.”22 Indeed, the Dendral project began in the wake of a bitter debate within NASA over what the role of humans would be in the moon mission. Whether to keep a human in the control loop was sharply debated inside the agency at the dawn of spaceflight, and is again today, decades later, concerning a manned mission to Mars.
The original AI optimism that blossomed at SAIL would hold sway throughout the sixties. It is now lost to history, but Moravec, who as a graduate student lived in SAIL’s attic, recalled years later that when McCarthy made his original proposal he told ARPA that it would be possible to build “a fully intelligent machine” in the space of a decade.23 From the distance of more than a half century, the claim seems both quixotic and endearingly naive, but from his initial curiosity in the late 1940s, before the computer era had truly begun, McCarthy had defined the goal of creating machines that matched human capabilities.
Indeed, during the first decade of the field, AI optimism was immense, as is obvious from the proposal for the 1956 Dartmouth workshop:
The study is to proceed on the basis of the conjecture that every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it. An attempt will be made to find how to make machines use language, form abstractions and concepts, solve kinds of problems now reserved for humans, and improve themselves. We think that a significant advance can be made in one or more of these problems if a carefully selected group of scientists work on it together for a summer.24
Not long afterward Minsky would echo McCarthy’s optimism, turning a lone graduate student loose on the problem of machine vision, figuring that it was a suitable problem to be solved as a summer project.25 “Our ultimate objective is to make programs that learn from their experience as effectively as humans do,” McCarthy wrote.26
As part of that effort he created a laboratory that was a paradise for researchers who wanted to mimic humans in machine form. At the same time it would also create a cultural chasm that resulted in a computing world with two separate research communities—those who worked to replace the human and those who wanted to use the same technologies to augment the human mind. As a consequence, for the past half century an underlying tension between artificial intelligence and intelligence augmentation—AI versus IA—has been at the heart of progress in computer science as the field has produced a series of ever more powerful technologies that are transforming the world.
It is easy to argue that AI and IA are simply two sides of the same coin. There is a fundamental distinction, however, between approaches to designing technology to benefit humans and designing technology as an end in itself. Today, that distinction is expressed in whether increasingly capable computers, software, and robots are designed to assist human users or to replace them. Early on, some of the researchers who passed through SAIL rebelled against McCarthy-style AI. Alan Kay, who pioneered the concept of the modern personal computer at Xerox during the 1970s, spent a year at SAIL and would later say it was one of the least productive years of his career. He had already fashioned his Dynabook idea—“a personal computer for children of all ages”27—that would serve as the spark for a generation of computing, but he remained an outsider in the SAIL hacker culture. For others at SAIL, however, the vision was clear: machines would soon match and even replace humans. They were the coolest things around, and in the future they would meet and then exceed the capabilities of their human designers.
You must drive several miles from the Carnegie Mellon University campus, to a pleasantly obscure Pittsburgh residential neighborhood, to find Hans Moravec. His office is tucked away in a tiny two-room apartment at the top of a flight of stairs around the corner from a small shopping street. Inside, Moravec, who retains his childhood Austrian accent, has converted the apartment into a hideaway office where he can concentrate without interruption. It opens into a cramped sitting room housing a small refrigerator; at the back is an even smaller office, curtains drawn, dominated by large computer displays.
Several decades ago, when he captured the public’s attention as one of the world’s best-known robot designers, magazines often described him as “robotic.” In person, he is anything but, breaking out in laughter frequently and with a self-deprecating sense of humor. Still an adjunct professor at the Robotics Institute at Carnegie Mellon, where he taught for many years, Moravec, one of John McCarthy’s best-known graduate students, has largely vanished from the world he helped create.
When Robert M. Geraci, a religious studies professor at Manhattan College and author of Apocalyptic AI: Visions of Heaven in Robotics, Artificial Intelligence, and Virtual Reality (2010), came to Pittsburgh to conduct his research several years ago, Moravec politely declined to see him, citing his work on a recent start-up. Geraci is one of a number of authors who have painted Moravec as the intellectual cofounder, with Ray Kurzweil, of a techno-religious movement that argues that humanity will inevitably be subsumed as a species by the AIs and robots we are now creating. In 2014 this movement gained wide exposure when high-profile technological and scientific luminaries such as Elon Musk and Stephen Hawking issued tersely worded warnings about the potential threat that futuristic AI systems pose to the human species.
Geraci’s argument is that there is a generation of computer technologists who, in looking forward to the consequences of their inventions, have not escaped Western society’s religious roots but rather recapitulated them. “Ultimately, the promises of Apocalyptic AI are almost identical to those of Jewish and Christian apocalyptic traditions. Should they come true, the world will be, once again, a place of magic,”28 Geraci wrote. For the professor of religion, the movement could in fact be reduced to the concept of alienation, which in his framing is mainly about the overriding human fear of dying.
Geraci’s conception of alienation isn’t simply a 1950s James Dean–like disconnect from society. Yet it is just as hard to pin the more abstract fear of death on Moravec. The robotics pioneer became legendary for taking up residence in the attic of McCarthy’s SAIL lab during the 1970s, when it was a perfect counterculture world for the first generation of computer hackers, who had discovered that the machines they enjoyed privileged access to could be used as “fantasy amplifiers.”
During the 1970s, McCarthy continued to believe that artificial intelligence was within reach even with the meager computing resources then at hand, famously estimating that a working AI would require “1.8 Einsteins and one-tenth the resources of the Manhattan Project.”29 In contrast, Moravec’s perspective was rooted in the rapidly accelerating evolution of computing technology. He quickly grasped the implications of Moore’s law—the observation that over time computing power would increase exponentially—and extended it to what he believed was its logical conclusion: machine intelligence was inevitable, and moreover it would arrive relatively soon. He summed up the obstacles facing the AI field in the late 1970s succinctly:
The most difficult tasks to automate, for which computer performance to date has been most disappointing, are those that humans do most naturally, such as seeing, hearing and common sense reasoning. A major reason for the difficulty has become very clear to me in the course of my work on computer vision. It is simply that the machines with which we are working are still a hundred thousand to a million times too slow to match the performance of human nervous systems in those functions for which humans are specially wired. This enormous discrepancy is distorting our work, creating problems where there are none, making others impossibly difficult, and generally causing effort to be misdirected.30
He first outlined his disagreement with McCarthy in 1975 in the SAIL report “The Role of Raw Power in Intelligence.”31 It was a powerful manifesto that expressed his faith in the exponential increase in processing power and his conviction that the limits of the day were merely a temporary state of affairs. The lesson he drew early on, and to which he would return throughout his career, was that if you were stymied as an AI designer, you need only wait a decade and your problems would be solved by the inexorable increase in computing performance. In a 1978 essay for the science-fiction magazine Analog, he laid out his argument for a wider public. Indeed, in the Analog essay he still retained much of McCarthy’s original faith that machines would cross the threshold of human intelligence in about a decade: “Suppose my projections are correct, and the hardware requirements for human equivalence are available in 10 years for about the current price of a medium large computer,” he asked. “What then?”32 The answer was obvious. Humans would be “outclassed” by the new species we were helping to evolve.
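A back-of-the-envelope calculation makes the “just wait” logic concrete. Assuming, purely for illustration, a doubling of computing performance every two years (Moravec himself worked with varying doubling times), the wait needed to close a performance gap of the size he described above is

$$ t \approx T_{\text{double}} \cdot \log_2(\text{gap}): \qquad 2\ \text{yr} \times \log_2 10^5 \approx 33\ \text{yr}, \qquad 2\ \text{yr} \times \log_2 10^6 \approx 40\ \text{yr}. $$

Shorter assumed doubling times, or a more powerful baseline machine, shrink that horizon accordingly.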
After leaving Stanford in 1980, Moravec would go on to write two popular books sketching out the coming age of intelligent machines. Mind Children: The Future of Robot and Human Intelligence (1988) contains an early detailed argument that the robots that he has loved since childhood are in the process of evolving into an independent intelligent species. A decade later he refined the argument in Robot: Mere Machine to Transcendent Mind (1998).
Significantly, although it is not widely known, Doug Engelbart had made the same observation, that computers would increase in power exponentially, at the dawn of the interactive computing age in 1960.33 He used this insight to launch the SRI-based augmentation research project that would ultimately help lead to both personal computing and the Internet. In contrast, Moravec built on his lifelong romance with robots. Though he has tempered his optimism over the years, his overall faith never wavered. During the 1990s, in addition to writing his second book, he took two sabbaticals in an effort to hurry the development of machine vision, perfecting the ability of machines to see and understand their environments so they could navigate and move freely.
The first sabbatical he spent in Cambridge, Massachusetts, at Danny Hillis’s Thinking Machines Corporation, where Moravec hoped to take advantage of a supercomputer. But the new supercomputer, the CM-5, wasn’t ready, so he contented himself with refining his code on a workstation while waiting for the machine. By the end of his stay, he realized that he needed only to wait for the power of a supercomputer to come to his desktop rather than struggle to restructure his code to run on a special-purpose machine. Half a decade later, on a second sabbatical at a Mercedes-Benz research lab in Berlin, he came to the same realization.
Moravec still wasn’t quite willing to give up, and so after coming back from Germany he took a DARPA contract to continue work on autonomous mobile robot software. But after spending a decade writing two best-selling books arguing for a technological promised land, he decided it was time to settle down and do something about it. The idea that the exponential increase of computing power would inevitably lead to artificially intelligent machines was becoming deeply ingrained in Silicon Valley, and a slick packaging of the underlying argument was delivered in 2005 by Ray Kurzweil’s The Singularity Is Near. “It was becoming a spectacle and it was interfering with real work,” Moravec decided. By now he had taken to heart Alan Kay’s dictum that “the best way to predict the future is to invent it.”
His computer cave is miles from the offices of Seegrid, the robotic forklift company he founded in 2003, but within walking distance of his Pittsburgh home. For the past decade he has given up his role as futurist and become a hermit. In a way, it is the continuation of the project he began as a child. Growing up in Canada, Moravec built his first robot at age ten from tin cans, batteries, lights, and a motor. Later, in high school, he went on to build a light-following robotic turtle and a robotic hand. At Stanford, he became the force behind the Stanford Cart project, a mobile robot with a TV camera that could negotiate obstacle courses. He had inherited the Cart when he arrived at Stanford in 1971 and then gradually rebuilt the entire system.
Shakey was the first autonomous robot, but the Stanford Cart, with a long and colorful history of its own, is the true predecessor of the autonomous car. It had first come to life as a NASA-funded project in the mechanical engineering department in 1960, based on the idea that someday a vehicle would be driven remotely on the surface of the moon. The challenge was how to control such a vehicle given the 2.7-second round-trip delay for a radio signal traveling between the Earth and the moon.
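The quoted figure is consistent with simple physics. Taking the mean Earth–moon distance as roughly 384,400 km and radio waves traveling at the speed of light, the bare round-trip propagation time is

$$ t_{\text{round trip}} \approx \frac{2 \times 384{,}400\ \text{km}}{299{,}792\ \text{km/s}} \approx 2.56\ \text{s}, $$

with the remaining fraction of a second in the 2.7-second figure presumably absorbed by equipment and ground-station latency.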
Funding for the initial project was denied because the logic of keeping a human in the loop had won out. When in 1961 President Kennedy committed the nation to the manned exploration of the moon, the original Cart was shelved34 as unnecessary. The robot, about the size of a card table and riding on four bicycle wheels, sat unused until 1966, when SAIL’s deputy director, Les Earnest, rediscovered it. He persuaded the mechanical engineering department to lend it to SAIL for experiments in building an autonomous vehicle. Eventually, using the computing power of the SAIL mainframe, a graduate student was able to program the robot to follow a white line on the floor at less than one mile per hour. A radio control link enabled remote operation. Tracking would have been simpler with two photocell sensors, but a video camera connected to a computer was seen as a tour de force at the time.