Duvall had grown up on the Peninsula south of San Francisco, the son of a physicist who was involved in classified research at Stanford Research Institute, the military-oriented think tank where Shakey resided. At UC Berkeley he took all the computer programming courses the university offered in the mid-1960s. After two years he dropped out to join the think tank where his father worked, just miles from the Stanford campus, entering a cloistered priesthood where the mainframe computer was the equivalent of a primitive god.
For the young computer hacker, Stanford Research Institute, soon after renamed SRI International, was an entry point into a world that allowed skilled programmers to create elegant and elaborate software machines. During the 1950s SRI pioneered the first check-processing computers. Duvall arrived to work on an SRI contract to automate an English bank’s operations, but the bank had been merged into a larger one and the project was put on indefinite hold. He used the time for his first European vacation and then headed back to Menlo Park to renew his romance with computing, joining the team of artificial intelligence researchers building Shakey.
Like many hackers, Duvall was something of a loner. In high school, a decade before the movie Breaking Away, he joined a local cycling club and rode his bike in the hills behind Stanford. In the 1970s that film would transform the American perception of bike racing, but in the 1960s cycling was still a bohemian sport, attracting a ragtag assortment of individualists, loners, and outsiders. That image fit Duvall’s worldview well. Before high school he attended the Peninsula School, an alternative elementary and middle school that adhered to the philosophy that children should learn by doing and at their own pace. One of his teachers had been Ira Sandperl, a Gandhi scholar who was a permanent fixture behind the cash register at Kepler’s, a bookstore near the Stanford campus. Sandperl had also been Joan Baez’s mentor and had imbued Duvall with an independent take on knowledge, learning, and the world.
Duvall was one of the first generation of computer hackers, a small subculture that had originally emerged at MIT, where computing was an end in itself and where the knowledge and code needed to animate the machines were both freely shared. The culture had quickly spread to the West Coast, where it had taken root at computing design centers like Stanford and the University of California at Berkeley.
It was an era in which computers were impossibly rare—a few giant machines were hidden away in banks, universities, and government-funded research centers. At SRI, Duvall had unfettered access to a room-sized machine first acquired for an elite military-funded project and then used to run the software controlling Shakey. At both SRI and at the nearby Stanford Artificial Intelligence Laboratory (SAIL), tucked away in the hills behind Stanford University, there was a tightly knit group of researchers who already believed in the possibility of building a machine that mimicked human capabilities. To this group, Shakey was a striking portent of the future, and they believed that the scientific breakthrough to enable machines to act like humans would come in just a few short years.
Indeed, during the mid-sixties there was virtually boundless optimism among the small community of artificial intelligence researchers on both coasts. In 1966, when SRI and SAIL were beginning to build robots and AI programs in California, another artificial intelligence pioneer, Marvin Minsky, assigned an undergraduate to work on the problem of computer vision on the other side of the country, at MIT. He envisioned it as a summer project. The reality was disappointing. Although AI might be destined to transform the world, Duvall, who worked on several SRI projects before transferring to the Shakey project to work in the trenches as a young programmer, immediately saw that the robot was barely taking baby steps.
Shakey lived in a large open room with linoleum floors and a couple of racks of electronics. Boxlike objects were scattered around for the robot to “play” with. The mainframe computer providing the intelligence was nearby. Shakey’s sensors would capture the world around it and then “think”—standing motionless for minutes on end—before resuming its journey, even in its closed and controlled world. It was like watching grass grow. Moreover, it frequently broke down or would drain its batteries after just minutes of operation.
For a few months Duvall made the most of his situation. He could see that the project was light-years away from the stated goal of an automated military sentry or reconnaissance agent. He tried to amuse himself by programming the rangefinder, a clunky device based on a rotating mirror. Unfortunately it was prone to mechanical failure, making software development a highly unsatisfying exercise in error prediction and recovery. One of the managers told him that the project needed a “probabilistic decision tree” to refine the robot’s vision system. So rather than building that special-purpose mechanism by hand, he spent his time writing a tool that could generate such trees automatically. Shakey’s vision system worked better than the rangefinder. Even with the simplest machine vision processing, it could identify both edges and basic shapes, essential primitives for understanding and navigating its surroundings.
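Duvall’s generator is not described in any detail here, so the following is only an illustrative sketch of the general idea of a probabilistic decision tree applied to a vision task; the feature names, thresholds, and probabilities are invented for the example and are not drawn from the Shakey project.

```python
# Purely illustrative: a toy probabilistic decision tree of the general
# kind described above. The features, thresholds, and probabilities are
# invented for this sketch; they do not come from the Shakey project.

from dataclasses import dataclass
from typing import Optional

@dataclass
class Node:
    feature: Optional[str] = None      # feature to test; None marks a leaf
    threshold: float = 0.0             # split point for the feature
    left: Optional["Node"] = None      # subtree when feature <= threshold
    right: Optional["Node"] = None     # subtree when feature > threshold
    p_box: float = 0.0                 # leaf: probability the region is a box

def classify(node: Node, features: dict) -> float:
    """Walk the tree and return the probability that the region is a box."""
    if node.feature is None:
        return node.p_box
    branch = node.left if features[node.feature] <= node.threshold else node.right
    return classify(branch, features)

# A hand-built tree: strong edges plus roughly right angles suggest a box.
tree = Node("edge_strength", 0.5,
            left=Node(p_box=0.1),
            right=Node("corner_angle_error", 10.0,
                       left=Node(p_box=0.9),
                       right=Node(p_box=0.4)))

print(classify(tree, {"edge_strength": 0.8, "corner_angle_error": 5.0}))  # 0.9
```

The appeal of generating such trees from data rather than hand-coding them is that the same tool can be reused whenever the features or the environment change, which is presumably what made it more interesting work than wiring up a single special-purpose classifier.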
Duvall’s manager believed in structuring his team so that “science” would only be done by “scientists.” Programmers were low-status grunt workers who implemented the design ideas of their superiors. While some of the leaders of the group appeared to have a high-level vision to pursue, the project was organized in a military fashion, making work uninspiring for a low-level programmer like Duvall, stuck writing device drivers and other software interfaces. That didn’t sit well with the young computer hacker.
Robots seemed like a cool idea to him, but before Star Wars there weren’t a lot of inspiring models. There was Robby the Robot from Forbidden Planet in the 1950s, but it was hard to find inspiration in a broader vision. Shakey simply didn’t work very well. Fortunately Stanford Research Institute was a big place and Duvall was soon attracted by a more intriguing project.
Just down the hall from the Shakey laboratory he would frequently encounter another research group that was building a computer to run a program called NLS, the oN-Line System. While Shakey was managed hierarchically, the group run by computer scientist Doug Engelbart was anything but. Engelbart’s researchers, an eclectic collection of buttoned-down white-shirted engineers and long-haired computer hackers, were taking computing in a direction so different it was not even in the same coordinate system. The Shakey project was struggling to mimic the human mind and body. Engelbart had a very different goal. During World War II he had stumbled across an article by Vannevar Bush, who had proposed a microfiche-based information retrieval system called Memex to manage all of the world’s knowledge. Engelbart later decided that such a system could be assembled based on the then newly available computers. He thought the time was right to build an interactive system to capture knowledge and organize information in such a way that it would now be possible for a small group of people—scientists, engineers, educators—to create and collaborate more effectively. By this time Engelbart had already invented the computer mouse as a control device and had also conceived of the idea of hypertext links that would decades later become the foundation for the modern World Wide Web. Moreover, like Duvall, he was an outsider within the insular computer science world that worshipped theory and abstraction as fundamental to science.
[Figure: Artificial intelligence pioneer Charles Rosen with Shakey, the first autonomous robot. The Pentagon funded the project to research the idea of a future robotic sentry. (Image courtesy of SRI International)]
The cultural gulf between the worlds defined by artificial intelligence and by Engelbart’s contrarian idea, which he called “intelligence augmentation,” or “IA,” was already palpable. Indeed, when Engelbart paid a visit to MIT during the 1960s to demonstrate his project, Marvin Minsky complained that it was a waste of research dollars on something that would create nothing more than a glorified word processor.
Despite earning no respect from establishment computer scientists, Engelbart was comfortable with being viewed as outside the mainstream academic world. When attending the Pentagon DARPA review meetings that were held regularly to bring funded researchers together to share their work, he would always begin his presentations by saying, “This is not computer science.” And then he would go on to sketch a vision of using computers to permit people to “bootstrap” their projects by making learning and innovation more powerful.
Even if it wasn’t in the mainstream of computer science, the ideas captivated Bill Duvall. Before long he switched his allegiance and moved down the hall to work in Engelbart’s lab. In the space of less than a year he went from struggling to program the first useful robot to writing the software for the two computers that first connected over a network to demonstrate what would evolve into the Internet. Late in the evening on October 29, 1969, Duvall connected Engelbart’s NLS software in Menlo Park to a computer in Los Angeles controlled by another young hacker, via a data line leased from the phone company. Bill Duvall would become the first to make the leap from research that sought to replace humans with computers to research that used computing to augment the human intellect, and one of the first to stand on both sides of an invisible line that even today divides two rival, insular engineering communities.
Significantly, what started in the 1960s was then accelerated in the 1970s at a third laboratory also located near Stanford. Xerox’s Palo Alto Research Center extended ideas originally incubated at McCarthy’s and Engelbart’s labs, in the form of the personal computer and computer networking, which were in turn successfully commercialized by Apple and Microsoft. Among other things, the personal computing industry touched off what venture capitalist John Doerr identified during the 1990s as the “largest legal accumulation of wealth in history.”1
Most people know Doug Engelbart as the inventor of the mouse, but his more encompassing idea was to use a set of computer technologies to make it possible for small groups to “bootstrap” their projects by employing an array of ever more powerful software tools to organize their activities, creating what he described as the “collective IQ” that outstripped the capabilities of any single individual. The mouse was simply a gadget to improve our ability to interact with computers.
By creating SAIL, McCarthy had an impact upon the world that was in many ways equal to Engelbart’s. People like Alan Kay and Larry Tesler, who were both instrumental in the design of the modern personal computer, passed through his lab on their way to Xerox and subsequently to Apple Computer. Whitfield Diffie took away ideas that would lead to the cryptographic technology that secures modern electronic commerce.
There were, however, two other technologies being developed simultaneously at SRI and SAIL that are only now beginning to have a substantial impact: robotics and artificial intelligence software. Both are not only transforming economies; they are also fostering a new era of intelligent machines that is fundamentally changing the way we live.
The impact of both computing and robotics had been forecast before these laboratories were established. Norbert Wiener invented the concept of cybernetics at the very dawn of the computing era in 1948. In his book Cybernetics, he outlined a new engineering science of control and communication that foreshadowed both technologies. He also foresaw the implications of these new engineering disciplines, and two years after he wrote Cybernetics, his companion book, The Human Use of Human Beings, explored both the value and the danger of automation.
He was one of the first to foresee the twin possibilities that information technology might both escape human control and come to control human beings. More significantly he posed an early critique of the arrival of machine intelligence: the danger of passing decisions on to systems that, incapable of thinking abstractly, would make decisions in purely utilitarian terms rather than in consideration of richer human values.
Engelbart worked as an electronics technician at NASA’s Ames Research Center during the 1950s, and he had watched as aeronautical engineers first built small models to test in a wind tunnel and then scaled them up into full-sized airplanes. He quickly realized that the new silicon computer circuits could be scaled in the opposite direction—down into what would become known as the “microcosm.” By shrinking the circuitry it would be possible to place more circuits in the same space for the same cost. And dramatically, each time the circuit density increased, performance improvement would not be additive, but rather multiplicative. For Engelbart, this was a crucial insight. Within a year after the invention of the modern computer chip in the late 1950s he understood that there would ultimately be enough cheap and plentiful computing power to change the face of humanity.
This notion of exponential change—Moore’s law, for example—is one of the fundamental contributions of Silicon Valley. Computers, Engelbart and Moore saw, would become more powerful ever more quickly. Equally dramatically, their cost would continue falling, not incrementally but at an accelerating rate, to the point where remarkably powerful computers would soon be affordable to even the world’s poorest people. During the past half decade that acceleration has led to rapid improvement in technologies that are necessary components for artificial intelligence: computer vision, speech recognition, and robotic touch and manipulation. Machines now also taste and smell, but recently the more significant innovations have come from modeling human neurons in electronic circuits, which has begun to yield advances in pattern recognition—mimicking human cognition.
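To make the compounding concrete, here is a minimal sketch that assumes, purely for illustration, the two-year doubling period commonly associated with Moore’s law; the actual period has varied by technology and era, so the numbers only indicate how doubling multiplies rather than adds.

```python
# Illustrative only: compound doubling in the spirit of Moore's law.
# The two-year doubling period is an assumption for the arithmetic,
# not a claim about any particular chip or company.

def density_growth(years: float, doubling_period: float = 2.0) -> float:
    """Return the multiplicative improvement after `years` of steady doubling."""
    return 2 ** (years / doubling_period)

for span in (10, 20, 40):
    print(f"after {span} years: ~{density_growth(span):,.0f}x")

# after 10 years: ~32x
# after 20 years: ~1,024x
# after 40 years: ~1,048,576x
```

Ten doublings already yield a thousandfold gain, which is why an engineer looking at the first silicon chips could plausibly foresee computing cheap and plentiful enough to change the face of humanity.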
The quickening pace of AI innovation has led some, such as Rice University computer scientist Moshe Vardi, to predict that machines will take over a very significant fraction of all tasks now performed by humans, perhaps as soon as 2045.2 Even more radical voices argue that computers are evolving at such a rapid pace that they will outstrip the intellectual capabilities of humans within one, or at most two, more generations. The science-fiction author and computer scientist Vernor Vinge posed the notion of a computing “singularity,” in which machine intelligence makes such rapid progress that it crosses a threshold and then, in some as yet unspecified leap, becomes superhuman.
It is a provocative claim, but it is far too early to say whether it will prove true. Indeed, it is worth recalling the point made by longtime Silicon Valley observer Paul Saffo when thinking about the compounding impact of computing. “Never mistake a clear view for a short distance,” he has frequently reminded the Valley’s digerati. For those who believe that human labor will be obsolete within a few decades, it is worth remembering that even against the background of globalization and automation, the U.S. labor force continued to expand between 1980 and 2010. Economists Frank Levy and Richard J. Murnane recently pointed out that since 1964 the economy has added seventy-four million jobs.3
MIT economist David Autor has offered a detailed explanation of the consequences of the current wave of automation. Job destruction is not across the board, he argues, but instead has focused on the routinized tasks performed by those in the middle of the job structure—the post–World War II white-collar expansion. The economy has continued to expand at both the bottom and the top of the pyramid, leaving the middle class vulnerable while expanding markets for both menial and expert jobs.
Rather than extending that debate here, however, I am interested in exploring a different question first posed by Norbert Wiener in his early alarms about the introduction of automation. What will the outcome of McCarthy’s and Engelbart’s differing approaches be? What are the consequences of the design decisions made by today’s artificial intelligence researchers and roboticists, who, with ever greater ease, can choose between extending and replacing the “human in the loop” in the systems and products they create? By the same token, what are the social consequences of building intelligent systems that substitute for or interact with humans in business, entertainment, and day-to-day activities?
Two distinct technical communities with separate traditions, values, and priorities have emerged in the computing world. One, artificial intelligence, has relentlessly pressed ahead toward the goal of automating the human experience. The other, the field of human-computer interaction, or HCI, has been more concerned with the evolution of the idea of “man-machine symbiosis” foreseen by the pioneering psychologist J. C. R. Licklider at the dawn of the modern computing era as an interim step on the way to brilliant machines. Significantly, Licklider, as director of DARPA’s Information Processing Techniques Office in the mid-1960s, would be an early funder of both McCarthy and Engelbart. It was the Licklider era that would come to define the period when the Pentagon agency operated as a truly “blue-sky” funding organization, a period when, many argue, the agency had its most dramatic impact.
Wiener had raised an early alarm about the relationship between man and computing machines. A decade later Licklider pointed to the significance of the impending widespread use of computing and to how the arrival of computing machines differed from the previous era of industrialization. In a darker sense Licklider also forecast the arrival of the Borg of Star Trek notoriety. The Borg, which entered popular culture in 1989, is a fictional cybernetic alien species that assimilates individuals into a “hive mind,” a collective that subsumes the individual while intoning the phrase “You will be assimilated.”
Licklider wrote in 1960 about the distance between “mechanically extended man” and “artificial intelligence,” and warned about the early direction of automation technology: “If we focus upon the human operator within the system, however, we see that, in some areas of technology, a fantastic change has taken place during the last few years. ‘Mechanical extension’ has given way to replacement of men, to automation, and the men who remain are there more to help than to be helped. In some instances, particularly in large computer-centered information and control systems, the human operators are responsible mainly for functions that it proved infeasible to automate.”4 That observation seems fatalistic in accepting the shift toward automation rather than augmentation.