Machines of Loving Grace

by John Markoff


  Taylor, first at NASA and then at DARPA, would pave the way for systems used both to augment humans and to replace them. NASA was three years old when, in 1961, President Kennedy announced the goal of getting an American to the moon—and back—safely during that decade. Taylor found himself at an agency with a unique charter: to fundamentally shape how humans and machines interact, not just in flight but ultimately in all computer-based systems, from the desktop PC to today’s mobile robots.

  The term “cyborg,” for “cybernetic organism,” had been coined originally in 1960 by medical researchers thinking about intentionally enhancing humans to prepare them for the exploration of space.1 They foresaw a new kind of creature—half human, half mechanism—capable of surviving in harsh environments.

  In contrast, Taylor’s organization was funding the design of electronic systems that closely collaborated with humans while retaining a bright line distinguishing what was human and what was machine.

  In the early 1960s NASA was a brand-new government bureaucracy deeply divided by the question of the role of humans in spaceflight. For the first time it was possible to conceive of entirely automated flight in space. It was an unsettling idea, but an obvious future direction, one in which machines would steer and humans would be passengers; it was already the default approach of the Soviet space program. In the U.S. program, the division was highlighted by a series of incidents in which American astronauts intervened, proving the survival value of what in NASA parlance came to be called the “human in the loop.” On Gemini VI, for example, Wally Schirra was hailed as a hero after he held off pushing the abort button during a launch sequence, even though in doing so he violated a NASA mission rule.2

  The human-in-the-loop debates became a series of intensely fought battles inside NASA during the 1950s and 1960s. When Taylor arrived at the agency in 1961 he found an engineering culture in love with a body of mathematics known as control theory, Norbert Wiener’s cybernetic legacy. These NASA engineers were designing the nation’s aeronautic as well as astronautic flight systems, systems of such complexity that the engineers found them abstractly, some might say inherently, beautiful. Taylor could see early on that the aerospace designers were wedded to the aesthetics of control as much as to the practical argument that the systems needed to be increasingly automated because humans weren’t fast or reliable enough to control them.

  He had stumbled into an almost intractable challenge, and hence a deeply divided technical culture. The dispute over the role of humans in spaceflight pervaded even the highest echelons of the agency, and it was easy to predict which side of the debate each manager would take: former jet pilots favored keeping a human in the system, while experts in control theory chose full automation.

  As a program manager in 1961, Taylor was responsible for several areas of research funding, one of them called “manned flight control systems.” A colleague in the same funding office was responsible for “automatic control systems.” The two got along well enough, but they were locked in a bitter budgetary zero-sum game. Taylor came to understand the arguments his colleagues made in support of automated control, though he was responsible for mastering the arguments for manned control. His best card in the debate was that he had the astronauts on his side, and they had tremendous clout. NASA’s corps of astronauts had mostly been test pilots. They were the pride of the space agency and proved to be invaluable allies. Taylor had funded the design and construction of simulator technology used extensively in astronaut training—systems for practicing spacecraft maneuvers, like docking—since the early days of the Mercury program, and had spent hours talking with astronauts about the strengths and weaknesses of the different virtual training environments. He found that the astronauts were keenly aware of the debate over the proper role of humans in the space program. They had a huge stake in whether they would have a role in future space systems or be little more than another batch of dogs and smart monkeys coming along for the ride.

  The political battle over the human in the loop was waged over two divergent narratives: that of the heroic astronauts landing on the surface of the moon and that of the specter of a catastrophic accident culminating in the deaths of the astronauts—and potentially, as a consequence, the death of the agency. The issue, however, was at least temporarily settled during the first human moon landing, when Neil Armstrong heroically took command amid a series of computer alarms and piloted the Apollo 11 lunar module safely to the lunar surface. The moon landing and other similar feats of courage, such as Wally Schirra’s decision not to abort the earlier Gemini flight, firmly established a view of human-machine interaction that elevates human decision-making above the fallible machines of our mythology. Indeed, the macho view of astronauts as modern-day Lewises and Clarks was from the beginning deeply woven into the NASA ethos, and it stood in striking contrast to the early Soviet decision to train women cosmonauts.3 The American view of human-controlled systems was long partially governed by perceived distinctions between U.S. and Soviet approaches to aeronautics as well as astronautics. The Vostok spacecraft were more automated, and so Soviet cosmonauts were basically passengers rather than pilots. Yet the original American commitment to human-controlled spaceflight was made when aeronautical technology was in its infancy. In the ensuing half century, computers and automated systems have become vastly more reliable.

  For Taylor, the NASA human-in-the-loop wars were a formative experience that governed his judgment at both NASA and DARPA, where he championed and sponsored technological breakthroughs in computing, robotics, and artificial intelligence. While at NASA, Taylor fell into the orbit of J. C. R. Licklider, whose interests in psychology and information technology led him to anticipate the full potential of interactive computing. In his seminal 1960 paper “Man-Computer Symbiosis,” Licklider foresaw an era when computerized systems would entirely displace humans. However, he also predicted an interim period, one that might span from fifteen to five hundred years, in which humans and computers would cooperate. He believed that period would be “intellectually the most creative and exciting [time] in the history of mankind.”

  Taylor moved to ARPA in 1965 as Licklider’s protégé. He set about funding the ARPAnet, the first nationwide research-oriented computer network. In 1968 the two men coauthored a follow-up to Licklider’s symbiosis paper titled “The Computer as a Communication Device.” In it, Licklider and Taylor were possibly the first to delineate the coming impact of computer networks on society.

  Today, even after decades of research in human-machine and human-computer interaction in the airplane cockpit, the argument remains unsettled—and has emerged again with the rise of autonomous navigation in trains and automobiles. While Google leads research in driverless cars, the legacy automobile industry has started to deploy intelligent systems that can offer autonomous driving in some well-defined cases, such as stop-and-go traffic jams, but then return the car to human control in situations recognized as too complex or risky for the autopilot. It may take seconds for a human sitting in the driver’s seat, possibly distracted by an email or worse, to regain “situational awareness” and safely resume control of the car. Indeed, the Google researchers may have already come up against the limits of autonomous driving. There is a growing consensus that the “handoff” problem—returning manual control of an autonomous car to a human in an emergency—may not actually be solvable. If that proves true, the development of the safer cars of the future will tend toward augmentation technology rather than automation technology. Completely autonomous driving might ultimately be limited to special cases like low-speed urban services and freeway driving.

  Nevertheless, the NASA disputes were a harbinger of the emerging world of autonomous machines. During the first fifty years of interactive computing, beginning in the mid-sixties, computers largely augmented humans instead of replacing them. The technologies that became the hallmark of Silicon Valley—personal computing and the Internet—largely amplified human intellect, although it was undeniably the case that an “augmented” human could do the work of several (former) coworkers. Today, in contrast, system designers have a choice. As AI technologies including vision, speech, and reasoning have begun to mature, it is increasingly possible to design humans either in or out of “the loop.”

  Funded first by J. C. R. Licklider and then, beginning in 1965, by Bob Taylor, John McCarthy and Doug Engelbart worked in laboratories just miles apart from each other at the outset of the modern computing era. They might as well have been in different universes. Both were funded by ARPA, but they had little if any contact. McCarthy was a brilliant, if somewhat cranky, mathematician and Engelbart was an Oregon farm boy and a dreamer.

  The outcome of their competing pioneering research was unexpected. When McCarthy came to Stanford to create the Stanford Artificial Intelligence Laboratory in the mid-1960s, his work was at the very heart of computer science, focusing on big concepts like artificial intelligence and proving software correctness using formal logic. Engelbart, on the other hand, set out to build a “framework” for augmenting the human intellect. It was initially a more nebulous concept viewed as far outside the mainstream of academic computer science, and yet for the first three decades of the interactive computing era Engelbart’s ideas had more worldly impact. Within a decade the first modern personal computers emerged, followed later by information-sharing technologies like the World Wide Web, both of which can be traced in part to Engelbart’s research.

  Since then Engelbart’s adherents have transformed the world. They have extended human capabilities everywhere in modern life. Today, shrunk into smartphones, personal computers will soon be carried by all but the allergic or iconoclastic adult and teenager. Smartphones are almost by definition assembled into a vast distributed computing fabric woven together by the wireless Internet. They are also relied on as artificial memories. Today many people are literally unable to hold a conversation or find their way around town without querying them.

  While Engelbart’s original research led directly to the PC and the Internet, McCarthy’s lab was most closely associated with two other technologies—robotics and artificial intelligence. There has been no single dramatic breakthrough. Rather, the falling cost of computing (in both processing and storage), the gradual shift from the symbolic, logic-based approach of the first generation of AI research to the more pragmatic statistical and machine-learning algorithms of the second, and the declining price of sensors now offer engineers and programmers the canvas to create computerized systems that see, speak, listen, and move around in the world.

  The balance has shifted. Computing technologies are emerging that can be used to replace and even outpace humans. At the same time, in the ensuing half century there has been little movement toward unification of the two fields, IA and AI, the offshoots of Engelbart’s and McCarthy’s original work. Rather, as computing and robotics systems have grown from laboratory curiosities into the fabric that weaves together modern life, the members of the two communities, holding opposing viewpoints, have for the most part continued to talk past each other.

  The human-computer interaction community keeps debating metaphors ranging from windows and mice to autonomous agents, but has largely operated within the philosophical framework originally set down by Engelbart—that computers should be used to augment humans. In contrast, the artificial intelligence community has for the most part pursued performance and economic goals elaborated in equations and algorithms, largely unconcerned with defining or in any way preserving a role for individual humans. In some cases the impact is easily visible, such as manufacturing robots that directly replace human labor. In other cases it is more difficult to discern the direct effect on employment caused by deployment of new technologies. Winston Churchill said: “We shape our buildings, and afterwards our buildings shape us.” Today our systems have become immense computational edifices that define the way we interact with our society, from how our physical buildings function to the very structure of our organizations, whether they are governments, corporations, or churches.

  As the technologies marshaled by the AI and IA communities continue to reshape the world, alternative visions of the future play out: In one world humans coexist and prosper with the machines they’ve created—robots care for the elderly, cars drive themselves, and repetitive labor and drudgery vanish, creating a new Athens where people do science, make art, and enjoy life. It will be wonderful if the Information Age unfolds in that fashion, but such an outcome is hardly a foregone conclusion. It is equally possible to make the case that these powerful and productive technologies, rather than freeing humanity, will instead facilitate a further concentration of wealth, foment vast new waves of technological unemployment, cast an inescapable surveillance net around the globe, and unleash a new generation of autonomous superweapons.

  When Ed Feigenbaum finished speaking the room was silent. No polite applause, no chorus of boos. Just a hush. Then the conference attendees filed out of the room and left the artificial intelligence pioneer alone at the podium.

  Shortly after Barack Obama was elected president in 2008, it seemed possible that the Bush administration’s plan for space exploration, which focused on placing a manned base on the moon, might be replaced with an even more audacious program that would involve missions to asteroids and possibly even manned flights to Mars, with human landings on the Martian moons Phobos and Deimos.4 Shorter-term goals included the possibility of sending astronauts to Lagrangian points about one million miles from Earth, where the combined gravitational pulls of the Earth and the Sun allow a spacecraft to hold its position relative to both, creating convenient long-term parking spots for ambitious devices like a next-generation Hubble Space Telescope.

  Human exploration of the solar system was the pet project of G. Scott Hubbard, a former head of NASA’s Ames Research Center in Mountain View, California, who was heavily backed by the Planetary Society, a nonprofit that advocates for space exploration and science. As a result, NASA organized a conference to discuss the possible resurrection of human exploration of the solar system. A star-studded cast of space luminaries, including astronaut Buzz Aldrin, the second human to set foot on the moon, and celebrity astrophysicist Neil deGrasse Tyson, showed up for the day. One of the panels focused on the role of robots, which the conference organizers envisioned as intelligent systems that would assist humans on long flights to other worlds.

  Feigenbaum had been a student of one of the founders of the field of AI, Herbert Simon, and he had led the development of the first expert systems as a young professor at Stanford. A believer in the potential of artificial intelligence and robotics, he had been irritated by a past run-in with a Mars geologist who had insisted that sending a human to Mars would provide more scientific information in just a few minutes than a complete robot mission might return. Feigenbaum also had a deep familiarity with the design of space systems. Moreover, having once served as chief scientist of the air force, he was a veteran of the human-in-the-loop debates stretching back to the space program.

  He showed up to speak at the panel with a chip on his shoulder. Speaking from a simple set of slides, he sketched out an alternative to the vision of manned flight to Mars. He rarely used capital letters in his slides, but he did this time:

  ALMOST EVERYTHING THAT HAS BEEN LEARNED ABOUT THE SOLAR SYSTEM AND SPACE BEYOND HAS BEEN LEARNED BY PEOPLE ON EARTH ASSISTED BY THEIR NHA (NON-HUMAN AGENTS) IN SPACE OR IN ORBIT5

  The whole notion of sending humans to another planet when robots could perform just as well—and maybe even better—for a fraction of the cost and with no risk of human life seemed like a fool’s errand to Feigenbaum. His point was that AI systems and robots in the broader sense of the term were becoming so capable so quickly that the old human-in-the-loop idea had lost its mystique as well as its value. All the coefficients on the nonhuman side of the equation had changed. He wanted to persuade the audience to start thinking in terms of agents, to shift gears and think about humans exploring the solar system with augmented senses. It was not a message that the audience wanted to hear. As the room emptied, a scientist who worked at NASA’s Goddard Space Flight Center came to the table and quietly said that she was glad that Feigenbaum had said what he did. In her job, she whispered, she could not say that.

  Feigenbaum’s encounter underscores the reality that there isn’t a single “right” answer in the dichotomy between AI and IA. Sending humans into space is a passionate ideal for some. For others, like Feigenbaum, the vast resources the goal requires are wasted: intelligent machines are perfectly suited for the hostile environment beyond Earth, and in designing them we can perfect technologies that can be used to good effect on Earth. His quarrel also suggests that there will be no easy synthesis of the two camps.

  While the separate fields of artificial intelligence and human-computer interaction have largely remained isolated domains, there are people who have lived in both worlds and researchers who have famously crossed from one camp to the other. Microsoft cognitive psychologist Jonathan Grudin first noted that the two fields have risen and fallen in popularity, largely in opposition to each other. When the field of artificial intelligence was more prominent, human-computer interaction generally took a backseat, and vice versa.

  Grudin thinks of himself as an optimist. He has written that he believes a grand convergence of the two fields may yet be possible. Yet the relationship between them remains contentious, and the human-computer interaction perspective, pioneered by Engelbart and championed by people like Grudin and his mentor Donald Norman, is perhaps the most significant counterweight to artificial intelligence–oriented technologies that have the twin potential for either liberating or enslaving humanity.

 
