Machines of Loving Grace


by John Markoff


  Trower does not think that Robby will displace human workers. Rising costs and a shrinking supply of workers will instead create a situation in which a helper robot can extend the capabilities of both human patients and helpers. Human caregivers already cost $70,000 or more a year, Trower argues, and a low-cost robot will actually extend assistance to those who cannot afford it.

  Ignoring Tufekci’s fears, Trower has focused his engineering skills on extending and assisting humans. But when will these machines meet our expectations for them? And how will those who are cared for greet them? These remain open questions, although a wealth of anecdotal evidence suggests that, as speech recognition and speech synthesis technologies continue to improve, as sensors fall in cost, and as roboticists develop more agile machines, we will gratefully accept them. Moreover, for an Internet-savvy generation that has grown up with tablets, iPhones, and Siri, caregiving machines will seem like second nature. Robots—elder-care workers, service workers, drivers, and soldiers—are an inevitability. It is more difficult, however, to predict our relationship with these robots. Tales such as that of the golem have woven the idea of a happy slave that serves our every desire deep into our psyches as well as our mythology. In the end, the emergence of intelligent machines that largely displace human labor will undoubtedly instigate a crisis of human identity.

  For now, Trower has focused on a clear and powerful role for robots as assistants for the infirm and the elderly. This is an excellent example of AI used directly in the service of humans, but what happens if AI-based machines spread quickly through the economy? We can only hope that the Keynesians are vindicated—in the long run.

  The twin paths of AI and IA place a tremendous amount of power and responsibility in the hands of the two communities of designers described in this book. For example, when Steve Jobs set out to assemble a team of engineers to reinvent personal computing with the Lisa and the Macintosh, he had a clear goal in mind. Jobs thought of computing as a “bicycle for our minds.” Personal computing, first proposed by a small group of engineers and visionaries in the 1970s, has since had a tremendous impact on the economy and the modern workforce. It has both empowered individuals and unlocked human creativity on a global scale.

  Three decades later, Andy Rubin’s robotics project at Google represented the work of a similar small group of engineers advancing the state of the art in robotics. Rubin set out with an equally clear—if dramatically different—vision in mind. When he started acquiring technology and talent for Google’s foray into robotics, he described a ten- to fifteen-year effort to radically advance an array of developments in robotics, from walking machines to robot arms and sensor technology. He sketched a vision of bipedal Google delivery robots riding to homes on the backs of Google cars, hopping off to deliver packages.

  Designing humans either into or out of computer systems is increasingly possible today. Further advances in both artificial intelligence and augmentation tools will confront roboticists and computer scientists with clear choices about the design of the systems in the workplace and, increasingly, in the surrounding world. We will soon be living—either comfortably or uncomfortably—with autonomous machines.

  Brad Templeton, a software designer and consultant to the Google car project, has asserted, “A robot will be truly autonomous when you instruct it to go to work and it decides to go to the beach instead.”5 It is a wonderful turn of phrase, but he has conflated self-awareness with autonomy. Today, machines are beginning to act without meaningful human intervention, or at a level of independence that we can consider autonomous. This level of autonomy poses difficult questions for designers of intelligent machines. For the most part, however, engineers ignore the ethical issues posed by the use of computer technologies. Only occasionally does the community of artificial intelligence researchers sense a quiver of foreboding.

  At the Humanoids 2013 conference in Atlanta, which focused on the design and application of robots that appear humanlike, Ronald Arkin, a Georgia Tech roboticist, made a passionate plea to the audience in his speech “How to NOT Build a Terminator.” He reminded the group that Isaac Asimov, in addition to his famous three laws of robotics, later added a fundamental “zeroth” law, which states, “A robot may not harm humanity, or, by inaction, allow humanity to come to harm.”6 Speaking to a group of more than two hundred roboticists and AI experts from universities and corporations, Arkin challenged them to think more deeply about the consequences of automation. “We all know that [the DARPA Robotics Challenge] is motivated by urban seek-and-destroy,” he said sardonically, adding, “Oh no, I meant urban search-and-rescue.”

  The line between robots as rescuers and enforcers is already gray, if it exists at all. Arkin showed clips from sci-fi movies, including James Cameron’s 1984 The Terminator. Each of the clips depicted evil robots performing tasks that DARPA has specified as part of its robotics challenge: clearing debris, opening doors, breaking through walls, climbing ladders and stairs, and riding in utility vehicles. Designers can exploit these capabilities either constructively or destructively, depending on their intent. The audience laughed nervously—but Arkin refused to let them off the hook. “I’m being facetious,” he said, “but I’m just trying to tell you that these kinds of technologies you are developing may have uses in places you may not have fully envisioned.” In the world of weapons design, such unintended consequences have long been a concern with what are described as “dual-use” technologies, like nuclear power, which can be used to produce both electricity and weapons. Now the same is increasingly true of robotics and artificial intelligence technologies. These technologies are dual-use not just as weapons, but also in terms of their potential to either augment or replace humans. Today, we are still “in the loop”—machines that either replace or augment humans are the product of human designers, so the designers cannot easily absolve themselves of responsibility for the consequences of their inventions. “If you would like to create a Terminator, then I would contend: Keep doing what you are doing, because you are creating component technologies for such a device,” Arkin said. “There is a big world out there, and this world is listening to the consequences of what we are creating.”

  The issues and complications of automation have extended beyond the technical community. In a little-noted, unclassified Pentagon report entitled “The Role of Autonomy in DoD Systems,”7 the report’s authors pointed out the ethical quandaries involved in the automation of battle systems. The military itself is already struggling to negotiate the tension between autonomous systems, like drones, that promise both accuracy and cost efficiency, and the consequences of stepping ever closer to the line where humans are no longer in control of decisions on life and death. Arkin has argued elsewhere that, unlike human soldiers, autonomous war-fighting robots might have the advantage of feeling no threat to their personal safety, which could potentially reduce collateral damage and help prevent war crimes. This question is part of a debate that dates back at least to the 1970s, when the Air Force generals who controlled the nation’s fleets of strategic bombers used the human-in-the-loop argument—that it was possible to recall a bomber and use human pilots to assess damage—in an attempt to justify the value of bomber aircraft in the face of more modern ballistic missiles.

  But Arkin also posed a new set of ethical questions in his talk. What if we have moral robots but the enemy doesn’t? There is no easy answer to that question. Indeed, increasingly intelligent and automated weapons technologies have inspired the latest arms race. Adding inexpensive intelligence to weapons systems threatens to change the balance of power between nations.

  When Arkin concluded his talk at the stately Historic Academy of Medicine in Atlanta, Gill Pratt, who directed DARPA’s Robotics Challenge, was one of the first to respond. He didn’t refute Arkin’s point. Instead, he acknowledged that robots are a “dual-use” technology. “It’s very easy to pick on robots that are funded by the Defense Department,” he said. “It’s very easy to pick on a robot that looks like the Terminator, but in fact with dual-use being everywhere, it really doesn’t matter. If you’re designing a robot for health care, for instance, the autonomy it needs is actually in excess of what you would need for a disaster response robot.”8 Advanced technologies have long posed questions about dual use. Now, artificial intelligence and machine autonomy have reframed the problem. Until now, dual-use technologies have explicitly required that humans make ethical decisions about their use. The specter of machine autonomy either places human ethical decision-making at a distance or removes it entirely.

  In other fields, certain issues have forced scientists and technologists to consider the potential consequences of their work, and many of those scientists acted to protect humanity. In February of 1975, for example, Nobel laureate Paul Berg encouraged the elite of the then new field of biotechnology to meet at the Asilomar Conference Grounds in Pacific Grove, California. At the time, recombinant DNA—inserting new genes into the DNA of living organisms—was a fledgling development. It presented both the promise of dramatic advances in medicine, agriculture, and new materials and the horrifying possibility that scientists could unintentionally bring about the end of humanity by engineering a synthetic plague. For the scientists, the meeting led to an extraordinary resolution. The group recommended that molecular biologists refrain from certain kinds of experiments and embark on a period of self-regulation, pausing their research while they considered how to make it safe. To monitor the field, biotechnologists set up an independent committee at the National Institutes of Health to review research. After a little more than a decade, the NIH had gathered sufficient evidence from a wide array of experiments to suggest that the restrictions on research could be lifted. It was a singular example of how society might thoughtfully engage with the consequences of scientific advance.

  Following in the footsteps of the biologists, in February of 2009, a group of artificial intelligence researchers and roboticists also met at Asilomar to discuss the progress of AI after decades of failure. Eric Horvitz, the Microsoft AI researcher who was serving as president of the Association for the Advancement of Artificial Intelligence, called the meeting. During the previous five years, researchers in the field had begun discussing twin alarms. One came from Ray Kurzweil, who had heralded the relatively near-term arrival of computer superintelligences. The other came from Bill Joy, a founder of Sun Microsystems, who offered a darker view of artificial intelligence in a Wired magazine article detailing a trio of technology threats from the fields of robotics, genetic engineering, and nanotechnology.9 Joy believed that the technologies represented a triple threat to human survival, and he did not see an obvious solution.

  The artificial intelligence researchers who met at Asilomar chose to act less cautiously than their predecessors in the field of biotechnology. The group of computer science and robotics luminaries, including Sebastian Thrun, Andrew Ng, Manuela Veloso, and Oren Etzioni, who is now the director of Paul Allen’s Allen Institute for Artificial Intelligence, generally discounted the possibility of superintelligences that would surpass humans as well as the possibility that artificial intelligence might spring spontaneously from the Internet. They agreed that robots that can kill autonomously have already been developed, yet, when it emerged toward the end of 2009, the group’s report proved to be an anticlimax. The field of AI had not yet arrived at the moment of imminent threat. “The 1975 meeting took place amidst a recent moratorium on recombinant DNA research. In stark contrast to that situation, the context for the AAAI panel is a field that has shown relatively graceful, ongoing progress. Indeed, AI scientists openly refer to progress as being somewhat disappointing in its pace, given hopes and expectations over the years,”10 the authors wrote in a report summarizing the meeting.

  Five years later, however, the question of machine autonomy emerged again. In 2013, when Google acquired DeepMind, a British artificial intelligence firm that specialized in machine learning, popular belief held that roboticists were very close to building completely autonomous robots. The tiny start-up had produced a demonstration that showed its software playing video games, in some cases better than human players. Reports of the acquisition were also accompanied by the claim that Google would set up an “ethics panel” because of concerns about potential uses and abuses of the technology. Shane Legg, one of the cofounders of DeepMind, acknowledged that the technology would ultimately have dark consequences for the human race. “Eventually, I think human extinction will probably occur, and technology will likely play a part in this.”11 For an artificial intelligence researcher who had just reaped hundreds of millions of dollars, it was an odd position to take. If someone believes that technology will likely evolve to destroy humankind, what could motivate them to continue developing that same technology?

  At the end of 2014, the 2009 AI meeting at Asilomar was reprised when a new group of AI researchers, funded by one of the Skype founders, met in Puerto Rico to again consider how to make their field safe. Despite a new round of alarming statements about AI dangers from luminaries such as Elon Musk and Stephen Hawking, the attendees wrote an open letter that notably fell short of the call to action that had been the result of the original 1975 Asilomar biotechnology meeting.

  Given that DeepMind had been acquired by Google, Legg’s public philosophizing is particularly significant. Today, Google is the clearest example of the potential consequences of AI and IA. Founded on an algorithm that efficiently collected human knowledge and then returned it to humans as a powerful tool for finding information, Google is now engaged in building a robot empire. The company will potentially create machines that replace human workers, like drivers, delivery personnel, and electronics assembly workers. Whether it will remain an “augmentation” company or become a predominantly AI-oriented organization is unclear.

  The new concerns about the potential threat from AI and robotics evoke the dilemma that confronted the fictional Tyrell Corporation in the science-fiction movie Blade Runner, which raised the ethical issues posed by the design of intelligent machines. Early in the movie, Deckard, a police detective, confronts Rachael, an employee of the firm that makes robots, or replicants, and asks her if an artificial owl is expensive. She suggests that he doesn’t believe the company’s work is of value. “Replicants are like any other machine,” he responds. “They’re either a benefit or a hazard. If they’re a benefit, it’s not my problem.”12

  How long will it be before Google’s intelligent machines, based on technologies from DeepMind and Google’s robotics division, raise the same questions? Few movies have had the cultural impact of Blade Runner. It has been released in seven different versions, one of them a director’s cut, and a sequel is on the docket. It tells the story of a retired Los Angeles police detective in 2019 who is recalled to hunt down and kill a group of genetically engineered artificial beings known as replicants. These replicants were originally created to work off-planet and have returned to Earth illegally in an effort to force their designer to extend their artificially limited life spans. A modern-day Wizard of Oz, the movie captured a technologically literate generation’s hopes and fears. From the Tin Man, who gains a heart and thus a measure of humanity, to the replicants who are so superior to humanity that Deckard is ordered to terminate them, humanity’s relationship with its robots has become the defining question of the era.

  These “intelligent” machines may never be intelligent in a human sense or self-aware. That’s beside the point. Machine intelligence is improving quickly and approaching a level where it will increasingly offer the compelling appearance of intelligence. When it opened in December 2013, the movie Her struck a chord, most likely because millions of people already interact with personal assistants such as Apple’s Siri. Her-like interactions have become commonplace. Increasingly, as computing moves beyond desktops and laptops and becomes embedded in everyday objects, we will expect those objects to communicate intelligently. In the years while he was designing Siri and the project was still hidden from the public eye, Tom Gruber referred to this trend as “intelligence at the interface.” He felt he had found a way to blend the competing worlds of AI and IA.

  And indeed, the emergence of software-based intelligent assistants hints at a convergence between the work of the disparate communities of AI and human-computer interaction designers. Alan Kay, who conceived of the first modern personal computer, has said that in his early explorations of computer interfaces, he was working roughly ten to fifteen years in the future, while Nicholas Negroponte, one of the first people to explore the ideas of immersive media, virtual reality, and conversational interfaces, was working twenty-five to thirty years in the future. Like Negroponte, Kay asserts that the best computerized interfaces are the ones that are closer to theater, and the best theater draws the audience into its world so completely that they feel as if they are part of it. That design focus on interactive performance points directly toward interactive systems that will function more as AI-based “colleagues” than as computerized tools.

  How will these computer avatars transform society? Humans are already spending a significant fraction of their waking hours either interacting with other humans through computers or directly interacting with humanlike machines, either in fantasy and video games or in a plethora of computerized assistance systems that range from so-called FAQbots to Siri. We even use search engines in our everyday conversations with others.

  Will these AI avatars be our slaves, our assistants, our colleagues, or some mixture of all three? Or more ominously, will they become our masters? Considering robots and artificial intelligences in terms of social relationships may initially seem implausible. However, given that we tend to anthropomorphize our machines, we will undoubtedly develop social relationships with them as they become increasingly autonomous. Indeed, reflecting on human-robot relations is not so different from considering the traditional relations between humans and slaves, who have been dehumanized by their masters throughout history. Hegel explored the relationship between master and slave in The Phenomenology of Spirit, and his ideas about the “master-slave dialectic” have influenced thinkers ranging from Karl Marx to Martin Buber. At the heart of Hegel’s dialectic is the insight that both the master and the slave are dehumanized by their relationship.

 
