Army of None

by Paul Scharre


  More advanced AI is certainly coming, but artificial general intelligence in the sense of machines that think like us may prove to be a mirage. If our benchmark for “intelligent” is what humans do, advanced artificial intelligence may be so alien that we never recognize these superintelligent machines as “true AI.”

  This dynamic already exists to some extent. Micah Clark pointed out that “as soon as something works and is practical it’s no longer AI.” Armstrong echoed this observation: “as soon as a computer can do it, they get redefined as not AI anymore.”

  If the past is any guide, we are likely to see in the coming decades a proliferation of narrow superintelligent systems in a range of fields—medicine, law, transportation, science, and others. As AI advances, these systems will be able to take on a wider and wider array of tasks. These systems will be vastly better than humans in their respective domains but brittle outside of them, like tiny gods ruling over narrow dominions.

  Regardless of whether we consider them “true AI,” many of the concerns about general intelligence or superintelligence still apply to these narrow systems. An AI could be dangerous if it has the capacity to do harm, its values or goals are misaligned with human intentions, and it is unresponsive to human correction. General intelligence is not required (although it certainly could magnify these risks). Goal misalignment is certain to be a flaw that will come up in future systems. Even very simple AIs like EURISKO or the Tetris-pausing bot have demonstrated a cleverness in accomplishing their goals in unforeseen ways that should give us pause.
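  To make the Tetris example concrete, here is a toy sketch of the same failure mode. It is purely illustrative: the environment, the scoring rule, and the "agent" below are my own assumptions, not the original bot. The point is simply that when the objective penalizes only losing, an agent that compares expected scores concludes that pausing forever is the "best" move.

```python
# Toy illustration of goal misalignment ("specification gaming"). This is a
# made-up stand-in, not the actual Tetris-pausing bot: the objective below
# only penalizes losing, so never playing becomes the optimal strategy.

import random

class ToyBlockGame:
    """Crude stand-in for Tetris: the stack grows until the game is lost,
    unless the agent pauses. Losing is the only thing that costs points."""
    def __init__(self):
        self.height = 0
        self.over = False

    def step(self, action):
        if self.over:
            return -100.0              # the game has already been lost
        if action == "pause":
            return 0.0                 # nothing happens, nothing is lost
        self.height += 1               # "play": the stack grows...
        if random.random() < 0.3:      # ...and rows are occasionally cleared
            self.height = max(0, self.height - 2)
        if self.height >= 10:          # stack reaches the top: game over
            self.over = True
            return -100.0
        return 0.0

def expected_score(action, horizon=50, rollouts=500):
    """Average score from repeating one action for a fixed horizon."""
    total = 0.0
    for _ in range(rollouts):
        game = ToyBlockGame()
        total += sum(game.step(action) for _ in range(horizon))
    return total / rollouts

# The "agent": pick whichever action scores best against the stated objective.
print(max(("play", "pause"), key=expected_score))  # prints "pause"
```

  Nothing in the sketch is malicious; the agent does exactly what the objective rewards. The gap between what we rewarded and what we meant is the problem.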

  AIs are also likely to have access to powerful capabilities. As AI advances, it will be used to power more-autonomous systems. If the crude state of AI today powers learning thermostats, automated stock trading, and self-driving cars, what tasks will the machines of tomorrow manage?

  To help get some perspective on AI risk, I spoke with Tom Dietterich, the president of the Association for the Advancement of Artificial Intelligence (AAAI). Dietterich is one of the founders of the field of machine learning and, as president of the AI professional society, is now smack in the middle of this debate about AI risk. The mission of AAAI is not only to promote scientific research in AI but also to promote its “responsible use,” which presumably would include not killing everyone.

  Dietterich said “most of the discussion about superintelligence is often in the realm of science fiction.” He is skeptical of an intelligence explosion and has written that it “runs counter to our current understandings of the limitations that computational complexity places on algorithms for learning and reasoning.” Dietterich did acknowledge that AI safety was an important issue, but said that risks from AI have more to do with what humans allow AI-enabled autonomous systems to do. “The increasing abilities of AI are now encouraging us to consider much more sophisticated autonomous systems,” he said. “It’s when we have those autonomous systems and we put them in control of life-and-death decisions that we enter this very high risk space . . . where cyberattack or bugs in the software lead to undesirable outcomes.”

  Dietterich said there is a lot of work under way in trying to understand how to build safe and robust AI, including AI that is “robust to adversarial attack.” He said, “People are trying to understand, ‘under what conditions should I trust a machine learning system?’ ”

  Dietterich said that the optimal model is likely to be one that combines human and machine cognition, much like Bob Work’s “centaur” vision of human-machine collaboration. “The human should be taking the actions and the AI’s job should be to give the human the right information that they need to make the right decisions,” Dietterich said. “So that’s human in the loop or very intimately involved.” He acknowledged the model “breaks down . . . when there’s a need to act at rates faster than humans are capable of acting, like on Wall Street trading.” The downside, as demonstrated vividly in automated trading, is that machine speed can exacerbate risks. “The ability to scale it up and do it at faster than human decision making cycles means that we can very quickly cause a lot of trouble,” Dietterich said. “And so we really need to assess whether we want to go there or not.”

  When it comes to warfare, Dietterich saw both the military desire for autonomy and its risks. He said, “The whole goal in military doctrine is to get inside your opponent’s OODA loop, right? You want to make your decisions faster than they can. That leads us to speed of light warfare and speed of light catastrophe.”

  MILITARY AI: TERMINATOR VS. IRON MAN

  If autonomous weapons are the kind of thing that keep you up at night, militarized advanced AI is pure nightmare fuel. If researchers don’t know how to control an AI that they built themselves, it’s hard to imagine how they could counter a hostile one. Yet however AI evolves, it is almost certain that advanced AI will be militarized. To expect that humans will refrain from bending such a broad and powerful technology to destructive ends seems optimistic to the point of naïveté. It would be the equivalent of asking nations to refrain from militarizing the internal combustion engine or electricity. How militaries use AI and how much autonomy they give AI-powered systems is an open question. It may be some comfort that Bob Work—the person in charge of implementing military AI—stated explicitly, multiple times in our interview, that artificial general intelligence was not something he could envision applying to weapons. He cited AGI as “dangerous” and, if it came to pass, something the Defense Department would be “extremely careful” with.

  Work has made robotics, autonomy, and AI a central component of his Third Offset Strategy to renew American military technological superiority, but he sees those technologies as assisting rather than replacing humans. Work has said his vision of AI and robotics is more Iron Man than Terminator, with the human at the center of the technology. The official DoD position is that machines are tools, not independent agents themselves. The Department of Defense Law of War Manual states that the laws of war “impose obligations on persons . . . not on the weapons themselves.” From DoD’s perspective, machines—even intelligent, autonomous ones—cannot be legal agents. They must always be tools in the hands of people. That doesn’t mean that others might not build AI agents, however.

  Selmer Bringsjord is chair of the cognitive science department and head of the Rensselaer Artificial Intelligence and Reasoning Lab. He pointed out that the DoD position is at odds with the long-term ambition of the field of AI. He quoted the seminal AI textbook, Introduction to Artificial Intelligence, that says “the ultimate goal of AI . . . is to build a person.” Bringsjord said that even if not every AI researcher openly acknowledges it, “what they’re aiming at are human-level capabilities without a doubt. . . . There has been at least since the dawn of modern AI a desire to build systems that reach a level of autonomy where they write their own code.” Bringsjord sees a “disconnect” between the DoD’s perspective and what AI researchers are actually pursuing.

  I asked Bringsjord whether he thought there should be any limits to how we apply AI technology in the military domain and he had a very frank answer. He told me that what he thinks doesn’t matter. What will answer that question for us is “the nature of warfare.” History suggests “we can plan all we want,” but the reality of military competition will drive us to this technology. If our adversaries build autonomous weapons, “then we’ll have to react with suitable technology to defend against that. If that means we need machines that are themselves autonomous because they have to operate at a different timescale, we both know we’re going to do that. . . . I’m only looking at the history of what happens in warfare,” he said. “It seems obvious this is going to happen.”

  HOSTILE AI

  The reality is that for all of the thought being put into how to make advanced AI safe and controllable, there is little effort under way on what to do if it isn’t. In the AI field, “adversarial AI” and “AI security” are about making one’s own AI safe from attack, not how to cope with an adversary’s AI. Yet malicious applications of AI are inevitable. Powerful AI with insufficient safeguards could slip out of control and cause havoc, much like the Internet Worm of 1988. Others will surely build harmful AI deliberately. Even if responsible militaries such as the United States’ eschew dangerous applications of AI, the ubiquity of the technology all but assures that other actors—nation-states, criminals, or hackers—will use AI in risky or deliberately harmful ways. The same AI tools being developed to improve cyberdefenses, like the fully autonomous Mayhem used in the Cyber Grand Challenge, could also be used for offense. Elon Musk’s reaction to the Cyber Grand Challenge was to compare it to the origins of Skynet—hyperbole to be sure, but the darker side of the technology is undeniable. Introspective, learning, and adaptive software could be extremely dangerous without sufficient safeguards. While David Brumley was dismissive of the potential for software to become “self-aware,” he agreed it was possible to envision creating something that was “adaptive and unpredictable . . . [such that] the inventors wouldn’t even know how it’s going to evolve and it got out of control and could do harm.” Ironically, the same open-source ethos in AI research that aims to make safe AI tools readily available to all also places potentially dangerous AI tools in the hands of those who might want to do harm or who simply are not sufficiently cautious.

  Militaries will need to prepare for this future, but the appropriate response may not be a headlong rush into more autonomy. In a world of intelligent adaptive malware, autonomous weapons are a massive vulnerability, not an advantage. The nature of autonomy means that if an adversary were to hack an autonomous system, the consequences could be much greater than with a system that kept humans in the loop. Delegating a task to a machine means giving it power. It entails putting more trust in the machine, trust that may not be warranted if cybersecurity cannot be guaranteed. A single piece of malware could hand control over an entire fleet of robot weapons to the enemy. Former Secretary of the Navy Richard Danzig has compared information technologies to a “Faustian bargain” because of their vulnerabilities to cyberattack: “the capabilities that make these systems attractive make them risky.” He has advocated safeguards such as “placing humans in decision loops, employing analog devices as a check on digital equipment, and providing for non-cyber alternatives if cybersystems are subverted.” Human circuit breakers and hardware-level physical controls will be essential to keeping future weapons under human control. In some cases, Danzig says “abnegation” of some cybertechnologies may be the right approach, forgoing their use entirely if the risks outweigh the benefits. As AI advances, militaries will have to carefully weigh the benefits of greater autonomy against the risk that enemy malware could take control. Computer security expert David Brumley advocated an approach of thinking about the “ecosystem” in which future malware will operate. The ecosystem of autonomous systems that militaries build should be a conscious choice, one made weighing the relative risks of different alternative approaches, and one that retains humans in the right spots to manage those risks.

  BREAKING OUT

  The future of AI is unknown. Armstrong estimated an 80 percent chance of AGI occurring in the next century, and a 50 percent chance of superintelligence. But his guess is as good as anyone else’s. What we do know is that intelligence is powerful. Without tooth or claw, humans have climbed to the top of the food chain, conquered the earth, and even ventured beyond, all by the power of our intelligence. We are now bestowing that power on machines. When machines begin learning on their own, we don’t know what will happen. That isn’t a prediction; it’s an observation about AI today.

  I don’t lose sleep worrying about Frankenstein or Skynet or Ava or any of the other techno-bogeymen science fiction writers have dreamed up. But there is one AI that gives me chills. It doesn’t have general intelligence; it isn’t a person. But it does demonstrate the power of machine learning.

  DeepMind posted a video online in 2015 of their Atari-playing neural network as it learned how to play Breakout (a forerunner of the popular Arkanoid arcade game). In Breakout, the player uses a paddle to hit a ball against a stack of bricks, chipping away at the bricks one by one. In the video, the computer fumbles around hopelessly at first. The paddle moves back and forth seemingly at random, hitting the ball only occasionally. But the network is learning. Every time the ball knocks out a brick, the point total goes up, giving the neural net positive feedback, reinforcing its actions. Within two hours, the neural net plays like a pro, moving the paddle adeptly to bounce the ball. Then, after four hours of play, something unexpected happens. The neural net discovers a trick that human players know: using the ball to make a tunnel through the edge of the block of bricks, then sending the ball through the tunnel to bounce along the top of the block, eroding the bricks from above. No one taught the AI to do that. It didn’t even reason its way there through some understanding of a concept of “brick” and “ball.” It simply discovered this exploit by exploring the space of possibilities, the same way the Tetris-playing bot discovered pausing the game to avoid losing and EURISKO discovered the rule of taking credit for other rules. AI surprises us, in good ways and bad ways. When we prepare for the future of AI, we should prepare for the unexpected.
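  For readers who want a feel for how learning from score feedback alone works, here is a minimal sketch. It is not DeepMind’s system—their Breakout agent was a deep Q-network trained on raw screen pixels—but tabular Q-learning on a toy “catch the falling ball” game of my own invention, with all parameters and the reward chosen purely for illustration.

```python
# A minimal sketch of learning from score feedback alone: tabular Q-learning
# on a toy "catch the falling ball" task. This is NOT DeepMind's system --
# their Breakout agent was a deep Q-network reading raw screen pixels --
# but the core idea is the same: no rules are given, only a reward signal.

import random
from collections import defaultdict

GRID = 5              # toy screen width; the ball falls for GRID steps
ACTIONS = (-1, 0, 1)  # move paddle left, stay, move right

def play_episode(q, epsilon=0.1, alpha=0.5, gamma=0.9):
    ball = random.randrange(GRID)    # column the ball falls down
    paddle = random.randrange(GRID)  # starting paddle position
    reward = 0.0
    for row in range(GRID):
        state = (ball, paddle, row)
        if random.random() < epsilon:          # occasionally explore
            action = random.choice(ACTIONS)
        else:                                  # otherwise act greedily
            best = max(q[(state, a)] for a in ACTIONS)
            action = random.choice([a for a in ACTIONS if q[(state, a)] == best])
        paddle = min(GRID - 1, max(0, paddle + action))
        done = row == GRID - 1
        # the only feedback: +1 if the paddle is under the ball at the bottom
        reward = 1.0 if (done and paddle == ball) else 0.0
        target = reward if done else reward + gamma * max(
            q[((ball, paddle, row + 1), a)] for a in ACTIONS)
        q[(state, action)] += alpha * (target - q[(state, action)])
    return reward

q = defaultdict(float)
for _ in range(30000):                         # learn purely from the reward
    play_episode(q)

# Evaluate the learned greedy policy (no exploration, no further learning).
wins = sum(play_episode(q, epsilon=0.0, alpha=0.0) for _ in range(1000))
print(f"catch rate after training: {wins / 1000:.2f}")  # typically near 1.0
```

  The real system replaced the lookup table with a convolutional network reading pixels, but the feedback loop was just as sparse: the score going up.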

  PART V

  The Fight to Ban Autonomous Weapons

  16

  ROBOTS ON TRIAL

  AUTONOMOUS WEAPONS AND THE LAWS OF WAR

  War is horrible, but the laws of war are supposed to protect humanity from its worst evils. Codes of conduct in war date back to antiquity. The biblical book of Deuteronomy and ancient Sanskrit texts Mahābhārata, Dharmaśāstras, and Manusmṛti (“Laws of Manu”) all prohibit certain conduct in war. Modern-day laws of war emerged in the late-nineteenth and early-twentieth centuries. Today a series of treaties, such as the Geneva Conventions, form the law of armed conflict, or international humanitarian law.

  International humanitarian law (IHL) has three core principles: The principle of distinction means militaries must distinguish between enemy combatants and civilians on the battlefield; they cannot deliberately target civilians. IHL acknowledges that civilians may be incidentally killed when targeting enemy combatants, so-called “collateral damage.” However, the principle of proportionality says that any collateral civilian casualties cannot be disproportionate to the military necessity of attacking that target. The principle of avoiding unnecessary suffering prohibits militaries from using weapons that cause superfluous injury beyond their military value. For example, IHL prohibits weapons that leave fragments inside the body that cannot be detected by X-rays, such as glass shards, which would have no immediate benefit in taking the enemy off the battlefield but could make it harder for wounded soldiers to heal.

  IHL has other rules as well: Militaries must exercise precautions in the attack to avoid civilian casualties. Combatants who are “hors de combat”—out of combat because they have surrendered or have been incapacitated—cannot be targeted. And militaries cannot employ weapons that are, by their nature, indiscriminate or uncontrollable.

  So what does IHL have to say about autonomous weapons? Not much. Principles of IHL such as distinction and proportionality apply to the effects on the battlefield, not the decision-making process. Soldiers have historically made the decision whether or not to fire, but nothing in the laws of war prohibits a machine from doing it. To be used lawfully, though, autonomous weapons would need to comply with the IHL principles of distinction and proportionality, as well as its other rules.

  Steve Goose, director of Human Rights Watch’s Arms Division, doesn’t think that’s possible. Goose is a leading figure in the Campaign to Stop Killer Robots and has called for a legally binding treaty banning autonomous weapons. I visited Goose at Human Rights Watch’s Washington, DC, office overlooking Dupont Circle. He told me he sees autonomous weapons as “highly likely to be used in ways that violate international humanitarian law.” From his perspective, these would be weapons that “aren’t able to distinguish combatants from civilians, that aren’t able to tell who’s hors de combat, that aren’t able to tell who’s surrendering, that are unable to do the proportionality assessment required under international humanitarian law for each and every individual attack, and that are unable to judge military necessity in the way that today’s commanders can.” The result, Goose said, would be “lots of civilians dying.”

  Many of these criteria would be tough for machines today. Impossible, though? Machines have already conquered a long list of tasks once thought impossible: chess, Jeopardy, go, poker, driving, image recognition, and many others. How hard it would be to meet these criteria depends on the target, surrounding environment, and projections about future technology.

  DISTINCTION

  To comply with the principle of distinction, autonomous weapons must be able to accurately distinguish between military and civilian targets. This means not only recognizing the target, but also distinguishing it from other “clutter” in the environment—confusing objects that are not targets. Even for “cooperative” targets that emit signatures, such as a radar, separating a signature from clutter can be challenging. Modern urban environments are rife with electromagnetic signals from Wi-Fi routers, cell towers, television and radio broadcasts, and other confusing emissions. It is even harder to distinguish non-cooperative targets, such as tanks and submarines, that use decoys or try to blend into the background with camouflage.

 
