Army of None

by Paul Scharre

  [E]nabling a system to essentially fire at will removes a sense of accountability from human decision makers, who then can offload responsibility to the inanimate computer when mistakes are made.

  Cummings argued that a semiautonomous control mode where the human has to take a positive action before the weapon fires would be a more appropriate design that would better facilitate human responsibility.

  These design choices are not panaceas. Humans can also fall victim to automation bias, trusting too much in the machine, even when they are technically in the loop. A human was in the loop for the first Patriot fratricide. And of course humans have killed a great deal in wars throughout history without automation of any kind. Pressure from authority, diffusion of responsibility, physical and psychological distance, and dehumanizing the enemy all contribute to overcoming the innate human resistance to killing. Modern psychological conditioning has also overcome much of this resistance. The U.S. Army changed its marksmanship training in the years after World War II, shifting to firing on human-shaped pop-up targets, and by Vietnam firing rates had increased to 90–95 percent. Nevertheless, automation could further lower the barriers to killing.

  War at the distant edge of moral accountability can become truly horrific, especially in an age of mechanized slaughter. In the documentary The Fog of War, former U.S. Defense Secretary Robert McNamara gave the example of the U.S. strategic bombing campaign against Japanese cities during World War II. Even before the United States dropped nuclear bombs on Hiroshima and Nagasaki, U.S. aerial firebombing killed 50–90 percent of the civilian population of sixty-seven Japanese cities. McNamara explained that Air Force General Curtis LeMay, who commanded U.S. bombers, saw any action that shortened the war as justified. McNamara, on the other hand, was clearly troubled by these actions, arguing that both he and LeMay “were behaving as war criminals.”

  Sometimes these concerns can lead to restraint at the strategic level. In 1991, images of the so-called “Highway of Death,” where U.S. airplanes bombed retreating Iraqi troops, caused President George H. W. Bush to call an early end to the war. Then–chairman of the Joint Chiefs Colin Powell later wrote in his memoirs that “the television coverage was starting to make it look as if we were engaged in slaughter for slaughter’s sake.”

  The risk is not merely that an autonomous weapon might kill the naked soldier and continue bombing the Highway of Death. The risk is that no human might feel troubled enough to stop it.

  BETTER THAN HUMANS

  Human behavior in war is far from perfect. The dehumanization that enables killing in war unleashes powerful demons. Enemy lives do not always regain their value once the enemy has surrendered. Torture and murder of prisoners are common war crimes. Dehumanization often extends to the enemy’s civilian population. Rape, torture, and murder of civilians often follow in war’s wake.

  The laws of war are intended to be a bulwark against such barbarity, but even law-abiding nations are not immune to war’s seductions. In a series of mental health surveys of deployed U.S. troops in 2006 and 2007, the U.S. military found that an alarming number of soldiers expressed support for abuse of prisoners and noncombatants. Over one-third of junior enlisted soldiers said they thought torture should be allowed in order to gather important information about insurgents (a war crime). Fewer than half said they would report a unit member for injuring or killing an innocent noncombatant. Actual reported unethical behavior was much lower. Around 5 percent said they had physically hit or kicked noncombatants when not necessary. While these survey results are certainly not evidence of actual war crimes, they show disturbing attitudes among U.S. troops. (Perhaps most troubling, the U.S. military stopped asking questions about ethical behavior in its mental health surveys after 2007. This suggests that at the institutional level there was, at a minimum, insufficient interest in addressing this problem, if not willful blindness.)

  Ron Arkin is a roboticist who believes robots might be able to do better. Arkin is a Regents’ Professor, associate dean, and director of the Georgia Tech Mobile Robot Laboratory. He is a serious roboticist whose résumé is peppered with publications like “Temporal Coordination of Perceptual Algorithms for Mobile Robot Navigation” and “Multiagent Teleautonomous Behavioral Control.” He is also heavily engaged in the relatively new field of robot ethics, or “roboethics.”

  Arkin had been a practicing roboticist for nearly twenty years before he started wondering, “What if [robotics] actually works?” Arkin told me that in the early 2000s, he began to see autonomy rapidly advance in robots. “That gave me pause, made me reflect on what it is we are creating.” Arkin realized that roboticists were “creating things that may have a profound impact on humanity.” Since then, he has worked to raise consciousness within the robotics community about the ethical implications of their work. Arkin cofounded the IEEE Robotics and Automation Society (IEEE-RAS) Technical Committee on Roboethics and has given lectures at the United Nations, the International Committee of the Red Cross, and the Pentagon.

  Arkin’s interest in roboethics encompasses not just autonomous weapons, but also societal applications such as companion robots. He is particularly concerned about how vulnerable populations, such as children or the elderly, relate to robots. The common question across these different applications of robotics is, “What should we be building and what safeguards should be in place?” “I don’t care about the robots,” Arkin said. “Some people worry about robot sentience, superintelligence . . . I’m not concerned about that. I worry about the effect on people.”

  Arkin also applies this focus on human effects to his work in the military domain. In 2008, Arkin did a technical report for the U.S. Army Research Office on the creation of an “ethical governor” for lethal autonomous weapons. The question was whether, in principle, it might be possible to create an autonomous weapon that could comply with the laws of war. Arkin concluded it was theoretically possible and outlined, in a broad sense, how one might design such a system. An ethical governor would prohibit the autonomous weapon from taking an illegal or unethical act. Arkin takes the consequentialist view that if robots can be more ethical than humans, we have a “moral imperative to use this technology to help save civilian lives.”
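
  In broad strokes, such a governor can be thought of as a veto layer: a proposed engagement is checked against encoded constraints, and the weapon may act only if every constraint is satisfied. The sketch below is purely illustrative and is not Arkin’s actual design; the data fields, constraint names, and thresholds are assumptions made for the example.

```python
# Illustrative sketch of an "ethical governor" as a veto layer.
# The fields and constraints here are hypothetical stand-ins,
# not Arkin's architecture or any real targeting logic.
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class Engagement:
    target_is_military: bool       # positive identification as a lawful target
    expected_civilian_harm: float  # estimated collateral damage
    military_advantage: float      # estimated value of striking the target


# A constraint returns True only if the proposed engagement satisfies it.
Constraint = Callable[[Engagement], bool]


def distinction(e: Engagement) -> bool:
    # Never engage anything not positively identified as a military target.
    return e.target_is_military


def proportionality(e: Engagement) -> bool:
    # A crude numeric stand-in for a judgment that resists simple encoding.
    return e.expected_civilian_harm <= e.military_advantage


class EthicalGovernor:
    def __init__(self, constraints: List[Constraint]):
        self.constraints = constraints

    def authorize(self, e: Engagement) -> bool:
        # Veto the engagement if any single constraint is violated.
        return all(check(e) for check in self.constraints)


governor = EthicalGovernor([distinction, proportionality])
print(governor.authorize(Engagement(True, 0.0, 5.0)))   # permitted
print(governor.authorize(Engagement(False, 0.0, 5.0)))  # vetoed
```

  Even this toy version exposes the difficulty taken up later in the chapter: the proportionality check collapses a contested value judgment into a numeric comparison, which is precisely the simplification that critics of autonomous weapons dispute.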

  Just as autonomous cars might reduce deaths from driving, Arkin says autonomous weapons could possibly do the same in war. There is precedent for this point of view. Arguably the biggest life-saving innovation in war to date isn’t a treaty banning a weapon, but a weapon: precision-guided munitions. In World War II, bombardiers couldn’t have precisely hit military targets and avoided civilian ones even if they wanted to; the bombs simply weren’t accurate enough. A typical bomb had only a 50–50 chance of landing inside a 1.25-mile diameter circle. With bombs this imprecise, mass saturation attacks were needed to have any reasonable probability of hitting a target. More than 9,000 bombs were needed to achieve a 90 percent probability of destroying an average-sized target. Blanketing large areas may have been inhumane, but precision air bombardment wasn’t technologically possible. Today, some precision-guided munitions are accurate to within five feet, allowing them to hit enemy targets and leave nearby civilian objects untouched.
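
  As a rough illustration of that arithmetic, assume each bomb independently destroys the target with probability p. Then the chance that at least one of n bombs succeeds is

$$1 - (1 - p)^n \ge 0.9 \quad\Longrightarrow\quad n \ge \frac{\ln(0.1)}{\ln(1 - p)}.$$

  Working backward from n = 9,000 puts p on the order of one chance in four thousand per bomb (a back-of-the-envelope figure, not one from the text), which conveys just how ineffective any single unguided bomb was against a point target.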

  The motivation behind the U.S. military’s move into precision guidance was increased operational effectiveness. Because of the bombs’ inaccuracy, in World War II 3,000 bombing sorties were needed to drop the 9,000 bombs required to take out a target. Today, a single sortie can take out multiple targets. A military with precision-guided weapons is orders of magnitude more effective in destroying the enemy. Fewer civilians killed as collateral damage is a beneficial side effect of greater precision.

  This increased accuracy saves lives. It has also shifted public expectations about the degree of precision demanded in war. We debate civilian casualties from drone strikes today, but tens of thousands of civilians were killed by U.S. and British bombers in the German cities of Hamburg, Kassel, Darmstadt, Dresden, and Pforzheim in World War II. Historians estimate that the U.S. strategic bombing of Japanese cities in World War II killed over 300,000 civilians. Over 100,000 were killed on a single night in the firebombing of Tokyo. By contrast, according to the independent watchdog group The Bureau of Investigative Journalism, U.S. drone strikes against terrorists in Somalia, Pakistan, and Yemen killed a total of between three and sixteen civilians in 2015 and four civilians in 2016. Sentiment has shifted so far that Human Rights Watch has argued that “the use of indiscriminate rockets in populated areas violates international humanitarian law, or the laws of war, and may amount to war crimes.” This position effectively requires precision-guided weapons. As technology has made it easier to reduce collateral damage, societal norms have shifted too; we have come to expect fewer civilian casualties in war.

  Arkin sees autonomous weapons as “next-generation precision-guided munitions.” It isn’t just that autonomous weapons could be more precise and reliable than people. Arkin’s argument is that people just aren’t that good, morally. While some human behavior on the battlefield is honorable, he said, “some of it is quite dishonorable and criminal.” He says the status quo is “utterly and wholly unacceptable” with respect to civilian casualties. Brutal dictators like Saddam Hussein, Muammar Gaddafi, and Bashar al-Assad intentionally target civilians, but individual acts of violence against civilians occur even within otherwise law-abiding militaries.

  Autonomous weapons, Arkin has argued, could be programmed to never break the laws of war. They would be incapable of doing so. They wouldn’t seek revenge. They wouldn’t get angry or scared. They would take emotion out of the equation. They could kill when necessary and then turn killing off in an instant, if it was no longer lawful.

  Arkin told me he envisions a “software safety” on a rifle that evaluates the situation and acts as an ethical advisor for soldiers. He recounted to me a story he heard third-hand about a Marine who was about to commit an atrocity, “and his lieutenant came up to him and just said, ‘Marines don’t do that.’ And that just stopped the whole situation. Just a little nudge—pulled him back, pulled him back from the precipice of doing this criminal act. . . . The same thing could be used with ethical advisors for humans as well.” Arkin acknowledged the idea has downsides. Introducing “a moment of doubt” could end up getting soldiers killed. Still, he sees ample opportunity to improve on human behavior. “We put way too much faith in human warfighters,” he said.

  Arkin worries that an outright ban on autonomous weapons might prohibit research on these potentially valuable uses of autonomy. To be effective, the weapon Arkin envisions would need to be able to assess the situation on the battlefield and make a call as to whether an engagement should proceed. To do this, Arkin said, the governor would have to be at the actual point of killing. It can’t be “back in some general’s office. You’ve got to embed it in the weapon.”

  This technology, which has all of the enabling pieces of an autonomous weapon, is precisely the kind of weapon that many ban advocates fear. Their fear is that once the technology is created, the temptation to use it would be too great. Jody Williams told me she viewed autonomous weapons as more terrifying than nuclear weapons not because they were more destructive, but because she saw them as weapons that would be used. “There is no doubt in my mind that autonomous weapons would be used,” Williams said, even if plans today call for a human in the loop.

  I asked Arkin whether he thought it was realistic that militaries might refrain from using technology at their fingertips. He wasn’t sure. “Should we create caged tigers and always hold the potential for those cages to be opened and unleash these fearsome beasts on humanity?” he asked rhetorically. Arkin is sympathetic to concerns about autonomous weapons. It would be incorrect to characterize him as pro–autonomous weapons. “I’m not arguing that everything should be autonomous. That would be ludicrous. I don’t see fully automated robot armies like you see in Terminator and the like. . . . Why would we do that? . . . My concern is not just winning. It’s winning correctly, ethically, and keeping our moral compass as we win.”

  Arkin said he has the same goal as those who advocate a ban: reducing unnecessary civilian deaths. While he acknowledges there are risks with autonomous weapons, he sees the potential to improve on human behavior too. “Where does the danger lurk?” he asked. “Is it the robots or is it the humans?” He said he sees a role for humans on the battlefield of the future, but there is a role for automation as well, just like in airplane cockpits today. He said the key question is, “Who makes what decision when?”

  To answer that question, Arkin said “we need to do the research on it. . . . we need to know what capabilities they have before we say they’re unacceptable.” Arkin acknowledged that “technology is proceeding at a pace faster than we are able to control it and regulate it right now.” That’s why he said he supports a moratorium on autonomous weapon development “until we can get a better understanding of what we’re gaining and what we’re losing with this particular technology,” but he doesn’t go so far as to support a ban. “Banning is like Luddism,” he said. “It is basically saying, this can never turn out in any useful way, so let’s never ever do that. Slowing down the process, inspecting the process, regulating the process as you move forward makes far more sense. . . . I think there’s great hope and potential for positive outcomes with respect to saving non-combatant lives, and until someone can show me that, in all cases, that this isn’t feasible, I can’t support a ban.”

  Arkin says the only ban he “could possibly support” would be one limited to the very specific capability of “target generalization through machine learning.” He would not want to see autonomous weapons that could learn on their own in the field in an unsupervised manner and generalize to new types of targets. “I can’t see how that could turn out well,” he said. Even still, Arkin’s language is cautious, not categorical. “I tend not to be prescriptive,” he acknowledged. Arkin wants “discussion and debate . . . as long as we can keep a rational discussion as opposed to a fear-based discussion.”

  “FUNDAMENTALLY INHUMAN”

  Arkin acknowledged that he was considering autonomous weapons from a “utilitarian, consequentialist” perspective. (Utilitarianism is a moral philosophy that favors the actions that result in the most good overall.) From that viewpoint, Arkin’s position to pause development with a moratorium, have a debate, and engage in further research makes sense. If we don’t yet know whether autonomous weapons would result in more harm than good, then we should be cautious in ruling anything out. But another category of arguments against autonomous weapons uses a different, deontological framework, which is rules-based rather than effects-based. These arguments don’t hinge on whether or not autonomous weapons would be better than humans at avoiding civilian casualties. Jody Williams told me she believed autonomous weapons were “fundamentally inhuman, a-human, amoral—whatever word you want to attach to it.” That’s strong language. If something is “inhuman,” it is wrong, period, even if it might save more lives overall.

  Like Ron Arkin, Peter Asaro, a philosopher at The New School, studies robot ethics. Asaro writes on ethical issues stemming not just from military applications of robots but also personal, commercial, and industrial uses, from sex to law enforcement. Early on, Asaro came to a different conclusion than Arkin. In 2009, Asaro helped to cofound the International Committee for Robot Arms Control (ICRAC), which called for a ban on autonomous weapons years before they were on anyone else’s radar. Asaro is thoughtful and soft-spoken, and I’ve always found him to be one of the most helpful voices in explaining the ethical issues surrounding autonomous weapons.

  Asaro said that from a deontological perspective, some actions are considered immoral regardless of the outcome. He compared the use of autonomous weapons to actions like torture or slavery that are mala in se, “evil in themselves,” regardless of whether doing them results in more good overall. He admitted that torturing a terrorist who has information on the location of a ticking bomb might be defensible on utilitarian grounds. But that doesn’t make it right, he said. Similarly, Asaro said there was a “fundamental question of whether it’s appropriate to allow autonomous systems to kill people,” regardless of the consequences.

  One could, of course, take the consequentialist position that the motives for actions don’t matter—all that matters is the outcome. And that is an entirely defensible ethical position. But there may be situations in war where people care not only about the outcome, but also the process for making a decision. Consider, for example, a decision about proportionality. How many civilian lives are “acceptable” collateral damage? There is no clear right answer. Reasonable people might disagree on what is considered proportionate.

  For these kinds of tasks, what does it mean for a machine to be “better” than a human? For some tasks, there are objective metrics for “better.” A better driver is one who avoids collisions. But some decisions, like proportionality, are about judgment—weighing competing values.

  One could argue that in these situations, humans should be the ones to decide, not machines. Not because machines couldn’t make a decision, but because only humans can weigh the moral value of the human lives that are at stake in these decisions. Law professor Kenneth Anderson asked, “What decisions require uniquely human judgment?” His simple question cuts right to the heart of deontological debates about autonomous weapons. Are there some decisions that should be reserved for humans, even if we had all of the automation and AI that we could imagine? If so, why?

  A few years ago, I attended a small workshop in New York that brought together philosophers, artists, engineers, architects, and science fiction writers to ponder the challenges that autonomous systems posed in society writ large. One hypothetical scenario was an algorithm that could determine whether to “pull the plug” on a person in a vegetative state on life support. Such decisions are thorny moral quandaries with competing values at stake—the likelihood of the person recovering, the cost of continuing medical care and the opportunity cost of using those resources elsewhere in society, the psychological effect on family members, and the value of human life itself. Imagine a super-sophisticated algorithm that could weigh all these factors and determine whether the net utilitarian benefit—the most good for everyone overall—weighed in favor of keeping the person on life support or turning it off. Such an algorithm might be a valuable ethical advisor to help families walk through these challenging moral dilemmas. But imagine one then took the next step and simply plugged the algorithm into the life-support machine directly, such that the algorithm could cease life support.

 
