Anderson explained that the Geneva Conventions were negotiated in the aftermath of World War II and so the context for the term “attack” was the kind of attacks on whole cities that happened during the war. “The notion of the launching of an attack is broader than simply the firing of any particular weapon,” he said. “An attack is going to very often involve many different soldiers, many different units, air and ground forces.” Anderson said determinations about proportionality and precautions in attack were “human questions,” but there could be situations in an attack’s execution where humans used machines “to respond at speed without consulting.”
Humans, not machines themselves, are bound by the laws of war. This means some human involvement is needed to ensure attacks are lawful. A person approving an attack would need to have sufficient information about the target(s), the environment, the weapon(s), and the context for the attack in order to make a determination about its legality. And the weapon’s autonomy would need to be sufficiently bounded in time and space such that conditions are not likely to change and render its employment unlawful. This is the problem with persistent land mines. They remain lethal even after the context changes (the war ends).
The fact that human judgment is needed in the decision to launch an attack leads to some minimum requirement for human involvement in the use of force. There is considerable flexibility in how an “attack” might be defined, though. This answer may not satisfy some proponents of a ban, who want tighter restrictions. Some have looked outside the law, to ethics and morals, for justification.
17
SOULLESS KILLERS
THE MORALITY OF AUTONOMOUS WEAPONS
No one had to tell us on that mountaintop in Afghanistan that shooting a little girl was wrong. It would have been legal. If she was directly participating in hostilities, then she was a combatant. Under IHL, she was a valid target. But my fellow soldiers and I knew killing her would have been morally wrong. We didn’t even discuss it. We just felt it. Autonomous weapons might be lawful in some settings, but would they be moral?
Jody Williams is a singular figure in humanitarian disarmament. She was a leading force behind the original—and successful—campaign to ban land mines, for which she shared a Nobel Peace Prize in 1997. She speaks with clarity and purpose. Autonomous weapons are “morally reprehensible,” she told me. They cross “a moral and ethical Rubicon. I don’t understand how people can really believe it’s okay to allow a machine to decide to kill people.” Williams readily admits that she isn’t a legal or ethical scholar. She’s not a scientist. “But I know what’s right and wrong,” she said.
Williams helped found the Campaign to Stop Killer Robots along with her husband, Steve Goose of Human Rights Watch. The campaign’s case has always included moral and ethical arguments in addition to legal ones. Ethical arguments fall into two main categories. One stems from an ethical theory called consequentialism, the idea that right and wrong depend on the outcome of one’s actions. Another comes from deontological ethics, which is the concept that right and wrong are determined by rules governing the actions themselves, not the consequences. A consequentialist would say, “the ends justify the means.” From a deontological perspective, however, some actions are always wrong, regardless of the outcome.
THE CONSEQUENCES OF AUTONOMOUS WEAPONS
A consequentialist case for a ban assumes that introducing autonomous weapons would result in more harm than not introducing them. Ban advocates paint a picture of a world with autonomous weapons killing large numbers of civilians. They argue that while autonomous weapons might be lawful in the abstract, in practice the rules in IHL are too flexible or vague, and permitting autonomous weapons would inevitably lead down a slippery slope to uses that cause harm. Thus, a ban is justified on ethical (and practical) grounds. Conversely, some opponents of a ban argue that autonomous weapons might be more precise and reliable than humans and thus better at avoiding civilian casualties. In that case, they argue, combatants would have an ethical responsibility to use them.
These arguments hinge primarily on the reliability of autonomous weapons. This is partly a technical matter, but it is also a function of the organizational and bureaucratic systems that guide weapon development and testing. One could, for example, be optimistic that safe operation might someday be possible with robust testing, but pessimistic that states would ever invest in sufficient testing or marshal their bureaucratic organizations well enough to capably test such complex systems. The U.S. Department of Defense has published detailed policy guidance on autonomous weapon development, testing, and training. Other nations have not been so thorough.
What might the consequences be if autonomous weapons were able to reliably comply with IHL? Many are skeptical that autonomous weapons could comply with IHL in the first place. But what if they could? How would that change war?
EMPATHY AND MERCY IN WAR
In his book Just and Unjust Wars, philosopher Michael Walzer cites numerous examples throughout history of soldiers refraining from firing on an enemy because they recognized the other’s humanity. He calls these incidents “naked soldier” moments: a scout or sniper stumbles across an enemy who is alone and often doing something mundane and nonthreatening, such as bathing, smoking a cigarette, having a cup of coffee, or watching the sunrise. Walzer notes:
It is not against the rules of war as we currently understand them to kill soldiers who look funny, who are taking a bath, holding up their pants, reveling in the sun, smoking a cigarette. The refusal of these men [to kill], nevertheless, seems to go to the heart of the war convention. For what does it mean to say that someone has a right to life?
These moments of hesitation are about more than the enemy not posing an immediate threat. In these moments, the enemy’s humanity is exposed, naked for the firer to see. The target in the rifle’s cross hairs is no longer “the enemy.” He is another person, with hopes, dreams, and desires—same as the would-be shooter. With autonomous weapons, there would be no human eye at the other end of the rifle scope, no human heart to stay the trigger finger. The consequence of deploying autonomous weapons would be that these soldiers, whose lives might be spared by a human, would die. From a consequentialist perspective, this would be bad.
There is a counterargument against empathy in war, however. I raised this concern about mercy with an Army colonel a few years ago, on the sidelines of a meeting on the ethics of autonomous weapons at the United States Military Academy at West Point, New York, and he gave me a surprising answer. He told me a story about a group of his soldiers who came across a band of insurgents in the streets of Baghdad. The two groups nearly stumbled into each other, and the U.S. soldiers had the insurgents vastly outnumbered. There was no cover for the insurgents to hide behind. Rather than surrender, though, the insurgents threw their weapons to the ground, turned, and fled. The American soldiers didn’t fire.
The colonel was incensed. Those insurgents weren’t surrendering. They were escaping, only to return to fight another day. An autonomous weapon would have fired, he told me. It would have known not to hold back. Instead, his soldiers’ hesitation may have cost other Americans their lives.
This is an important dissenting view against the role of mercy in war. It channels General William Tecumseh Sherman from the American Civil War, who waged a campaign of total war against the South. During his infamous 1864 “March to the Sea,” Sherman’s troops devastated the South’s economic infrastructure, destroying railroads and crops, and appropriating livestock. Sherman’s motivation was to bring the South to its knees, ending the war sooner. “War is cruelty,” Sherman said. “There is no use trying to reform it. The crueler it is, the sooner it will be over.”
The incidents Walzer cites of soldiers who refrained from firing contain this dilemma. After one such incident, a sergeant chastised the soldiers for not killing the enemy they saw wandering through a field, since now the enemy would report back their position, putting the other men in the unit at risk. In another example, a sniper handed his rifle to his comrade to shoot an enemy he saw taking a bath. “He got him, but I had not stayed to watch,” the sniper wrote in his memoirs. Soldiers understand this tension, that sparing the enemy in an act of kindness might prolong the war or lead to their own friends being put at risk later. The sniper who handed his rifle to his teammate understood that killing the enemy, even while bathing, was a necessary part of winning the war. He simply couldn’t be the one to do it.
Autonomous weapons wouldn’t defy their orders and show mercy on an enemy caught unawares walking through a field or taking a bath. They would follow their programming. One consequence of deploying autonomous weapons, therefore, could be more deaths on the battlefield. These moments of mercy would be eliminated. It might also mean ending the war sooner, taking the Sherman approach. The net result could be more suffering in war or less—or perhaps both, with more brutal and merciless wars that end faster. In either case, one should be careful not to overstate the effect of these small moments of mercy in war. They are the exception, not the rule, and are minuscule in scale compared to the many engagements in which soldiers do fire.
THE CONSEQUENCES OF REMOVING MORAL RESPONSIBILITY FOR KILLING
Removing the human from targeting and kill decisions could have other broader consequences, beyond these instances of mercy. If the people who launched autonomous weapons did not feel responsible for the killing that ensued, the result could be more killing, with more suffering overall.
In his book On Killing, Army psychologist Lieutenant Colonel Dave Grossman explained that most people are reluctant to kill. During World War II, Army historian S. L. A. Marshall interviewed soldiers directly coming off the front lines and found, to his surprise, that most soldiers weren’t shooting at the enemy. Only 15 to 20 percent of soldiers were actually firing at the enemy. Most soldiers were firing above the enemy’s head or not firing at all. They were “posturing,” Grossman explained, pretending to fight but not actually trying to kill the enemy. Grossman drew on evidence from a variety of wars to show that this posturing has occurred throughout history. He argued that humans have an innate biological resistance to killing. In the animal kingdom, he explained, animals with lethal weaponry find nonlethal ways of resolving intraspecies conflict. Deaths from these fights occasionally occur, but usually one animal submits first. That’s because killing isn’t the point: dominance is. Humans’ innate resistance to killing can be overcome, however, through psychological conditioning, pressure from authority, diffused responsibility for killing, dehumanizing the enemy, or increased psychological distance from the act of killing.
One factor, Grossman argued, is how intimately soldiers see the reality of their actions. If they are up close to the enemy, such that they can see the other as a person, as the soldiers in Walzer’s examples did, then many will refrain from killing. This resistance diminishes as the psychological distance from the enemy grows. A person who at 10 meters might look like a human being—a father, a brother, a son—is merely a dark shape at 300 meters. Twentieth-century tools of war increased this psychological distance even further. A World War II bombardier looked down his bombsight at a physical object: a bridge, a factory, a base. The people were invisible. With this kind of distance, war can seem like an exercise in demolition, detached from the awful human consequences of one’s actions. In World War II, the United States and United Kingdom leveled whole cities through strategic bombing, killing hundreds of thousands of civilians. It would have been far harder for most soldiers to carry out an equivalent amount of killing, much of which was against civilians, if they had to see the reality of their actions up close.
Modern information technology allows warfare at unprecedented physical distances, yet it compresses the psychological distance. Drone operators today may be thousands of miles away from the killing, but psychologically they are very close to it. With high-definition cameras, drone crews have an intimate view of a target’s life. Drones can loiter for long periods of time, and operators may watch a target for days or weeks, building up “patterns of life” before undertaking a strike. Afterward, drone operators can see the human costs of their actions, as the wounded suffer or friends and relatives come to gather the dead. Reports of post-traumatic stress disorder among drone crews attest to this intimate relationship with killing and the psychological costs associated with it.
All military innovation since the first time a person threw a rock in anger has been about striking the enemy without putting oneself at risk. Removing the soldier from harm’s way might lower the barrier to military action. Uninhabited systems need not be autonomous, though. Militaries could use robotic systems to reduce physical risk and still keep a human in the loop.
Delegating the decision to kill, however, could increase the psychological distance from killing, which could be more problematic. By not having to choose the specific targets, even via a computer screen, the human would be even further removed from killing. Grossman’s work on the psychology of killing suggests the result could be less restraint.
Autonomous weapons could also lead to an off-loading of moral responsibility for killing. Grossman found that soldiers were more willing to kill if responsibility for killing was diffused. While only 15–20 percent of World War II riflemen reported firing at the enemy, firing rates for machine-gun crews were much higher, nearly 100 percent. Grossman argued that each team member could justify his actions without taking responsibility for killing, which only occurred because of the collective actions of the team. The soldier feeding the ammunition wasn’t killing anyone; he was only feeding ammunition. The spotter wasn’t pulling the trigger; he was just telling the gunner where to aim. Even the gunner could absolve himself of responsibility; he was merely aiming where the spotter told him to aim. Grossman explained, “if he can get others to share in the killing process (thus diffusing his personal responsibility by giving each individual a slice of the guilt), then killing can be easier.” Grossman argued that much of the killing in war has historically been done by crew-served weapons: machine guns, artillery, cannon, and even the chariot. If the person launching the autonomous weapon felt that the weapon was doing the killing, the lessening of moral responsibility might lead to more killing.
Mary “Missy” Cummings is director of the Humans and Autonomy Lab (HAL) at Duke University. Cummings has a PhD in systems engineering, but her focus isn’t on the automation itself so much as on how humans interact with it. It’s part engineering, part design, and part psychology. When I visited her lab at Duke, she showed me a van they were using to test how pedestrians interact with self-driving cars. The secret, Cummings told me, was that the car wasn’t self-driving at all. There was a person behind the wheel. The experiment was to see if pedestrians would change their behavior if they thought the car was self-driving. And they did. “We see some really dangerous behaviors,” she said. People would carelessly walk in front of the van, assuming it would stop. Pedestrians perceived the automation as more reliable than a human driver and changed their behavior as a result, acting more recklessly themselves.
In a 2004 article, Cummings wrote that automation could create a “moral buffer,” reducing individuals’ perceptions of moral responsibility for their actions:
[P]hysical and emotional distancing can be exacerbated by any automated system that provides a division between a user and his or her actions . . . These moral buffers, in effect, allow people to ethically distance themselves from their actions and diminish a sense of accountability and responsibility.
Cummings understands this not only as a researcher, but also as a former Navy F-18 fighter pilot. She wrote:
[It] is more palatable to drop a laser guided missile on a building than it is to send a Walleye bomb into a target area with a live television feed that transmits back to the pilot the real-time images of people who, in seconds, will no longer be alive.
In addition to the greater psychological distance automation provides, humans tend to anthropomorphize machines and assign them moral agency. People frequently name their Roomba. “It is possible that without consciously recognizing it, people assign moral agency to computers, despite the fact that they are inanimate objects,” Cummings wrote. Like crew-served weapons, humans may off-load moral responsibility for killing to the automation itself. Cummings cautioned that this “could permit people to perceive themselves as unaccountable for whatever consequences result from their actions, however indirect.”
Cummings argued that human-machine interfaces for weapon systems should be designed to encourage humans to feel responsible for their actions. The manner in which information is relayed to the human plays a role. In her article “Creating Moral Buffers in Weapon Control Interface Design,” she criticized the interface for a decision-support tool for missile strikes that used a Microsoft Excel puppy dog icon to communicate with the user. The “cheerful, almost funny graphic only helps to enforce the moral buffer.” The human role in decision-making also matters. Cummings criticized the Army’s decision to use the Patriot in a supervised autonomous mode, arguing that it may have played a role in the F-18 fratricide. She said: