
  Everyone in the workshop recoiled at the notion. I personally find it deeply unsettling. But why? From a purely utilitarian standpoint, using the algorithm might result in a better outcome. In fact, to the extent that the algorithm relieved family members of the burden of having to make the decision themselves, it might reduce suffering overall even if it had the same outcome. And yet . . . it feels repugnant to hand over such an important decision to a machine.

  Part of the objection, I think, is that we want to know that someone has weighed the value of a human life. We want to know that, if a decision is made to take a human life, it has been a considered decision, that someone has acknowledged that this life has merit and it wasn’t capriciously thrown away.

  HUMAN DIGNITY

  Asaro argued that the need to appreciate the value of human life applies not just to judgments about civilian collateral damage but to decisions to take enemy lives as well. He told me “the most fundamental and salient moral question [surrounding autonomous weapons] is the question of human dignity and human rights.” Even if autonomous weapons might reduce civilian deaths overall, Asaro still saw them as unjustified because they would be “violating that human dignity of having a human decide to kill you.”

  Other prominent voices agree. Christof Heyns, a South African professor of human rights law, was the United Nations Special Rapporteur on extrajudicial, summary or arbitrary executions from 2010 to 2016. In the spring of 2013, Heyns called on states to declare national moratoria on developing autonomous weapons and urged international discussions on the technology. Because of his formal UN role, his call for a moratorium played a significant role in sparking international debate.

  I caught up with Heyns in South Africa by phone, and he told me that he thought autonomous weapons violated the right to life because it was “arbitrary for a decision to be taken based on an algorithm.” He felt it was impossible for programmers to anticipate ahead of time all of the unique circumstances surrounding a particular use of force, and thus that there was no way for an algorithm to make a fully informed contextual decision. As a result, he said the algorithm would be arbitrarily depriving someone of life, which he saw as a violation of their right to life. Peter Asaro had expressed similar concerns, arguing that it was a “fundamental violation” of human rights and human dignity to “delegate the authority to kill to the machine.”

  When viewed from the perspective of the soldier on the battlefield being killed, this is an unusual, almost bizarre critique of autonomous weapons. There is no legal, ethical, or historical tradition of combatants affording their enemies the right to die a dignified death in war. There is nothing dignified about being mowed down by a machine gun, blasted to bits by a bomb, burning alive in an explosion, drowning in a sinking ship, slowly suffocating from a sucking chest wound, or any of the other horrible ways to die in war.

  When he raised this issue before the UN, Heyns warned, “war without reflection is mechanical slaughter.” But much of war is mechanical slaughter. Heyns may be right that this is undignified, but this is a critique of war itself. Arguing that combatants have a right to die a dignified death appears to harken back to a romantic era of war that never existed. The logical extension of this line of reasoning is that the most ethical way to fight would be in hand-to-hand combat, when warriors looked one another in the eye and hacked each other to bits like civilized people. Is being beheaded or eviscerated with a sword dignified? What form of death is dignified in war?

  A better question is: What about autonomous weapons is different, and does that difference diminish human dignity in a meaningful way? Autonomous weapons automate the decision-making process for selecting specific targets. The manner of death is no different, and there is a human ultimately responsible for launching the weapon and putting it into operation, just not selecting the specific target. It is hard to see how this difference matters from the perspective of the victim, who is dead in any case. It is similarly a stretch to argue this difference matters from the perspective of a victim’s loved one. For starters, it might be impossible to tell whether the decision to drop a bomb was made by a person or a machine. Even if it were clear, victims’ families don’t normally get to confront the person who made the decision to launch a bomb and ask whether he or she stopped to weigh the value of the deceased’s life before acting. Much of modern warfare is impersonal killing at a distance. A cruise missile might be launched from a ship offshore, the trigger pulled by a person who was just given the target coordinates for launch, the target decided by someone looking at intelligence from a satellite, the entire process run by people who had never set foot in the country.

  When war is personal, it isn’t pretty. In messy internecine wars around the world, people kill each other based on ethnic, tribal, religious, or sectarian hatred. The murder of two million Cambodians in the 1970s by the Khmer Rouge was up close and personal. The genocide of 800,000 people in Rwanda in the 1990s, largely ethnic Tutsis killed by the Hutu majority, was personal. Many of those killed were civilians (and therefore those acts were war crimes), but when they were combatants, were those dignified deaths? In the abstract, it might seem more comforting to know that a person made that decision, but when much of the killing in actual wars is based on racial or ethnic hatred, is that really more comforting? Is it better to know that your loved one was killed by someone who hated him because of his race, ethnicity, or nationality—because they believed he was subhuman and not worthy of life—or because a machine made an objective calculation that killing him would end the war sooner, saving more lives overall?

  Some might say, yes, that automating death by algorithm is beyond the pale, a fundamental violation of human rights; but when compared to the ugly reality of war this position seems largely a matter of taste. War is horror. It has always been so, long before autonomous weapons came on the scene.

  One way autonomous weapons are clearly different is for the person behind the weapon. The soldier’s relationship to killing is fundamentally transformed by using an autonomous weapon. Here, Heyns’s concern that delegating life-or-death decisions to machines cheapens society overall gets some traction. For what does it say about the society using autonomous weapons if there is no one to bear the moral burden of war? Asaro told me that giving algorithms the power to decide life and death “changes the nature of society globally in a profound way,” not necessarily because the algorithms would get it wrong, but because that suggests a society that no longer values life. “If you eliminate the moral burden of killing,” he said, “killing becomes amoral.”

  This argument is intriguing because it takes a negative consequence of war—post-traumatic stress from killing—and holds it up as a virtue. Psychologists are increasingly recognizing “moral injury” as a type of psychological trauma that soldiers experience in war. Soldiers with these injuries aren’t traumatized by having experienced physical danger. Rather, they suffer enduring trauma from having seen, or having had to do, things that offend their sense of right and wrong. Grossman argued that killing is actually the most traumatic thing a soldier can experience in war, more so than fear of personal injury. These moral injuries are debilitating and can destroy veterans’ lives years after a war ends. The consequences are depression, substance abuse, broken families, and suicide.

  In a world where autonomous weapons bore the burden of killing, fewer soldiers would presumably suffer from moral injury. There would be less suffering overall. From a purely utilitarian, consequentialist perspective, that would be better. But we are more than happiness-maximizing agents. We are moral beings and the decisions we make matter. Part of what I find upsetting about the life-support algorithm is that, if it were my loved one, it seems to me that I should bear the burden of responsibility for deciding whether to pull the plug. If we lean on algorithms as a moral crutch, it weakens us as moral agents. What kind of people would we be if we killed in war and no one felt responsible? It is a tragedy that young men and women are asked to shoulder society’s guilt for the killing that happens in war when the whole nation is responsible, but at least it says something about our morality that someone sleeps uneasy at night. Someone should bear the moral burden of war. If we handed that responsibility to machines, what sort of people would we be?

  THE ROLE OF THE MILITARY PROFESSIONAL

  I served four combat tours in Iraq and Afghanistan. I saw horrible things. I lost friends. Only one moment in those four tours repeatedly returns to me, though, sometimes haunting me in the middle of the night.

  I was on a mountaintop in Afghanistan with two other Rangers, Nick and Johnny, conducting a long-range scouting patrol, looking for Taliban encampments. The remainder of our reconnaissance team, also on foot, was far away. At the furthest extent of our patrol, we paused on a rocky summit to rest. A deep narrow valley opened up before us. In the distance was a small hamlet. The nearest city was a day’s drive. We were at the farthest extent of Afghanistan’s wilderness. The only people out there with us were goat herders, woodcutters, and foreign fighters crossing over the border from Pakistan.

  From our perch on the mountaintop, we saw a young man in his late teens or early twenties working his way along a spur toward our position. He had a few goats in trail, but I had long since learned that goat herding was often a cover used by enemy spotters. Of course, it was also possible that he was just a goat herder. I watched him from a distance through binoculars and discussed with my teammates whether he was a scout or just a local herder. Nick and Johnny weren’t concerned, but as we rested, catching our breath before the long hike back, the man kept coming closer. Finally, he crossed under our position and into a place where he was out of sight. A few minutes passed, and since Nick and Johnny weren’t yet ready to head back, I began to get concerned about where this other man was. Odds were good he was just a herder and in all likelihood was unaware that we were even there. Still, the terrain was such that he could use the rocks for cover to get quite close without us noticing, if he wanted to sneak up on us. Other small patrols had been ambushed by similar ruses—individual insurgents pretending to be civilians until they got close enough to pull out a weapon from underneath their coat and fire. He probably couldn’t get all three of us, but he could possibly kill one of us if he came upon us suddenly.

  I told Johnny and Nick I was going to look over the next rock to see where the man had gone, and they said fine, just to stay in sight. I picked up my sniper rifle and crept my way along the rocky mountaintop, looking for the man who had dropped out of sight.

  Before long, I spotted him through a crack in the rocks. He was not far at all—maybe seventy-five meters away—crouching down with his back to me. I raised my rifle and peered through my scope. I wanted to see if he was carrying a rifle. It wouldn’t have necessarily meant he was a combatant, since Afghans often carried weapons for personal protection in this area, but it would have at least meant that he was a potential threat and I should keep an eye on him. If he was concealing the rifle under his cloak, that certainly wouldn’t be a good sign. From my angle, though, I couldn’t quite tell. It looked like he had something in his hands. Maybe it was a rifle. Maybe it was a radio. Maybe it was nothing. I couldn’t see; his hands were in front of him and his back blocked my view.

  The wind shifted and the man’s voice drifted over the rocks. He was talking to someone. I didn’t see anyone else, but my field of view was hemmed in by rocks on either side. Perhaps there was someone out of sight. Perhaps he was talking on a radio, which would have been even more incriminating than a rifle, since goat herders didn’t generally carry radios.

  I settled into a better position to watch him . . . and a more stable firing position if I had to shoot him. I was above him, looking down on him at an angle, and it was steep enough that I remember thinking I would have to adjust my aim to compensate for the relative rise of the bullet. I considered the range, angle, and wind to determine where I would aim if I had to fire. Then I watched him through my scope.

  No one else came into view. If he had a rifle, I couldn’t see it, but I couldn’t verify that he didn’t have one either. He stopped talking for a while, then resumed.

  I didn’t speak Pashto so I didn’t know what he was saying, but as his voice picked up again, the context of his words became clear. He was singing. He was singing to the goats, or maybe to himself, but I was confident he wasn’t singing out our position over a radio. That would be peculiar.

  I relaxed. I watched him for a little longer till I was comfortable that there was nothing I had missed, then headed back to Nick and Johnny. The man never knew I was there.

  I’ve often wondered why that event, more than any other, comes back to me. I didn’t do anything wrong and neither did anyone else. He was clearly an innocent man and not a Taliban fighter. I have no doubt I made the right call. Yet there is something about that moment when I did not yet know for certain that has stuck with me. I think it is because in that moment, when the truth was still uncertain, I held this man’s life in my hands. Even now, years later, I can feel the gravity of that decision. I didn’t want to get it wrong. The four of us—me, Johnny, Nick, and this Afghan goat herder—we were nothing in the big scheme of the war. But our lives still mattered. The stakes were high for us—the ultimate stakes.

  Making life-or-death decisions on the battlefield is the essence of the military profession. Autonomous weapons don’t just raise ethical challenges in the abstract—they are a direct assault on the heart of the military profession. What does it mean for the military professional if decisions about the use of force are programmed ahead of time by engineers and lawyers? Making judgment calls in the midst of uncertainty, ambiguous information, and conflicting values is what military professionals do. It is what defines the profession. Autonomous weapons could change that.

  The U.S. Department of Defense has been surprisingly transparent about its thought processes on autonomous weapons, with individuals like Deputy Secretary of Defense Bob Work discussing the dilemma in multiple public forums. Much of this discussion has come from civilian policy and technology officials, many of whom were very open with me in interviews. Senior U.S. military personnel have said far less publicly, but this question of military professional ethics is one of the few issues they have weighed in on. Vice Chairman of the Joint Chiefs of Staff General Paul Selva said in 2016:

  One of the places that we spend a great deal of time is determining whether or not the tools we are developing absolve humans of the decision to inflict violence on the enemy. And that is a fairly bright line that we’re not willing to cross. . . . Because it is entirely possible that as we work our way through this process of bringing enabling technologies into the Department, that we could get dangerously close to that line. And we owe it to ourselves and to the people we serve to keep it a very bright line.

  Selva reiterated this point a year later in testimony before the Senate Armed Services Committee, when he said:

  Because we take our values to war and because many of the things that we must do in war are governed by the laws of war, . . . I don’t think it’s reasonable for us to put robots in charge of whether or not we take a human life. . . . [W]e should all be advocates for keeping the ethical rules of war in place, lest we unleash on humanity a set of robots that we don’t know how to control.

  I often hear other military personnel echo General Selva’s sentiment that they have no interest in diminishing human accountability and moral responsibility. As technology moves forward, it will raise challenging questions about how to put that principle into practice.

  THE PAIN OF WAR

  One of the challenges in weighing the ethics of autonomous weapons is untangling which criticisms are about autonomous weapons and which are really about war. What does it mean to say that someone has the right to life in war, when killing is the essence of war? In theory, war might be more moral if lives were carefully considered and only taken for the right reasons. In practice, killing often occurs without careful consideration of the value of enemy lives. Overcoming the taboo of killing often involves dehumanizing the enemy. The ethics of autonomous weapons should be compared to how war is actually fought, not some abstract ideal.

  Recognizing the awful reality of war doesn’t mean one has to discard all concern for morality. Jody Williams told me she doesn’t believe in the concept of a “just war.” She had a much more cynical view: “War is about attempting to increase one’s power. . . . It’s not about fairness in any way. It’s about power. . . . It’s all bullshit.” I suspect there isn’t a weapon or means of warfare that Williams is in favor of. If we could ban all weapons or even war itself, I imagine she’d be on board. And if it worked, who wouldn’t be? But in the interim, she and others see an autonomous weapon as something that “crosses a moral and ethical Rubicon.”

  There is no question that autonomous weapons raise fundamental questions about the nature of our relationship to the use of force. Autonomous weapons would depersonalize killing, further removing human emotions from the act. Whether that is a good or a bad thing depends on one’s point of view. Emotions lead humans to commit both atrocities and acts of mercy on the battlefield. There are consequentialist arguments either way, and deontological arguments either resonate with people or don’t.

 
