Army of None


by Paul Scharre


  THE ACCOUNTABILITY GAP

  Advocates of a ban on autonomous weapons raise concerns beyond these IHL rules. Bonnie Docherty is a lecturer at Harvard Law School and a senior researcher in the Arms Division at Human Rights Watch. A leading voice in the campaign to ban autonomous weapons, Docherty is one of a number of scholars who have raised the concern that autonomous weapons could create an “accountability gap.” If an autonomous weapon were to go awry and kill a large number of civilians, who would be responsible? If the person who launched the weapon intended to kill civilians, it would be a war crime. But if the person launching the weapon did not intend to kill civilians, then the situation becomes murkier. Docherty told me it wouldn’t be “fair nor legally viable to . . . hold the commander or operator responsible.” At the same time, Docherty writes “‘punishing’ the robot after the fact would not make sense.” The robot would not be legally considered a “person.” Technically speaking, there would have been no crime. Rather, this would be an accident. In civilian settings, civil liability would come into play. If a self-driving car killed someone, the manufacturer might be liable. In war, though, military and defense contractors are generally shielded from civil liability.

  The result would be a gap in accountability. No one would be responsible. Docherty sees this as an unacceptable situation. She told me she is particularly troubled because she sees autonomous weapons as likely to be used in situations where they are prone to killing civilians, which she described as a “dangerous combination” when there is no accountability for their actions. Accountability, she said, allows for “retributive justice” for victims or their families and for deterring future actions. The solution, Docherty argues, is to “eliminate this accountability gap by adopting an international ban on fully autonomous weapons.”

  An accountability gap is a concern, but only arises if the weapon behaves in an unpredictable fashion. When an autonomous system correctly carries out a person’s intent, then accountability is clear: the person who put the autonomous system into operation is accountable. When the system does something unexpected, the person who launched it could reasonably claim they weren’t responsible for the system’s actions, since it wasn’t doing what they intended.

  Better design, testing, and training can reduce these risks, but accidents will happen. Accidents happen with people too, though, and not always in circumstances where people can be held accountable. Accidents are not always the result of negligence or malicious intent. That’s why they’re called accidents.

  Docherty’s solution of keeping a human in the loop so there is someone to blame doesn’t solve the problem. People can make mistakes resulting in terrible tragedies without a crime being committed. The USS Vincennes shootdown of Iran Air Flight 655 is an example. The shootdown was a mistake, not a war crime, which would require intent. No individual was charged with a crime, but the U.S. government was still responsible. The U.S. government paid $61.8 million in compensation to the victims’ families (without admitting fault) in 1996 to settle a suit Iran brought to the International Court of Justice.

  Docherty said accountability is an issue that “resonates with everyone, from military to lawyers to diplomats to ethicists.” The desire to hold someone accountable for harm is a natural human impulse, but there is no principle in IHL that says there must be an individual to hold accountable for every death on the battlefield. States are ultimately responsible for the actions their militaries take. It makes sense to hold individuals responsible for criminal acts, but an accountability gap already exists with human-induced accidents today. Charles Dunlap, Duke law professor and former deputy judge advocate general for the U.S. Air Force, has argued that for those concerned about an accountability gap, the “issue is not with autonomous weapons, it is with the fundamental precepts of criminal law.”

  Bonnie Docherty also said she thought accountability was important to deter future harmful acts. While accidental killings are unintentional by definition and thus something that cannot be deterred, an accountability gap could create an insidious danger of moral hazard. If those who launch autonomous weapons do not believe they are accountable for the killing that results, they could become careless, launching the weapon into places where perhaps its performance was not assured. In theory, compliance with IHL should prevent this kind of reckless behavior. In practice, the fuzziness of principles like precautions in attack and the fact that machines would be doing the targeting would increasingly separate humans from killing on the battlefield. Complying with IHL might require special attention to human-machine interfaces and operator training to instill a mindset in human operators that they are responsible for the autonomous weapon’s actions.

  THE DICTATES OF PUBLIC CONSCIENCE

  Some advocates of a ban argue that complying with these IHL principles isn’t enough. They argue autonomous weapons violate the “public conscience.” An IHL concept known as the Martens Clause states: “In cases not covered by the law in force, the human person remains under the protection of the principles of humanity and the dictates of the public conscience.” Bonnie Docherty and others believe that the Martens Clause justifies a ban.

  The Martens Clause is a thin reed to lean on. For starters, it has never been used to ban a weapon before. Even the legal status of the Martens Clause itself is highly debated. Some view the Martens Clause as an independent rule of IHL that can be used to ban weapons that violate the “public conscience.” A more conservative interpretation of the Martens Clause is that it is merely a recognition of “customary international law.” Customary laws exist by state practice, even if they aren’t explicitly written down. As one legal expert succinctly put it: “There is no accepted interpretation of the Martens Clause.”

  Even if one were to grant the Martens Clause sufficient legal weight to justify a ban, how does one measure the public conscience? And which public? The American public? The Chinese public? All of humanity?

  Public opinions on morality and ethics vary around the globe, shaped by religion, history, the media, and even pop culture. I am continually struck by how much the Terminator films influence debate on autonomous weapons. In nine out of ten serious conversations on autonomous weapons I have had, whether in the bowels of the Pentagon or the halls of the United Nations, someone invariably mentions the Terminator. Sometimes it’s an uncomfortable joke—the looming threat of humanity’s extinction the proverbial elephant in the room. Sometimes the Terminator references are quite serious, with debates about where the Terminator would fall on a spectrum of autonomy. I wonder how different the debates on autonomous weapons would be if James Cameron had not made the Terminator movies. If science fiction had not primed us with visions of killer robots set to extinguish humanity, would we fear autonomous lethal machines?

  Measuring public attitudes is notoriously tricky for this very reason. Responses to polls can be swayed by “priming” subjects with information to tilt them for or against an issue. Mentioning a word or topic early in a survey can subconsciously place ideas in a person’s mind and measurably change the answers they give to later questions. Two political scientists have tried to use polling to measure the public conscience on autonomous weapons. They came to very different conclusions.

  Charli Carpenter, a professor of political science at University of Massachusetts at Amherst, made the first attempt to measure public views on autonomous weapons in 2013. She found that 55 percent of respondents somewhat or strongly opposed “the trend towards using completely autonomous robotic weapons in war.” Only 26 percent of respondents somewhat or strongly favored autonomous weapons, with the remainder unsure. Most interestingly, Carpenter found stronger opposition among military service members and veterans. Carpenter’s survey became a sharp arrow in the quiver of ban advocates, who frequently cite her results.

  Political scientist Michael Horowitz disagreed. Horowitz, a professor at University of Pennsylvania,* released a study in 2016 that showed a more complicated picture. Asking respondents in a vacuum for their views on autonomous weapons, Horowitz found results similar to Carpenter’s: 48 percent opposed autonomous weapons and 38 percent supported them, with the remainder undecided. When Horowitz varied the context for use of autonomous weapons, however, public support rose. If told that autonomous weapons were both more effective and helped protect friendly troops, respondents’ support rose to 60 percent and opposition fell to 27 percent. Horowitz argued the public’s views on autonomous weapons depended on context. He concluded, “it is too early to argue that [autonomous weapon systems] violate the public conscience provision of the Martens Clause because of public opposition.”

  These dueling polls suggest that measuring the public conscience is hard. Peter Asaro—a professor and philosopher of science, technology, and media at The New School in New York and another proponent of a ban on autonomous weapons—suggests it might be impossible. Asaro distinguishes “public conscience” from public opinion. “‘Conscience’ has an explicitly moral inflection that ‘opinion’ lacks,” he writes. It is a “disservice to reduce the ‘dictates of public conscience’ to mere public opinion.” Instead, we should discern the public conscience “through public discussion, as well as academic scholarship, artistic and cultural expressions, individual reflection, collective action, and additional means, by which society deliberates its collective moral conscience.” This approach is more comprehensive, but it essentially disqualifies any one metric for understanding the public conscience. But perhaps that is for the best. Reflecting on this debate, Horowitz concluded, “The bar for claiming to speak for humanity should be high.”

  Maybe attempts to measure the public conscience don’t really matter. It was the public conscience in the form of advocacy by peace activist groups and governments that led to bans on land mines and cluster munitions. Steve Goose told me, “the clearest manifestation of the ‘dictates of the public conscience’ is when citizens generate enough pressure on their governments that the politicians are compelled to take action.” If action is the metric, then the jury is still out on the public conscience on autonomous weapons.

  FROM ANALYSIS TO ACTION

  The legal issues surrounding autonomous weapons are fairly clear. What one decides to do about autonomous weapons is another matter. I’ve observed in the eight years I’ve been working on autonomous weapons that people tend to gravitate quickly to one of three positions. One view is to ban autonomous weapons because they might violate IHL. Another is that since those illegal uses would, by definition, already be prohibited under IHL, there is no reason for a ban; we should let IHL work as intended. And then there is a third, middle position that perhaps the solution is some form of regulation.

  Because fully autonomous weapons do not yet exist, in some respects they end up being a kind of Rorschach test for how one views the ability of IHL to deal with new weapons. If one is confident in the ability of IHL to handle emerging technologies, then no new law is needed. If one is generally skeptical that IHL will succeed in constraining harmful technologies, one might favor a ban.

  Law professor Charles Dunlap is firmly in the camp that we should trust IHL. From his perspective, ad hoc weapons bans are not just unnecessary, they are harmful. In a series of essays, Dunlap has argued that if we were really concerned about protecting civilians, we would abandon efforts to “demonize specific technologies” and instead “emphasize effects rather than weapons.”

  One of Dunlap’s concerns is that weapons bans based on a “technological ‘snapshot in time’” do not leave open the possibility for technology improvements that may lead to the development of more humane weapons later. Dunlap cited modern-day CS gas (a form of tear gas) as an example of a weapon that could have beneficial effects on the battlefield by incapacitating, rather than killing, soldiers, but is prohibited for use in combat by the Chemical Weapons Convention. The prohibition seems especially nonsensical given that CS gas is regularly used by law enforcement and is legal for military use against civilians for riot-control purposes, but not against enemy combatants. The U.S. military also uses it on its own troops in training. Dunlap also opposes bans on land mines and cluster munitions because they preclude the use of “smart mines” that self-deactivate after a period of time or cluster munitions with low dud rates. Both of these innovations solve the core problem of land mines and cluster munitions: their lingering effects after war’s end.

  Without these tools at a military’s disposal, Dunlap has argued, militaries may be forced to resort to more lethal or indiscriminate methods to accomplish the same objectives, resulting in “the paradox that requires nations to use far more deadly (though lawful) means to wage war.” He gave a hypothetical example of a country that could use self-neutralizing mines to temporarily shut down an enemy airfield, but the prohibition on mines forces it to use high-explosive weapons instead. As a result, when the war is over, the runways are not operable for deliveries of humanitarian aid to help civilians affected by the war. Dunlap concluded:

  Given the pace of accelerated scientific development, the assumptions upon which the law relies to justify barring certain technologies could become quickly obsolete in ways that challenge the wisdom of the prohibition.

  In short, banning a weapon based on the state of technology at a given point in time is ill-conceived, Dunlap argues, because technology is always changing, often in ways we cannot predict. A better approach, he has suggested, is to regulate the use of weapons, focusing on “strict compliance with the core principles of IHL.” His critique is particularly relevant for autonomous weapons, for which technology is moving forward at a rapid pace, and Dunlap has been a forceful critic of a ban.

  Bonnie Docherty and Steve Goose, on the other hand, aren’t interested in whether autonomous weapons could theoretically comply with IHL someday. They are interested in what states are likely to actually do. Docherty cut her teeth doing field research on cluster munitions and other weapons, interviewing victims and their families in Afghanistan, Iraq, Lebanon, Libya, Georgia, Israel, Ethiopia, Sudan, and Ukraine. Goose is a veteran of prior (successful) campaigns to ban land mines, cluster munitions, and blinding lasers. Their backgrounds shape how they see the issues. Docherty told me, “even though there are no victims yet, if [these weapons are] allowed to exist, there will be and I’ll be doing field missions on them. . . . We shouldn’t forget that these things would have real human effect. They aren’t just merely a matter for academics.” Goose acknowledged that there might be isolated circumstances where autonomous weapons could be used lawfully, but he said he had “grave concern” that once states had them, they would use them in ways outside those limited circumstances.

  There is precedent for Goose’s concern. Protocol II of the Convention on Certain Conventional Weapons (CCW) regulates the use of mines in order to protect civilians, for example by requiring that mines be kept away from populated areas and that minefields be clearly marked. If the rules had been strictly followed, much of the harm from mines likely would never have occurred. But they weren’t followed. The Ottawa Treaty banning land mines was the reaction: take antipersonnel mines away entirely as a tool of war. Goose sees autonomous weapons in a similar light. “The dangers just far outweigh the potential benefits,” he said.

  Dunlap is similarly concerned with what militaries actually do, but he’s coming from a very different place. Dunlap was a major general in the Air Force and deputy judge advocate general from 2006 to 2010. He spent thirty-four years in the Air Force’s judge advocate general corps, where he provided legal advice to commanders at all levels. There’s an old saying in Washington: “where you stand depends on where you sit,” meaning that one’s stance on an issue depends on one’s job. This aphorism helps to explain, in part, the views of different practitioners who are well versed in the law and its compliance, or lack thereof, on battlefields. Dunlap is concerned about the humanitarian consequences of weapons but also about military effectiveness. One of his concerns is that the only nations who will pay attention to weapons bans are those who already care about IHL. Their enemies may not be similarly shackled. Odious regimes like Saddam Hussein’s Iraq, Muammar Gaddafi’s Libya, or Bashar al-Assad’s Syria care nothing for the rule of law, making weapons prohibitions one-sided. Dunlap has argued “law-abiding nations need to be able to bring to bear the most effective technologies,” consistent with IHL. “Denying such capabilities to nations because of prohibitions . . . could, paradoxically, promote the nefarious interests of those who would never respect IHL in the first place.”

  BOUND BY THE LAWS OF WAR

  There is one critical way the laws of war treat machines differently from people: Machines are not combatants. People fight wars, not robots. The Department of Defense Law of War Manual concludes:

  The law of war rules on conducting attacks (such as the rules relating to discrimination and proportionality) impose obligations on persons. These rules do not impose obligations on the weapons themselves; of course, an inanimate object could not assume an “obligation” in any event. . . . The law of war does not require weapons to make legal determinations, even if the weapon (e.g., through computers, software, and sensors) may be characterized as capable of making factual determinations, such as whether to fire the weapon or to select and engage a target.

  This means that any person using an autonomous weapon has a responsibility to ensure that the attack is lawful. A human could delegate specific targeting decisions to the weapon, but not the determination whether or not to attack.

  This raises the question: What constitutes an “attack”? The Geneva Conventions define an “attack” as “acts of violence against the adversary, whether in offence or in defence.” The use of the plural “acts of violence” suggests that an attack could consist of many engagements. Thus, a human would not need to approve every single target. An autonomous weapon that searched for, decided to engage, and engaged targets would be lawful, provided it was used in compliance with the other rules of IHL and a human approved the attack. At the same time, an attack is bounded in space and time. Law professor Ken Anderson told me “the size of something that constitutes an attack . . . doesn’t include an entire campaign. It’s not a whole war.” It wouldn’t make sense to speak of a single attack going on for months or to call the entirety of World War II a single attack. The International Committee of the Red Cross (ICRC), an NGO charged with safeguarding IHL, made this point explicitly in a 1987 commentary on the definition of an attack, noting an attack “is a technical term relating to a specific military operation limited in time and place.”

 
