Army of None

by Paul Scharre


  New machine learning approaches such as deep neural networks are very good at object recognition but are vulnerable to “fooling image” attacks. Without a human in the loop as a final check, using this technology to do autonomous targeting today would be exceedingly dangerous. Neural networks with these vulnerabilities could be manipulated into avoiding enemy targets and attacking false ones.
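
  To make the “fooling image” attack concrete, here is a minimal sketch of one widely studied way to manufacture an adversarial input of this kind, the fast gradient sign method, written in Python with PyTorch. It is an illustration only, not anything described in this book: the model, image, and label are hypothetical placeholders, and attacks on fielded targeting systems would be considerably more involved.

```python
import torch
import torch.nn.functional as F

def fgsm_fooling_image(model, image, true_label, epsilon=0.01):
    """Nudge every pixel slightly so an image classifier changes its answer.

    `model`, `image`, and `true_label` are hypothetical placeholders for any
    differentiable classifier, an input batch, and its correct class labels.
    """
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), true_label)
    loss.backward()
    # Step each pixel a small amount in the direction that most increases the loss.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()
```

  A perturbation this small is typically imperceptible to a human observer yet can flip the classifier's output, which is why the paragraph above treats a human final check as essential for targeting.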

  In the near term, the best chances for high-reliability target recognition lie with the kind of sensor fusion that DARPA’s CODE project envisions. By fusing together data from multiple angles and multiple types of sensors, computers could possibly distinguish between military targets and civilian objects or decoys with high reliability. Objects that are dual-use for military and civilian purposes, such as trucks, would be more difficult since determining whether they are lawful targets might depend on context.

  Distinguishing people would be far and away the most difficult task. Two hundred years ago, soldiers wore brightly colored uniforms and plumed helmets to battle, but that era of warfare is gone. Modern warfare often involves guerrillas and irregulars wearing a hodgepodge of uniforms and civilian clothes. Identifying them as combatants often depends on their behavior on the battlefield. I frequently encountered armed men in the mountains of Afghanistan who were not Taliban fighters. They were farmers or woodcutters who carried firearms to protect themselves or their property. Determining whether they were friendly or not depended on how they acted, and even then was often fraught with ambiguity.

  Even simple rules like “If someone shoots at you, then they’re hostile” do not always hold in the messy chaos of war. During the 2007–2008 “surge” in Iraq, I was part of a civil affairs team embedded with U.S. advisors to Iraqi troops. One day, we responded to reports of a gun battle between Iraqi police and al Qaeda in Iraq’s volatile Diyala province.

  As we entered the city center, Iraqi Army troops led the way into the deserted marketplace. The gunfire, which had been constant for about thirty minutes, immediately ceased. The city streets were silent, like in an old Western when the bad guy rides into town.

  The end of the street was blocked to prevent suicide car bomb attacks, so we stopped our trucks. The Iraqi soldiers dismounted their vehicles and headed in on foot while the U.S. advisors provided cover from the gun trucks.

  The Iraqi soldiers were dragging a wounded civilian to safety when gunfire erupted from a rooftop. An Iraqi soldier was shot and the civilian was killed. The Iraqis returned fire while the U.S. advisors tried to maneuver their gun trucks into position to fire on the rooftop. From where I was, I couldn’t see the rooftop, but I saw one lone Iraqi soldier run into the street. Firing his AK-47 with one hand, he dragged his wounded comrade across the open street into a nearby building.

  In response, the entire marketplace lit up. Rounds started coming our way from people we couldn’t see further down the street. We could hear bullets pinging all around our truck. The Iraqi troops returned ferocious fire. U.S. Apache gunships radioed to say they were coming in on a gun run to hit the people on the rooftops. The U.S. advisors frantically ordered the Iraqi soldiers to pull back. Then at the last minute, the Apaches called off their attack. They told us they thought the “enemies” on the rooftop shooting at us were friendlies. From their vantage point, it looked like a friendly fire engagement.

  The U.S. advisors yelled at the Iraqis to stop firing and mass confusion followed. Word came down that the people at the other end of the street—probably the ones who were shooting at us—were Iraqi police. Some of them weren’t in uniform because they were members of an auxiliary battalion that had been tapped to aid in the initial gunfight. So there were Iraqis running around in civilian clothes with AK-47s shooting at us who were allegedly friendly. It was a mess.

  As the fighting subsided, people began to come out from behind cover and use the lull in fire as an opportunity to move to safety. I saw civilians fleeing the area. I also saw men in civilian clothes carrying AK-47s running away. Were they Iraqi police? Were they insurgents escaping?

  Or perhaps they were both? The Iraqi police were often sectarian. In nearby villages, people wearing police uniforms had carried out sectarian killings at night. We didn’t know whether they were insurgents with stolen uniforms or off-duty police officers.

  As for this firefight, it was hard to parse what had happened. Someone had been shooting at us. Was that an accident or intentional? And what happened to the man on the roof who shot the Iraqi Army soldier? We never found out.

  That wasn’t the only confusing firefight I witnessed. In fact, during that entire year I was in Iraq I was never once in a situation where I could look down my rifle and say for certain that the person I was looking at was an insurgent. Many other firefights were similarly fraught, with local forces’ loyalties and motives suspect.

  An autonomous weapon could certainly be programmed with simple rules, like “shoot back if fired upon,” but in confusing ground wars, such a weapon would guarantee fratricide. Understanding human intent would require a machine with human-level intelligence and reasoning, at least within the narrow domain of warfare. No such technology is on the horizon, making antipersonnel applications very challenging for the foreseeable future.

  PROPORTIONALITY

  Autonomous weapons that targeted only military objects such as tanks could probably meet the criteria for distinction, but they would have a much harder time with proportionality. The principle of proportionality says that the military necessity of any strike must outweigh any expected civilian collateral damage. What this means in practice is open to interpretation. How much collateral damage is proportional? Reasonable people might disagree. Legal scholars Kenneth Anderson, Daniel Reisner, and Matthew Waxman have pointed out, “there is no accepted formula that gives determinate outcomes in specific cases.” It’s a judgment call.

  Autonomous weapons don’t necessarily need to make these judgments themselves to comply with the laws of war. They simply need to be used in ways that comply with these principles. This is a critical distinction. Even simple autonomous weapons would pass the principle of proportionality if used in an environment devoid of civilians, such as undersea or in space. A large metal object underwater is very likely to be a military submarine. It might be a friendly submarine, and that raises important practical concerns about avoiding fratricide, but as a legal matter avoiding targeting civilians or civilian collateral damage would be much easier undersea. Other environments, such as space, are similarly devoid of civilians.

  Complying with the principle of proportionality in populated areas becomes much harder. If the location of a valid military target meant that collateral damage would be disproportionate to the target’s military value, then it would be off-limits under international law. For example, dropping a 2,000-pound bomb on a single tank parked in front of a hospital would likely be disproportionate. Nevertheless, there are a few ways autonomous weapons could be used in populated areas consistent with the principle of proportionality.

  The hardest approach would be to have the machine itself make a determination about the proportionality of the attack. This would require the machine to scan the area around the target for civilians, estimate possible collateral damage, and then judge whether the attack should proceed. This would be very challenging to automate. Detecting individual people from a missile or aircraft is hard enough, but at least in principle could be accomplished with advanced sensors. How should those people be counted, however? If there are a half dozen adults standing around a military radar site or a mobile missile launcher with nothing else nearby, it might be reasonable to assume that they are military personnel. In other circumstances, such as in dense urban environments, civilians could be near military objects. In fact, fighters who don't respect the rule of law will undoubtedly attempt to use civilians as human shields. How would an autonomous weapon determine whether people near a military target are civilians or combatants? Even if the weapon could make that determination satisfactorily, how should it weigh the military necessity of attacking a target against the expected civilian deaths? Doing so would require complex moral reasoning, including weighing different hypothetical courses of action and their likely effects on both the military campaign and civilians. Such a machine would require human-level moral reasoning, beyond today's AI.

  A simpler, although still difficult, approach would be to have humans set a value for the number of allowable civilian casualties for each type of target. In this case, the human would be making the calculation about military necessity and proportionality ahead of time. The machine would only need to sense the number of civilians nearby and call off the attack if that number exceeded the allowable limit for that target. This would still be difficult from a sensing standpoint, but would at least sidestep the tricky business of programming moral judgments and reasoning into a machine.
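
  As a rough sketch of how little the machine itself would have to decide under this approach, the logic reduces to a table lookup and a comparison. The Python below is purely hypothetical: the target types, the numeric limits, and the idea of feeding in a sensed civilian count are invented for illustration and do not describe any real weapon-control system.

```python
# Hypothetical sketch of the "human-set threshold" approach: a human makes the
# proportionality judgment per target type in advance; the machine only checks
# a sensed civilian count against that pre-authorized limit.
ALLOWED_COLLATERAL = {
    "mobile_missile_launcher": 5,  # placeholder limits chosen by a human commander
    "air_defense_radar": 2,
    "tank": 0,
}

def engagement_authorized(target_type: str, sensed_civilians: int) -> bool:
    """Return True only if the sensed civilian count is within the human-set limit."""
    limit = ALLOWED_COLLATERAL.get(target_type)
    if limit is None:
        return False  # unrecognized target type: default to holding fire
    return sensed_civilians <= limit
```

  The point of the sketch is that the proportionality judgment stays with the human who fills in the table ahead of time; the machine's contribution is only the count and the comparison, which is what makes this version simpler than on-board moral reasoning while still demanding reliable sensing.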

  Even an autonomous weapon with no ability to sense the surrounding environment could still be used lawfully in populated areas provided that the military necessity was high enough, the expected civilian harm was low enough, or both. Such scenarios would be unusual, but are certainly conceivable. The military necessity of destroying mobile launchers armed with nuclear-tipped missiles, for example, would be quite high. Millions of lives would be saved—principally civilians—by destroying the missiles, far outweighing any civilian casualties caused by the autonomous weapons themselves. Conversely, very small and precise warheads, such as shaped charges that destroy vehicles without harming nearby people, could reduce civilian casualties to such an extent that autonomous targeting would be lawful without sensing the environment. In these cases, the human launching the autonomous weapon would need to determine that the military value of any potential engagements outweighed expected civilian harm. This would be a high bar to reach in populated areas, but is not inconceivable.

  UNNECESSARY SUFFERING

  Over centuries of warfare, various weapons have been deemed “beyond the pale” because of the injuries they would cause. Ancient Sanskrit texts prohibit weapons that are poisoned, barbed, or have tips “blazing with fire.” In World War I, German “sawback” bayonets, which have serrated edges on one side for cutting wood, were seen as unethical by troops because of the purported grievous injuries they would cause when pulled out of a human body.

  Current laws of war specifically prohibit some weapons because of the wounds they cause, such as exploding bullets, chemical weapons, blinding lasers, or weapons with non-X-ray-detectable fragments. Why some weapons are prohibited and others allowed can sometimes be subjective. Why is being killed with poison gas worse than being blown up or shot? Is being blinded by a laser really worse than being killed?

  The fact that combatants have agreed to limits at all is a testament to the potential for restraint and humanity even during the horrors of war. The prohibition on weapons intended to cause unnecessary suffering has little bearing on autonomous weapons, though, since it deals with the mechanism of injury, not the decision to target in the first place.

  PRECAUTIONS IN ATTACK

  Other IHL rules impinge on autonomous weapons, but in murky ways. The rule of precautions in attack requires that those who plan or decide upon an attack “take all feasible precautions” to avoid civilian harm. Similar to proportionality, the difficulty of meeting this requirement depends heavily on the environment; it is hardest in populated areas. The requirement to take “feasible” precautions, however, gives military commanders latitude. If the only weapon available was an autonomous weapon, a military commander could claim no other options were feasible, even if it resulted in greater civilian casualties. (Other IHL criteria such as proportionality would still apply.) The requirement to take feasible precautions could be interpreted as requiring a human in the loop or on the loop whenever possible, but again feasibility would be the determining factor. Which technology is optimal for avoiding civilian casualties will also shift over time. If autonomous weapons became more precise and reliable than humans, the obligation to take “all feasible precautions” might require commanders to use them.

  HORS DE COMBAT

  The rule of hors de combat—French for “outside the fight”—prohibits harming combatants who have surrendered or are incapacitated from injuries and unable to fight. The principle that, once wounded and “out of combat,” combatants can no longer be targeted dates back at least to the Lieber Code, a set of regulations handed down by the Union Army in the American Civil War. The requirement to refrain from targeting individuals who are hors de combat has little bearing on autonomous weapons that target objects, but it is a difficult requirement for weapons that target people.

  The Geneva Conventions state that a person is hors de combat if he or she is (a) captured, (b) “clearly expresses an intention to surrender,” or (c) “has been rendered unconscious or is otherwise incapacitated by wounds or sickness, and therefore is incapable of defending himself.” Identifying the first category seems straightforward enough. Presumably a military should have the ability to prevent its autonomous weapons from targeting prisoners under its control, in the same way it would need to prevent autonomous weapons from targeting its own personnel. The latter two criteria are not so simple, however.

  Rob Sparrow, a philosophy professor at Monash University and one of the founding members of the International Committee for Robot Arms Control, has expressed skepticism that machines could correctly identify when humans are attempting to surrender. Militaries have historically adopted signals such as white flags or raised arms to indicate surrender. Machines could identify these objects or behaviors with today’s technology. Recognizing an intent to surrender requires more than simply identifying objects, however. Sparrow has pointed out that “recognizing surrender is fundamentally a question of recognizing an intention.”

  Sparrow gives the example of troops that feign surrender as a means of getting an autonomous weapon to call off an attack. Fake surrender is considered “perfidy” under the laws of war and is illegal. Soldiers who fake surrender but intend to keep fighting remain combatants, but discerning fake from real surrender hinges on interpreting human intent, something that machines fail miserably at today. If a weapon were too generous in granting surrender and could not identify perfidy, it would quickly become useless as a weapon. Enemy soldiers would learn they could trick the autonomous weapon. On the other hand, a weapon that was overly skeptical of surrendering troops and mowed them down would be acting illegally.

  Robotic systems would have a major advantage over humans in these situations because they could take more risk, and therefore be more cautious in firing in ambiguous settings. The distinction between semiautonomous systems and fully autonomous ones is critical, however. The advantage of being able to take more risk comes from removing the human from the front lines and exists regardless of how much autonomy the system has. The ideal robotic weapon would still keep a human in the loop to solve these dilemmas.

  The third category of hors de combat—troops who are incapacitated and cannot fight—raises problems similar to those of recognizing surrender. Simple rules such as categorizing motionless soldiers as hors de combat would be unsatisfactory. Wounded soldiers may not be entirely motionless, but may nevertheless be out of the fight. And legitimate combatants could “play possum” to avoid being targeted, fooling the weapon. As with recognizing surrender, identifying who is hors de combat from injuries requires understanding human intent. Even simply recognizing injuries is not enough, as injured soldiers could continue fighting.

  To illustrate these challenges, consider the Korean DMZ. There are no civilians living in the DMZ, yet fully autonomous antipersonnel weapons could still face challenges. North Korean soldiers crossing the DMZ into South Korea could be surrendering. People crossing the DMZ could be civilian refugees. Soldiers guarding heavily armed borders might assume anyone approaching their position from enemy territory is hostile, but that does not absolve them of the IHL requirements to respect hors de combat and the principle of distinction. If an approaching person is clearly a civilian or a surrendering soldier, then killing that person is illegal.

  Complying with hors de combat is a problem even in situations where other IHL concerns fall away. Imagine sending small robots into a military ship, base, or tunnel complex to kill individual soldiers but leave the infrastructure intact. This would avoid the problem of distinction by assuming everyone was a combatant. But what if the soldiers surrendered? There is no obligation under the laws of war to give an enemy the opportunity to surrender. One doesn’t need to pause before shooting and say, “Last chance, give it up or I’ll shoot!” yet ignoring attempts to surrender is illegal. The general concepts of a flag of truce and surrender date back millennia. The 1907 Hague Convention codified this concept in international law, declaring, “It is expressly forbidden . . . to declare that no quarter will be given.” To employ weapons that could not recognize when soldiers are hors de combat would not only violate the modern laws of war, but would trespass on millennia-old norms of warfare.

  John Canning, a retired U.S. Navy weapons designer, has proposed an elegant solution to this problem. In his paper, “You’ve just been disarmed. Have a nice day!” Canning proposed an autonomous weapon that would not target people directly, but rather would target their weapons. For example, the autonomous weapon would look for the profile of an AK-47 and would aim to destroy the AK-47, not the person. Canning described the idea as “targeting either the bow or the arrow but not the human archer.” In Canning’s concept, these would be ultra-precise weapons that would disarm a person without killing them. (While this level of precision is probably not practical, it is also not required under IHL.) Canning’s philosophy of “let the machines target machines—not people” would avoid some of the most difficult problems of antipersonnel weapons, since civilians or surrendering soldiers could avoid harm by simply moving away from military objects.

 
