Army of None


by Paul Scharre


  In this example, the pilots were performing all three roles simultaneously. By manually guiding the air-to-ground weapon, they were acting as essential operators. They were also acting as fail-safes, observing the weapon while it was in flight and making an on-the-spot decision to abort once they realized the circumstances were different from what they had anticipated. And they were acting as moral agents when they assessed that the military necessity of the target was not important enough to risk blowing up a church.

  Acting as the essential operator, which is traditionally the primary human role, is actually the easiest role to automate. A network-enabled GPS bomb, for example, gives operators the ability to abort in flight, preserving their role as moral agents and fail-safes, but the weapon can maneuver itself to the target. We see this often in nonmilitary settings. Commercial airliners use automation to perform the essential task of flying the aircraft, with human pilots largely in a fail-safe role. A person on medical life support has machines performing the essential task of keeping him or her alive, but humans make the moral judgment whether to continue life support. As automation becomes more advanced, automating many of the weapon system’s functions could result in far greater accuracy, precision, and reliability than relying on humans. Automating the human’s role as moral agent or fail-safe, however, is far harder and would require major leaps forward in AI that do not appear on the horizon.

  THE ROLE OF THE HUMAN AS MORAL AGENT AND FAIL-SAFE

  The appeal of “centaur” human-machine teaming is that we don’t have to give up the benefits of human judgment to get the advantages of automation. We can have our cake and eat it too (at least in some cases). The U.S. counter-rocket, artillery, and mortar (C-RAM) system is an example of this approach. The C-RAM automates much of the engagement, resulting in greater precision and accuracy, but still keeps a human in the loop.

  The C-RAM is designed to protect U.S. bases from rocket, artillery, and mortar attacks. It uses a network of radars to automatically identify and track incoming rounds. Because the C-RAM is frequently used at bases where there are friendly aircraft in the sky, the system autonomously creates a “Do Not Engage Sector” around friendly aircraft to prevent fratricide. The result is a highly automated system that, in theory, is capable of safely and lawfully completing engagements entirely on its own. However, humans still perform final verification of each individual target before engagement. One C-RAM operator described the role the automation and human operators play:

  The human operators do not aim or execute any sort of direct control over the firing of the C-RAM system. The role of the human operators is to act as a final fail-safe in the process by verifying that the target is in fact a rocket or mortar, and that there are no friendly aircraft in the engagement zone. A [h]uman operator just presses the button that gives the authorization to the weapon to track, target, and destroy the incoming projectile.

  Thus, the C-RAM has a dual-safety mechanism, with both human and automated safeties. The automated safety tracks friendly aircraft in the sky with greater precision and reliability than human operators could, while the human can react to unforeseen circumstances. This model also has the virtue of ensuring that human operators must take a positive action before each engagement, helping to ensure human responsibility for each shot.
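  The division of labor the operator describes, an automated geometric safety plus a required positive human action, can be sketched in a few lines of code. The sketch below is a loose illustration only; the function names, the bearing-based sector check, and the button-press stand-in are assumptions for the sake of example, not the actual C-RAM logic.

```python
# A minimal, illustrative sketch of the dual-safety idea described above,
# not the actual C-RAM software. All names and the bearing-based sector
# check are assumptions made for illustration.

def in_do_not_engage_sector(intercept_bearing_deg, do_not_engage_sectors):
    """Automated safety: is the planned intercept bearing inside any sector
    reserved around friendly aircraft? Sectors are (start, end) bearings."""
    return any(start <= intercept_bearing_deg <= end
               for start, end in do_not_engage_sectors)

def authorize_engagement(intercept_bearing_deg, do_not_engage_sectors,
                         operator_confirms):
    """Both safeties must pass before the weapon may fire."""
    # Automated safety first: never fire into a Do Not Engage Sector.
    if in_do_not_engage_sector(intercept_bearing_deg, do_not_engage_sectors):
        return False
    # Human safety second: the operator verifies the track is a rocket,
    # artillery, or mortar round and takes a positive action (a button press).
    return operator_confirms()

if __name__ == "__main__":
    # A friendly aircraft occupies bearings 40-60 degrees; the incoming round
    # would be intercepted at bearing 120, and the operator approves.
    sectors = [(40.0, 60.0)]
    fire = authorize_engagement(120.0, sectors, operator_confirms=lambda: True)
    print("Engage" if fire else "Hold fire")
```

  The point of the structure is that neither safety alone is sufficient: the automated check handles what machines track best (fast, precise geometry around friendly aircraft), while the human's positive action preserves judgment and responsibility for each shot.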

  In principle, C-RAM’s blended use of automation and human decision-making is optimal. The human may not be able to prevent all accidents from occurring (after all, humans make mistakes), but the inclusion of a human in the loop dramatically reduces the potential for multiple erroneous engagements. If the system fails, the human can at least halt further operation of the weapon, whereas the automation itself may not understand that it is engaging the wrong targets.

  In order for human operators to actually perform the roles of moral agent and fail-safe, they must be trained for and supported by a culture of active participation in the weapon system’s operation. The Patriot fratricides stemmed from “unwarranted and uncritical trust in automation.” Ensuring human responsibility over engagements requires automation designed so that human operators can program their intent into the machine, human-machine interfaces that give humans the information they need to make informed decisions, training that requires operators to exercise judgment, and a culture that emphasizes human responsibility. When these best practices are followed, the result can be safe and effective systems like C-RAM, where automation provides valuable advantages but humans remain in control.

  THE LIMITS OF CENTAUR WARFIGHTING: SPEED

  The idealized centaur model of human-machine teaming breaks down, however, when actions are required faster than humans can react or when communications are denied between the human and machine.

  Chess is again a useful analogy. Centaur human-machine teams generally make better decisions in chess, but are not an optimal model in timed games where a player has only thirty to sixty seconds to make a move. When the time to decide is compressed, the human does not add any value compared to the computer alone, and may even be harmful by introducing errors. Over time, as computers advance, this time horizon is likely to expand until humans no longer add any value, regardless of how much time is allowed.

  Already, machines do a better job than humans alone in certain military situations. Machines are needed to defend against saturation attacks from missiles and rockets when the speed of engagements overwhelms human operators. Over time, as missiles incorporate more intelligent features, including swarming behavior, these defensive supervised autonomous weapons are likely to become even more important—and human involvement will necessarily decline.

  By definition, a human on the loop has weaker control than a human in the loop. If the weapon fails, there is a greater risk of harm and of lessened moral responsibility. Nevertheless, human supervision provides some oversight of engagements. The fact that supervised autonomous weapons such as Aegis have been in widespread use for decades suggests that these risks are manageable. In all of these situations, as an additional backup, humans have physical access to the weapon system so that they could disable it at the hardware level. Accidents have occurred with existing systems, but not catastrophes. A world with more defensive supervised autonomous weapons is likely to look not much different from the world of today.

  Why Use Supervised Autonomy?

  There will undoubtedly be offensive settings where speed is also valuable. In those cases, however, speed will be valuable in the execution of attacks, not necessarily in the decision to launch them. For example, swarming missiles will need to be delegated the authority to coordinate their behavior and deconflict targets, particularly if the enemy is another swarm. Humans have more choice over the time and place of attack for offensive applications, though. For some types of targets, it may not be feasible to have humans select every individual enemy object. This will especially be the case if militaries move to swarm warfare, with hundreds or thousands of robots involved. But there are weapon systems today—Sensor Fuzed Weapon and the Brimstone missile, for example—where humans choose a specific group of enemy targets and the weapons divvy up the targets themselves. A human selecting a known group of targets minimizes many of the concerns surrounding autonomous weapons while allowing the human to authorize an attack on the swarm as a whole, without having to specify each individual element.
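  The “divvy up” step itself does not require anything exotic. The sketch below shows one way a salvo might split a human-approved group of targets among themselves, using a simple nearest-target assignment; it is a hypothetical illustration, not the actual logic of the Sensor Fuzed Weapon or Brimstone.

```python
# Hypothetical sketch of weapons dividing up a human-approved group of
# targets: each approved target is assigned to the nearest unassigned weapon.
# Not the actual Sensor Fuzed Weapon or Brimstone targeting logic.

def assign_targets(weapon_positions, approved_targets):
    """Greedy one-to-one assignment of approved targets to weapons.
    Positions are (x, y) pairs; returns {weapon index: target index}."""
    assignments = {}
    unassigned_weapons = list(range(len(weapon_positions)))
    for target_index, (tx, ty) in enumerate(approved_targets):
        if not unassigned_weapons:
            break  # more approved targets than weapons in the salvo
        nearest = min(
            unassigned_weapons,
            key=lambda w: (weapon_positions[w][0] - tx) ** 2
                        + (weapon_positions[w][1] - ty) ** 2,
        )
        assignments[nearest] = target_index
        unassigned_weapons.remove(nearest)
    return assignments

if __name__ == "__main__":
    weapons = [(0, 0), (1, 0), (2, 0)]      # salvo positions
    targets = [(0.2, 5.0), (1.8, 5.0)]      # the human-approved group
    print(assign_targets(weapons, targets))  # e.g. {0: 0, 2: 1}
```

  Note that the human decision, which targets make up the approved group, sits entirely outside this function; the automation only allocates shots within the set a person has already authorized.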

  DEGRADED COMMUNICATIONS

  Human supervision is not possible when there are no communications with the weapon, such as in enemy airspace or underwater. But communications in contested areas are not an all-or-nothing proposition. Communications may be degraded but not necessarily denied. Advanced militaries have jam-resistant communications technology. Or perhaps a human in a nearby vehicle has some connection with an autonomous weapon to authorize engagements. In any event, some communication is likely possible. So how much bandwidth is required to keep a human in the loop?

  Not much. As one example, consider the following screen grab from a video of an F-15 strike in Iraq, a mere 12 kilobytes in size. While grainy, it clearly possesses sufficient resolution to distinguish individual vehicles. A trained operator could discriminate military-specific vehicles, such as a tank or mobile missile launcher, from dual-use vehicles such as buses or trucks.

  DARPA’s CODE program intends to keep a human in the loop via a communications link that could transmit 50 kilobits per second, roughly on par with a 56K modem from the 1990s. One kilobyte equals eight kilobits, so a 12-kilobyte image is 96 kilobits, and this low-bandwidth link could transmit one image of that quality roughly every two seconds. That would allow a human to view the target and decide within a few seconds whether or not to authorize an engagement.
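  As a back-of-envelope check on those numbers, the arithmetic works out as follows (a sketch using only the figures quoted above, the 50-kilobit-per-second link and the 12-kilobyte image):

```python
# Back-of-envelope check of the bandwidth figures cited above (illustrative
# arithmetic only; the 50 kbps link rate and 12 KB image size come from the text).
link_rate_kbps = 50                            # assumed CODE data link, kilobits per second
image_size_kilobytes = 12                      # targeting image size, kilobytes
image_size_kilobits = image_size_kilobytes * 8 # 12 KB = 96 kilobits

seconds_per_image = image_size_kilobits / link_rate_kbps  # 96 / 50 = 1.92
print(f"One {image_size_kilobytes} KB image every {seconds_per_image:.1f} seconds")
```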

  Targeting Image from F-15 Strike in Iraq (12 kilobytes in size). This targeting image from an F-15 strike in Iraq shows a convoy of vehicles approaching an intersection. Images like this one could be passed over relatively low-bandwidth networks for human operators to approve engagements.

  This reduced-bandwidth approach would not work in areas where communications are entirely denied. In such environments, semiautonomous weapons could engage targets that had been preauthorized by human controllers, as cruise missiles do today. This would generally only be practical for fixed targets, however (or a mobile target in a confined area with a readily identifiable signature). In these cases, accountability and responsibility would be clear, as a human would have made the targeting decision.

  But things get complicated quickly in communications-denied environments.

  Should uninhabited vehicles be able to defend themselves if they come under attack? Future militaries will likely deploy robotic vehicles and will want to defend them, especially if they are expensive. If there were no communications to a human, any defenses would need to be fully autonomous. Allowing autonomous self-defense incurs some risks. For example, someone could fire at the robot to get it to return fire and then hide behind human shields to deliberately cause an incident where the robot kills civilians. There would also be some risk of fratricide or unintended escalation in a crisis. Even rules of engagement (ROE) intended purely to be defensive could lead to interactions between opposing systems that result in an exchange of fire. Delegating self-defense authority would be risky. However, it is hard to imagine that militaries would be willing to put expensive uninhabited systems in harm’s way and leave them defenseless. Provided the defensive action was limited and proportionate, the risks might be manageable.

  While it seems unlikely that militaries would publicly disclose the specific ROE their robotic systems use, some degree of transparency between nations could help manage the risks of crisis escalation. A set of “rules of the road” for how robotic systems ought to behave in contested areas might help to minimize the risk of accidents and improve stability. Some rules, such as “if you shoot at a robot, expect it to shoot back,” would be self-reinforcing. Combined with a generally cautious “shoot second” rule requiring robots to withhold fire unless fired upon, such an approach is likely to be stabilizing overall. If militaries could agree on a set of guidelines for how they expect armed robotic systems to interact in settings where there is no human oversight, this would greatly help to manage a problem that is sure to surface as more nations field weaponized robotic systems.
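  A “shoot second” rule of this kind is simple enough to state in code as well as in prose. The sketch below is purely illustrative; the condition names and the proportionality check are assumptions, not any military’s actual rules of engagement.

```python
# Illustrative sketch of a "shoot second" defensive rule of engagement:
# the robot withholds fire unless fired upon, and any return fire is kept
# limited and proportionate. Names and checks are assumptions, not any
# fielded system's actual ROE.

def defensive_response(under_attack, shooter_identified, response_is_proportionate):
    """Return the robot's action under a shoot-second ROE."""
    if not under_attack:
        return "hold fire"             # never fire first
    if not shooter_identified:
        return "evade and report"      # attacked, but no clear shooter to engage
    if not response_is_proportionate:
        return "evade and report"      # keep any response limited and proportionate
    return "return fire at shooter"

if __name__ == "__main__":
    print(defensive_response(False, False, False))  # hold fire
    print(defensive_response(True, True, True))     # return fire at shooter
```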

  Why Use Full Autonomy?

  Hunting mobile targets in communications-denied areas presents the greatest challenge for maintaining human supervision. Ships, air defense systems, and missile launchers are all harder to hit precisely because their movement makes it difficult to find them. In an ideal world, a swarm of robotic systems would search for these targets, relay the coordinates and a picture back to a human controller for approval (as CODE intends), and then the swarm would attack only human-authorized targets. If a communications link is not available, however, then fully autonomous weapons could be used to search for, select, and attack mobile targets on their own.

  There is no doubt that such weapons would be militarily useful. They would also be risky. In these situations, there would be no ability to recall or abort the weapon if it failed, was hacked, or was manipulated into attacking the wrong target. Unlike a defensive counterfire response, the weapon’s actions would not be limited and proportionate. It would be going on the attack, searching for targets. Given the risks that such weapons would entail, it is worth asking whether their military value would be worth the risk.

  When I asked Captain Galluch from the Aegis training center what he thought of the idea of a fully autonomous weapon, he asked, “What application are we trying to solve?” It’s an important question. For years, militaries have had the ability to build loitering munitions that would search for targets over a wide area and destroy them on their own. With a few exceptions like the TASM and Harpy, these weapons have not been developed. There are no known examples of them being used in a conflict. Fully autonomous weapons might be useful, but it’s hard to make the case for them as necessary outside of the narrow case of immediate self-defense.

  The main rationale for building fully autonomous weapons seems to be the assumption that others might do so. Even the most strident supporters of military robotics have been hesitant about fully autonomous weapons . . . unless others build them. This is a valid problem—and one that could become a self-fulfilling prophecy. The fear that others might build autonomous weapons could be the very thing that drives militaries to build them. Jody Williams asked me, “If they don’t exist, there is no military necessity and are we not, in fact, creating it?”

  Michael Carl Haas, who raised concerns about crisis stability, suggested that countries explore “mutual restraint” as an option to avoid the potentially dangerous consequences of fully autonomous weapons. Others have suggested that such weapons are inevitable. The history of arms control provides evidence for both points of view.

  20

  THE POPE AND THE CROSSBOW

  THE MIXED HISTORY OF ARMS CONTROL

  In the summer of 2015, a group of prominent AI and robotics researchers signed an open letter calling for a ban on autonomous weapons. “The key question for humanity today,” they wrote, “is whether to start a global AI arms race or to prevent it from starting. If any major military power pushes ahead with AI weapon development, a global arms race is virtually inevitable.”

  There have been many attempts to control weapons in the past. Some have succeeded, but many have failed. Pope Innocent II banned the use of the crossbow (against Christians) in 1139. There is no evidence that the ban had any effect in slowing the proliferation of the crossbow across medieval Europe. In the early twentieth century, European nations tried to cooperate on rules restricting submarine warfare and banning air attacks on cities. These attempts failed. On the other hand, attempts to restrain chemical weapons use failed in World War I but succeeded in World War II. All the major powers had chemical weapons in World War II but did not use them (on each other). Today, chemical weapons are widely reviled, although their continued use by Bashar al-Assad in Syria shows that no ban is absolute. The Cold War saw a host of arms control treaties, many of which remain in place today. Some treaties, such as bans on biological weapons, blinding lasers, and using the environment as a weapon of war, have been highly successful. In recent years, humanitarian campaigns have led to bans on land mines and cluster munitions, although these treaties have not been as widely adopted and the weapons remain in use by many states. Finally, nonproliferation treaties have been able to slow, but not entirely stop, the proliferation of nuclear weapons, ballistic missiles, and other dangerous technologies.

  Types of Weapons Bans. Weapons bans can target different stages of the weapons production process, from developing the technology through developing the weapon, production, and use: non-proliferation treaties aim to prevent access to the technology, some bans prohibit developing a weapon, arms limitation treaties limit the quantities of a weapon, and some bans only prohibit or regulate use.

  These successes and failures provide lessons for those who wish to control autonomous weapons. The underlying technology that enables autonomous weapons is too diffuse, commercially available, and easy to replicate to stop its proliferation. Mutual restraint among nations on how they use this technology may be possible, but it certainly won’t be easy.

  WHY SOME BANS SUCCEED AND OTHERS FAIL

  Whether or not a ban succeeds seems to depend on three key factors: the perceived horribleness of the weapon; its perceived military utility; and the number of actors who need to cooperate for a ban to work. If a weapon is seen as horrific and only marginally useful, then a ban is likely to succeed. If a weapon brings decisive advantages on the battlefield, then a ban is unlikely to work, no matter how terrible it may seem. The difference between how states have treated chemical weapons and nuclear weapons illustrates this point. Nuclear weapons are unquestionably more harmful than chemical weapons by any measure: civilian casualties, combatant suffering, and environmental damage. Nuclear weapons give a decisive advantage on the battlefield, though, which is why the Nuclear Non-Proliferation Treaty’s goal of global nuclear disarmament remains unrealized. Chemical weapons, on the other hand, have some battlefield advantages, but are far from decisive. Had Saddam Hussein used them against the United States, the result might have been more U.S. casualties, but it would not have changed the course of the first Gulf War or the 2003 Iraq War.

  Successful and Unsuccessful Weapons Bans (table): Era, Weapon, Year, and Regulation or Treaty, beginning with the pre-modern era.
