Army of None

by Paul Scharre

  For some, the answers to these questions are simple. Williams told me, “You know the difference between a good robot and a bad robot.” A good robot was one that saved lives, like a firefighting robot. “You give the sucker a machine gun and set it loose, that’s a bad robot,” she said.

  But not everyone thinks it’s so simple. For Ron Arkin, a good robot is one that fights wars more justly and humanely than humans, saving noncombatant lives. Arkin pointed out that even in the Terminator movies, there were good Terminators. In Japanese culture, robots are often seen as protectors and saviors. Some people see autonomous weapons as inherently wrong. Others don’t.

  For some, the consequentialist view prevails. Ken Anderson told me he had serious problems ruling out a potentially beneficial technology based on a “beyond-IHL principle of human dignity.” It would put militaries in the backwards position of accepting more battlefield harm and more deaths for an abstract concept. He said militaries would be, in effect, saying to those who were killed by human targeting, “Listen, you didn’t have to be killed here had we followed IHL and used the autonomous weapon as being the better one in terms of reducing battlefield harm. You wouldn’t have died. But that would have offended your human dignity . . . and that’s why you’re dead. Hope you like your human dignity.” For Anderson, human life trumps human dignity.

  Christof Heyns acknowledged the possibility that the consequentialist point of view and the deontological might conflict. Heyns said that if autonomous weapons do turn out to be better than humans at avoiding civilians, “then we must ask ourselves whether . . . dignity and this issue of accountability and not arbitrariness, that those are important enough to say that we don’t want an instrument, even if it can save lives.” Heyns said he didn’t know the answer. Rather, it is a “question that those of us who say that these weapons should be banned, that we need to answer for ourselves.”

  It’s hard to say that one perspective is more right than others. Even consequentialists like Ron Arkin acknowledge the deontological issues at play. Arkin told me his hope was that we could use autonomous targeting to reduce civilian deaths in war, “as long as we don’t lose our soul in doing it.” The challenge is figuring out whether there is a way to do both. The strongest ethical objection to autonomous weapons is that as long as war exists, as long as there is human suffering, someone should suffer the moral pain of those decisions. There are deontological reasons for maintaining human responsibility for killing: it weakens our morality to hand the moral burden of war over to machines. There are also consequentialist arguments for doing so, because the moral pain of killing is the only check on the worst horrors of war. This is not about autonomous targeting per se, but rather how it changes humans’ relationship with violence and how they feel about killing as a result.

  Generals William Tecumseh Sherman and Curtis LeMay are an interesting contrast in how warfighters can feel about the violence they mete out in pursuit of victory. Both waged total war, LeMay on a scale that Sherman could never have imagined. There’s no evidence LeMay was ever troubled by his actions, which resulted in the deaths of hundreds of thousands of Japanese civilians. He said:

  Killing Japanese didn’t bother me very much at that time . . . I suppose if I had lost the war, I would have been tried as a war criminal. . . . Every soldier thinks something of the moral aspects of what he is doing. But all war is immoral and if you let that bother you, you’re not a good soldier.

  Sherman, on the other hand, didn’t shy from war’s cruelty but also felt its pain:

  I am tired and sick of war. Its glory is all moonshine. It is only those who have neither fired a shot nor heard the shrieks and groans of the wounded who cry aloud for blood, for vengeance, for desolation. War is hell.

  If there were no one to feel that pain, what would war become? If there were no one to hear the shrieks and groans of the wounded, what would guard us from the worst horrors of war? What would protect us from ourselves?

  For it is humans who kill in war, whether from a distance or up close and personal. War is a human failing. Autonomous targeting would change humans’ relationship with killing in ways that may be good and may be bad. But it may be too much to ask technology to save us from ourselves.

  * IEEE = Institute of Electrical and Electronics Engineers; RAS = IEEE Robotics and Automation Society.

  18

  PLAYING WITH FIRE

  AUTONOMOUS WEAPONS AND STABILITY

  Just because something is legal and ethical doesn’t mean it is wise. Most hand grenades around the world have a fuse three to five seconds long. No treaty mandates this—logic does. Too short a fuse, and the grenade will blow up in your face right after you throw it. Too long a fuse, and the enemy might pick it up and throw it back your way.

  Weapons are supposed to be dangerous—that’s the whole point—but only when you want them to be. There have been situations in the past where nations have come together to regulate or ban weapons that were seen as excessively dangerous. This was not because they caused unnecessary suffering to combatants, as was the case for poison gas or weapons with non-x-ray-detectable fragments. And it wasn’t because the weapons were seen as causing undue harm to civilians, as was the case with cluster munitions and land mines. Rather, the concern was that these weapons were “destabilizing.”

  During the latter half of the twentieth century, a concept called “strategic stability” emerged among security experts.* Stability was a desirable thing. Stability meant maintaining the status quo: peace. Instability was seen as dangerous; it could lead to war. Today experts are applying these concepts to autonomous weapons, which have the potential to undermine stability.

  The concept of stability first emerged in the 1950s among U.S. nuclear theorists attempting to grapple with the implications of these new and powerful weapons. As early as 1947, U.S. officials began to worry that the sheer scale of nuclear weapons’ destructiveness gave an advantage to whichever nation struck first, potentially incentivizing the Soviet Union to launch a surprise nuclear attack. This vulnerability of U.S. nuclear forces to a surprise Soviet attack therefore gave the United States a reason to strike first itself, if war appeared imminent. Knowing this, of course, only further incentivized the Soviet Union to strike first in the event of possible hostilities. This dangerous dynamic captures the essence of what theorists call “first-strike instability,” a situation in which adversaries face off like gunslingers in the Wild West, each poised to shoot as soon as the other reaches for his gun. As strategist and Nobel laureate Thomas Schelling explained the dilemma, “we have to worry about his striking us to keep us from striking him to keep him from striking us.” The danger is that instability itself can create a self-fulfilling prophecy in which one side launches a preemptive attack, fearing an attack from the other.
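
  To make Schelling’s dilemma concrete, it can be sketched as a toy two-player game. The payoff numbers below are purely illustrative assumptions (they come from neither Schelling nor this book); they are chosen only so that peace is the best mutual outcome, striking first beats being struck first, and mutual striking harms both:

    # A minimal sketch of first-strike instability; all payoffs are hypothetical.
    # Each side chooses WAIT or STRIKE; payoffs are (side A, side B).
    WAIT, STRIKE = "wait", "strike"

    PAYOFFS = {
        (WAIT, WAIT):     (3, 3),    # peace: the best outcome for both
        (STRIKE, WAIT):   (1, -3),   # A preempts successfully
        (WAIT, STRIKE):   (-3, 1),   # B preempts successfully
        (STRIKE, STRIKE): (-2, -2),  # both launch; mutual devastation
    }

    def best_response_a(b_move):
        """Side A's payoff-maximizing move, given a fixed belief about B."""
        return max((WAIT, STRIKE), key=lambda a_move: PAYOFFS[(a_move, b_move)][0])

    print(best_response_a(WAIT))    # -> wait: peace holds if A trusts B to wait
    print(best_response_a(STRIKE))  # -> strike: fear of attack compels preemption

  Under these assumed payoffs, striking is not attractive in itself; it becomes each side’s best response only out of fear that the other will strike, which is exactly the self-fulfilling dynamic described above.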

  The United States took steps to reduce its first-strike vulnerability, but over time this evolved into a desire for “stability” more generally. Stability takes into account the perspective of both sides and often involves strategic restraint. A country should avoid deploying its military forces in a way that threatens an adversary with a surprise attack, thus incentivizing him to strike first. A stable situation, as Schelling described it, is “when neither in striking first can destroy the other’s ability to strike back.”

  A stable equilibrium is one that, if disturbed by an outside force, returns to its original state. A ball sitting at the bottom of a bowl is at a stable equilibrium. If the ball is moved slightly, it will return to the bottom of the bowl. Conversely, an unstable equilibrium is one where a slight disturbance will cause the system to rapidly transition to an alternate state, like a pencil balanced on its tip. Any slight disturbance will cause the pencil to tip over to one side. Nuclear strategists prefer the former to the latter.
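
  In the language of elementary dynamical systems (a formalization added here as a sketch; none of this notation appears in the original), the two pictures can be written compactly. For a system \dot{x} = f(x) with an equilibrium x^* where f(x^*) = 0, a small perturbation \epsilon(t) = x(t) - x^* evolves approximately as

    \dot{\epsilon} \approx f'(x^*)\,\epsilon
    \quad\Longrightarrow\quad
    \epsilon(t) \approx \epsilon_0\, e^{f'(x^*)\,t}

  If f'(x^*) < 0, the perturbation decays and the system settles back (the ball in the bowl: stable); if f'(x^*) > 0, the perturbation grows exponentially (the pencil on its tip: unstable).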

  Beyond “first-strike stability” (sometimes called “first-mover advantage”), several variants of stability have emerged. “Crisis stability” is concerned with avoiding conditions that might escalate a crisis. These could include perverse incentives for deliberate escalation (“strike them before they strike us”) or accidental escalation (say a low-level commander takes matters into his or her own hands). Automatic escalation by predelegated actions—to humans or machines—is another concern, as is misunderstanding an adversary’s actions or intentions. (Recall the movie WarGames, in which a military supercomputer confuses a game with reality and almost initiates nuclear war.) Crisis stability is about ensuring that any escalation in hostilities between countries is a deliberate choice on the part of their national leadership, not an accident, a miscalculation, or the product of perverse incentives to strike first. Elbridge Colby explained in Strategic Stability: Contending Interpretations, “In a stable situation, then, major war would only come about because one party truly sought it.”

  Accidental war may seem like a strange concept—how could a war begin by accident? But Cold War strategists worried a great deal about the potential for false alarms, miscalculations, or accidents to precipitate conflict. If anything, history suggests they should have worried more. The Cold War was rife with nuclear false alarms, misunderstandings, and near-use incidents that could have potentially led to a nuclear attack. Even in conventional crises, confusion, misunderstanding enemy intentions, and the fog of war have often played a role in escalating tensions.

  “War termination” is another important component of escalation control. Policymakers need the same degree of control over ending a war as they do—or should—over starting one. If policymakers lack control over their forces, because attack orders cannot be recalled or communications links are severed, or if de-escalation could leave a nation vulnerable, they may not be able to de-escalate a conflict even if they wanted to.

  Strategists also analyze the offense-defense balance. An “offense-dominant” warfighting regime is one in which it is easier to conquer territory; a defense-dominant regime is one in which it is harder. Machine guns, for example, favor the defense. It is extremely difficult to gain ground against a fortified machine gun position; in World War I, millions died in relatively static trench warfare. Tanks, on the other hand, favor the offense because of their mobility. In World War II, Germany blitzkrieged across large swaths of Europe, rapidly seizing territory. (Offense-defense balance is subtly different from first-strike stability, which is about whether there is an advantage in making the first move.) In principle, defense-dominant warfighting regimes are more stable, since territorial aggression is more costly.

  Strategic stability has proven to be an important intellectual tool for mitigating the risks of nuclear weapons, especially as technologies have advanced. How specific weapons affect stability can sometimes be counterintuitive, however. One of the most important weapon systems for ensuring nuclear stability is the ballistic missile submarine, an offensive strike weapon. Extremely difficult to detect and able to stay underwater for months at a time, submarines give nuclear powers an assured second-strike capability. Even if a surprise attack somehow wiped out all of a nation’s land-based nuclear missiles and bombers, the enemy could be assured that even a single surviving submarine could deliver a devastating attack. This effectively removes any first-mover advantage. The omnipresent threat of ballistic missile submarines at sea, hiding and ready to strike back, is a strong deterrent to a first strike and helps ensure stability.

  In some cases, defensive weapons can be destabilizing. National missile defense shields, while nominally defensive, were seen as highly destabilizing during the Cold War because they could undermine the viability of an assured second-strike deterrent. Intercepting ballistic missiles is costly, and even the best missile defense shield could not hope to stop a massive, overwhelming attack. However, a missile defense shield could potentially stop a very small number of missiles. This might allow a country to launch a surprise nuclear first strike, wiping out most of the enemy’s nuclear missiles, and use the missile defense shield to protect against the rest. A shield could make a first strike more viable, potentially creating a first-mover advantage and undermining stability.
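
  A back-of-the-envelope sketch makes the asymmetry plain. Every number below is a hypothetical assumption chosen only to illustrate the logic, not an actual force level or interceptor performance figure:

    # Hypothetical arithmetic: a limited shield barely matters against a full
    # arsenal but looks decisive against a post-first-strike remnant.
    def leakers(incoming, interceptors=100, p_kill=0.8):
        """Expected missiles that get through: each interceptor engages one
        incoming missile and kills it with probability p_kill."""
        engaged = min(incoming, interceptors)
        return incoming - engaged * p_kill

    # A full arsenal of 1,000 launched at the shield: hopeless defense.
    print(leakers(1000))  # -> 920.0 warheads leak through

    # After a surprise first strike destroys 95% of that arsenal,
    # only 50 survivors face the same shield.
    print(leakers(50))    # -> 10.0 warheads leak through

  Even ten leakers would be catastrophic, but the sketch shows the asymmetry: the shield is most valuable precisely when paired with a first strike, which is why nominally defensive shields were viewed as destabilizing.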

  For other technologies, the effect on stability was more intuitive. Satellites were seen as stabilizing during the Cold War since they gave each country the ability to observe the other’s territory. This allowed them to confirm (or deny) whether the other had launched nuclear weapons or was trying to break out and gain a serious edge in the arms race. Attacking satellites in an attempt to blind the other nation was therefore seen as highly provocative, since it could be a prelude to an attack (and the now-blind country would have no way of knowing whether an attack was, in fact, under way). Placing nuclear weapons in space, on the other hand, was seen as destabilizing because it could dramatically shorten the warning time available to the defender if an opponent launched a surprise attack. Not only did this make a surprise attack more feasible, but with less warning time the defender might be more likely to respond to false alarms, undermining crisis stability.

  During the Cold War, and particularly at its end, the United States and Soviet Union engaged in a number of unilateral and cooperative measures designed to increase stability and avoid potentially unstable situations. After all, despite their mutual hostility, neither side was interested in an accidental nuclear war. These efforts included a number of international treaties regulating or banning certain weapons. The Outer Space Treaty (1967) bans placing nuclear weapons in space or weapons of any kind on the moon. The Seabed Treaty (1971) forbids placing nuclear weapons on the floor of the ocean. The Environmental Modification Convention (1977) prohibits using the environment as a weapon of war. The Anti-Ballistic Missile (ABM) Treaty (1972) strictly limited the number of strategic missile defenses the Soviet Union and the United States could deploy in order to prevent the creation of robust national missile defense shields. (The United States withdrew from the ABM Treaty in 2002.) The Intermediate-Range Nuclear Forces (INF) Treaty (1987) bans intermediate-range nuclear missiles, which were seen as particularly destabilizing, since there would be very little warning time before they hit their targets.

  In other cases, there were tacit agreements between the United States and Soviet Union not to pursue certain weapons that might have been destabilizing, even though no formal treaties or agreements were ever signed. Both countries successfully demonstrated antisatellite weapons, but neither pursued large-scale operational deployment. Similarly, both developed limited numbers of neutron bombs (a “cleaner” nuclear bomb that kills people with radiation but leaves buildings intact), but neither side openly pursued large-scale deployment. Neutron bombs were seen as horrifying since they could allow an attacker to wipe out a city’s population without damaging its infrastructure. This could make their use potentially more likely, since an attacker could use the conquered territory without fear of harmful lingering radiation. In the late 1970s, U.S. plans to deploy neutron bombs to Europe caused considerable controversy, forcing the United States to change course and halt deployment.

  The logic of stability also applies to weapons below the nuclear threshold. Ship-launched anti-ship missiles, for example, create a significant first-mover advantage in naval warfare. The side that strikes first, by sinking some fraction of the enemy’s fleet, instantly reduces the number of enemy missiles that threaten it, a decisive advantage. Many technologies will not significantly affect stability one way or the other, but some military technologies do have strategic effects. Autonomous weapons, along with space/counter-space weapons and cyberweapons, rank among the most important emerging technologies that should be continually evaluated in that context.
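
  A toy salvo exchange (loosely in the spirit of naval salvo models; every parameter is a hypothetical assumption) shows how the first blow compounds:

    # A one-round salvo exchange with hypothetical numbers, illustrating
    # first-mover advantage: the reply salvo comes only from survivors.
    def salvo_exchange(a_ships, b_ships, missiles_per_ship=4, p_hit=0.1):
        """A fires first; B shoots back with whatever remains afloat.
        Returns the expected (a_remaining, b_remaining)."""
        b_after = max(0.0, b_ships - a_ships * missiles_per_ship * p_hit)
        a_after = max(0.0, a_ships - b_after * missiles_per_ship * p_hit)
        return a_after, b_after

    print(salvo_exchange(10, 10))  # -> (7.6, 6.0)

  Two identical ten-ship fleets begin perfectly symmetric, yet the side that fires first ends the round with 7.6 expected ships to the other’s 6.0, because the reply salvo is thinned before it is ever launched.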

  AUTONOMOUS WEAPONS AND STABILITY

  Michael Horowitz began exploring similar questions in a recent monograph, “Artificial Intelligence, War, and Crisis Stability.” For starters, he argued we should distinguish between “what is unique about autonomy, per se, versus what are things that autonomy accentuates.”

  Autonomous weapons could come in many forms, from large intercontinental bombers to small ground robots or undersea vehicles. They could have long ranges or short ranges, heavy payloads or light payloads. They could operate in the air, on land, at sea, undersea, in space, or in cyberspace. Speculating about their first-strike stability or offense-defense balance implications is thus very challenging. Autonomous weapons will be subject to the same physical constraints as other weapons. For example, ballistic missile submarines are stabilizing in part because it is difficult to find and track objects underwater, making them survivable in the event of a first strike. The defining feature of autonomous weapons is how target selection and engagement decisions are made. Thus we should evaluate their impact on stability relative to semiautonomous weapons with similar physical characteristics but a human in the loop.

  This makes it important to separate the effects of robotics and automation in general from autonomous targeting in particular. Militaries are investing heavily in robotics, and as the robotics revolution matures, it will almost certainly alter the strategic balance in significant ways. Some analysts have suggested that robot swarms will lead to an offense-dominant regime, since swarms could be used to overwhelm defenders. Others have raised concerns that robots might lower the threshold for the use of force by reducing the risk of loss of life to the attacker. These outcomes are possible, but they often presuppose a world where only the attacker has robotic weapons and not the defender, which is probably not realistic. When both sides have robots, the offense-defense balance may look different. Swarms could be used for defense too, and it isn’t clear whether swarming and robotics on balance favor the offense or the defense.

 
