Army of None


by Paul Scharre


  The assumption that robots would result in fewer casualties also deserves a closer look. Robots allow warfighters to attack from greater distances, but this has been the trend in warfare for millennia, from the invention of the sling and stone to the intercontinental ballistic missile. Increased range has yet to lead to bloodless wars. As weapons increase in range, the battlefield simply expands. People have moved from killing each other at short range with spears to killing each other across oceans with intercontinental missiles. The violence, however, is always inflicted on people. It will always be so, because it is pain and suffering that causes the enemy to surrender. The more relevant question is how fully autonomous weapons might alter the strategic balance relative to semiautonomous weapons. Horowitz suggested it was useful to start by asking, “When is it that you would deploy these systems in the first place?”

  COMMUNICATIONS: OFFENSE-DEFENSE BALANCE, RESILIENCE, AND RECALLABILITY

  One advantage of fully autonomous weapons over semiautonomous or supervised autonomous ones is that they do not require a communications link back to human controllers. This makes them more resilient to communications disruption.

  Communications are more likely to be challenging on the offense, when one is operating inside enemy territory and subject to jamming. For some defensive applications, one can use hardwired cables in prepared positions that cannot be jammed. For example, the South Korean SGR-A1 robotic sentry gun on the DMZ could be connected to human controllers via buried cables. There would be no need for a fully autonomous mode. Even if speed required immediate reaction (which is unlikely for antipersonnel applications), human supervision would still be possible. For some applications such as the Aegis, humans are physically colocated with the weapon system, making communications a nonissue. Fully autonomous weapons without any human supervision would be most useful on the offensive. It would be a leap to say that they would necessarily lead to an offense-dominant regime, as that would depend on a great many other factors unrelated to autonomy. In general, autonomy benefits both offense and defense; many nations already use defensive supervised autonomous weapons. But fully autonomous weapons would seem to benefit the offense more.

  With respect to first-mover advantage, if a country required a human in the loop for each targeting decision, an adversary might be able to diminish its offensive capabilities by attacking its communications links, such as by striking vulnerable satellites. If a military can fight effectively without reliable communications because of autonomous weapons, that lowers the benefit of a surprise attack against its communications. Autonomous weapons, therefore, might increase stability by reducing incentives for a first strike.

  But the ability to continue attacking even if communications are severed poses a problem for escalation control and war termination. If commanders decide they wish to call off the attack, they would have no ability to recall fully autonomous weapons.

  This is analogous to the Battle of New Orleans during the War of 1812. Great Britain and the United States signed a treaty ending the war on December 24, 1814, but news did not reach British and American forces until six weeks later. The Battle of New Orleans was fought after the treaty was signed but before news reached the front. Roughly two thousand British soldiers and sailors were killed, wounded, or captured fighting a war that was, on paper, already over.

  SPEED AND CRISIS STABILITY

  While the ability to carry out attacks without communications has a mixed effect on stability, autonomous weapons’ advantage in speed is decidedly negative. Autonomous weapons risk accelerating the pace of battle and shortening time for human decision-making. This heightens instability in a crisis. Strategist Thomas Schelling wrote in Arms and Influence:

  The premium on haste—the advantage, in case of war, in being the one to launch it or in being a quick second in retaliation if the other side gets off the first blow—is undoubtedly the greatest piece of mischief that can be introduced into military forces, and the greatest source of danger that peace will explode into all-out war.

  Crises are rife with uncertainty and potential for miscalculation, and as Schelling explained, “when speed is critical, the victim of an accident or false-alarm is under terrible pressure.” Some forms of autonomy could help to reduce these time pressures. Semiautonomous weapons that find and identify targets could be stabilizing, since they could buy more time for human decision-makers. Fully autonomous and supervised autonomous weapons short-circuit human decision-making, however, speeding up engagements. With accelerated reactions and counterreactions, humans could struggle to understand and control events. Even if everything functioned properly, policymakers could nevertheless effectively lose the ability to control escalation as the speed of action on the battlefield begins to eclipse their speed of decision-making.

  REMOVING THE HUMAN FAIL-SAFE

  In a fast-paced environment, autonomous weapons would remove a vital fail-safe against unwanted escalation: human judgment. Stanislav Petrov’s fateful decision in bunker Serpukhov-15 represents an extreme case of the benefits of human judgment, but there are many more examples from crisis situations. Schelling wrote about the virtues of

  restraining devices for weapons, men, and decision-processes—delaying mechanisms, safety devices, double-check and consultation procedures, conservative rules for responding to alarms and communication failure, and in general both institutions and mechanisms for avoiding an unauthorized firing or a hasty reaction to untoward events.

  Indeed, used in the right way, automation can provide such safeties, as in the case of automatic braking on cars. Automation increases stability when it is additive to human judgment, but not when it replaces human judgment. When autonomy accelerates decisions, it can lead to haste and unwanted escalation in a crisis.

  COMMAND-AND-CONTROL AND THE PSYCHOLOGY OF CRISIS DECISION-MAKING

  Stability is as much about perceptions and human psychology as it is about the weapons themselves. Two gunslingers staring each other down aren’t interested only in their opponent’s accuracy, but also in what is in the mind of the other fighter. Machines today are woefully unequipped to perform this kind of task. Machines may outperform humans in speed and precision, but current AI cannot perform theory-of-mind tasks such as imagining another person’s intent. At the tactical level of war, this may not be important. Once the gunslinger has made a decision to draw his weapon, automating the tasks of drawing, aiming, and firing would undoubtedly be faster than doing it by hand. Likewise, once humans have directed that an attack should occur, autonomous weapons may be more effective than humans in carrying out the attack. Crises, however, are periods of militarized tension between nations that have the potential to escalate into full-blown war, but where nations have not yet made the decision to escalate. Even once war begins, war among nuclear powers will by necessity be limited. In these situations, countries are attempting to communicate their resolve—their willingness to escalate if need be—but without actually escalating the conflict. This is a delicate balancing act. Unlike a battle, which is fought for tactical or operational advantage, these situations are ultimately a form of communication between national leaders, where intentions are communicated through military actions. Michael Carl Haas of ETH Zurich argues that using autonomous weapons invites another actor into the conversation, the AI itself:

  [S]tates [who employ autonomous weapons] would be introducing into the crisis equation an element that is beyond their immediate control, but that nonetheless interacts with the human opponent’s strategic psychology. In effect, the artificial intelligence (AI) that governs the behavior of autonomous systems during their operational employment would become an additional actor participating in the crisis, though one who is tightly constrained by a set of algorithms and mission objectives.

  Command-and-control refers to the ability of leaders to effectively marshal their military forces for a common goal and is a frequent concern in crises. National leaders do not have perfect control over their forces, and warfighters can and sometimes do take actions inconsistent
with their national leadership’s intent, whether out of ignorance, negligence, or deliberate attempts to defy authorities. The 1962 Cuban Missile Crisis was rife with such incidents. On October 26, ten days into the crisis, authorities at Vandenberg Air Force Base carried out a scheduled test launch of an Atlas ICBM without first checking with the White House. The next morning, on October 27, an American U-2 surveillance plane was shot down while flying over Cuba, despite orders by Soviet Premier Nikita Khrushchev not to fire on U.S. surveillance aircraft. (The missile appears to have been fired by Soviet air defense commanders in Cuba acting without authorization from Moscow.) Later that same day, another U-2 flying over the Arctic Circle accidentally strayed into Soviet territory. Soviet and American leaders could not know for certain whether these incidents were intentional signals by the adversary to escalate or individual units acting on their own. Events like these have the potential to ratchet up tensions through inadvertent or accidental escalation.

  In theory, autonomous weapons ought to be the perfect soldier, carrying out orders precisely, without any deviation. This might eliminate some incidents. For example, on October 24, 1962, when U.S. Strategic Air Command (SAC) was ordered to DEFCON 2, just one step short of nuclear war, SAC commander General Thomas Power deviated from his orders by openly broadcasting a message to his troops on an unencrypted radio channel. The unauthorized broadcast revealed heightened U.S. readiness levels to the Soviets, who could listen in. Unlike people, autonomous weapons would be incapable of violating their programming. On the other hand, their brittleness and inability to understand the context for their actions would be a major liability in other ways. The Vandenberg ICBM test, for example, was caused by officers following preestablished guidance without pausing to ask whether that guidance still applied in light of new information (the unfolding crisis over Cuba).

  Often, the correct decision in any given moment depends not on rigid adherence to guidance, but rather on understanding the intent behind the guidance. Militaries have a concept of “commander’s intent,” a succinct statement given by commanders to subordinates describing the commander’s goals. Sometimes, meeting the commander’s intent requires deviating from the plan because of new facts on the ground. Humans are not perfect, but they are capable of using their common sense and better judgment to comply with the intent behind a rule, rather than the rule itself. Humans can disobey the rules, and in tense situations, counterintuitively, that may be a good thing.

  At the heart of the matter is whether more flexibility in how subordinates carry out directions is a good thing or a bad thing. On the battlefield, greater flexibility is generally preferred, within broad bounds of the law and rules of engagement. In “Communicating Intent and Imparting Presence,” Lieutenant Colonel Lawrence Shattuck wrote:

  If . . . the enemy commander has 10 possible courses of action, but the friendly commander, restricted by the senior commander, still has only one course of action available, the enemy clearly has the advantage.

  In crises, tighter control over one’s forces is generally preferred, since even small actions can have strategic consequences. Zero flexibility for subordinates with no opportunity to exercise common sense is a sure invitation to disaster, however. Partly, this is because national leaders cannot possibly foresee all eventualities. War is characterized by uncertainty. Unanticipated circumstances will arise. War is also competitive. The enemy will almost certainly attempt to exploit rigid behavioral rules for their own advantage. Michael Carl Haas suggests these tactics might include:

  relocating important assets to busy urban settings or next to inadmissible targets, such as hydroelectric dams or nuclear-power stations; altering the appearance of weapons and installations to simulate illegitimate targets, and perhaps even the alteration of illegitimate targets to simulate legitimate ones; large-scale use of dummies and obscurants, and the full panoply of electronic deception measures.

  Even without direct hacking, autonomous weapons could be manipulated by exploiting vulnerabilities in their rules of engagement. Humans might be able to see these ruses or deceptions for what they are and innovate on the fly, in accordance with their understanding of commander’s intent. Autonomous weapons would follow their programming. On a purely tactical level, other benefits to autonomous weapons may outweigh this vulnerability, but in crises, when a single misplaced shot could ratchet up tensions toward war, careful judgment is needed.

  In recent years, the U.S. military has begun to worry about the problem of the “strategic corporal.” The basic idea is that a relatively low-ranking individual could, through his or her actions on the battlefield, have strategic effects that determine the course of the war. The solution to this problem is to better educate junior leaders on the strategic consequences of their actions in order to improve their decision-making, rather than giving them a strict set of rules to follow. Any set of rules followed blindly and without regard to the commander’s intent can be manipulated by a clever enemy. Autonomous weapons would do precisely what they are told, regardless of how dumb or ill-conceived the orders appear in the moment. Their rigidity might seem appealing from a command-and-control standpoint, but the result is the strategic corporal problem on steroids.

  There is precedent for concerns about the strategic consequences of automation. During development of the Reagan-era “Star Wars” missile defense shield, officially called the Strategic Defense Initiative, U.S. lawmakers wrote a provision into the 1988–1989 National Defense Authorization Act mandating a human in the loop for certain actions. The law requires “affirmative human decision at an appropriate level of authority” for any systems that would intercept missiles in the early phases of their ascent. Intercepts at these early stages can be problematic because they must occur on very short timelines and near an adversary’s territory. An automated system could conceivably mistake a satellite launch or missile test for an attack and, by destroying another country’s rocket, needlessly escalate a crisis.

  Even if mistakes could be avoided, there is a deeper problem with leaders attempting to increase their command-and-control in crises by directly programming engagement rules into autonomous weapons: leaders themselves may not be able to accurately predict what decisions they would want to take in the future. “Projection bias” is a cognitive bias where humans incorrectly project their current beliefs and desires onto others and even their future selves.

  To better understand what this might mean for autonomous weapons, I reached out to David Danks, a professor of philosophy and psychology at Carnegie Mellon University. Danks studies both cognitive science and machine learning, so he understands the benefits and drawbacks to human and machine cognition. Danks explained that projection bias is “a very real problem” for autonomous weapons. Even if we could ensure that the autonomous weapon would flawlessly carry out political leaders’ directions, with no malfunctions or manipulation by the enemy, “you still have the problem that that’s a snapshot of the preferences and desires at that moment in time,” he said. Danks explained that people generally do a good job of predicting their own future preferences for situations they have experience with, but for “a completely novel situation . . . there’s real risks that we’re going to have pretty significant projection biases.”

  Again, the Cuban Missile Crisis illustrates the problem. Robert McNamara, who was secretary of defense at the time, later explained that the president’s senior advisors believed that if the U-2 they sent to fly over Cuba were shot down, it would have signaled a deliberate move by the Soviets to escalate. They had decided ahead of time, therefore, that if the U-2 was shot down, the United States would attack:

  [B]efore we sent the U-2 out, we agreed that if it was shot down we wouldn’t meet, we’d simply attack. It was shot down on Friday. . . . Fortunately, we changed our mind, we thought “Well, it might have been an accident, we won’t attack.”

  When actually faced with the decision, however, McNamara and others had a different view. They were unable to accurately predict
their own preferences as to what they would want to do if the plane were shot down. In that example, McNamara and others could reverse course. They had not actually delegated the authority to attack. There was another moment during the Cuban Missile Crisis, however, when Soviet leadership had delegated release authority for nuclear weapons and the world came chillingly close to nuclear war.

  On October 27, the same day that the U-2 was shot down over Cuba and another U-2 flying over the Arctic strayed into Soviet territory, U.S. ships at the quarantine (blockade) line began dropping signaling depth charges on the Soviet submarine B-59 to compel it to surface. The U.S. Navy was not aware that the B-59 was armed with a nuclear-tipped torpedo with a 15-kiloton warhead, about the size of the bomb dropped on Hiroshima. Furthermore, Soviet command had delegated authority to the ship’s captain to use the torpedo if the ship was “hulled” (a hole blown in the hull from depth charges). Normally, authorization was required from two people to fire a nuclear torpedo: the ship’s captain and the political officer. According to Soviet sailors aboard the submarine, the submarine’s captain, Valentin Savitsky, ordered the nuclear torpedo prepared for launch, declaring, “We’re going to blast them now! We will die, but we will sink them all.” Fortunately, the flotilla commander, Captain Vasili Arkhipov, was also present on the submarine. He was Captain Savitsky’s superior, and his approval was also required. Reportedly, Arkhipov alone was opposed to launching the torpedo. As with Stanislav Petrov, the judgment of a single Soviet officer may have once again prevented the outbreak of nuclear war.

  DETERRENCE AND THE DEAD HAND

  Sometimes, there is a benefit to tying one’s hands in a crisis. Strategists have often compared crises to a game of chicken between two drivers, both hurtling toward each other at deadly speed, daring the other to swerve. Neither side wants a collision, but neither wants to be the first to swerve. One way to win is to demonstrably tie one’s hands so that one cannot swerve. Herman Kahn gave the example of a driver who “takes the steering wheel and throws it out the window.” The onus is now entirely on the other driver to avoid a collision.

 
