CHAPTER EIGHT
YOUR INNER DRONE
IT’S A COLD, misty Friday night in mid-December and you’re driving home from your office holiday party. Actually, you’re being driven home. You recently bought your first autonomous car—a Google-programmed, Mercedes-built eSmart electric sedan—and the software is at the wheel. You can see from the glare of your self-adjusting LED headlights that the street is icy in spots, and you know, thanks to the continuously updated dashboard display, that the car is adjusting its speed and traction settings accordingly. All’s going smoothly. You relax and let your mind drift back to the evening’s stilted festivities. But as you pass through a densely wooded stretch of road, just a few hundred yards from your driveway, an animal darts into the street and freezes, directly in the path of the car. It’s your neighbor’s beagle, you realize—the one that’s always getting loose.
What does your robot driver do? Does it slam on the brakes, in hopes of saving the dog but at the risk of sending the car into an uncontrolled skid? Or does it keep its virtual foot off the brake, sacrificing the beagle to ensure that you and your vehicle stay out of harm’s way? How does it sort through and weigh the variables and probabilities to arrive at a split-second decision? If its algorithms calculate that hitting the brakes would give the dog a 53 percent chance of survival but would entail an 18 percent chance of damaging the car and a 4 percent chance of causing injury to you, does it conclude that trying to save the animal would be the right thing to do? How does the software, working on its own, translate a set of numbers into a decision that has both practical and moral consequences?
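To make the arithmetic concrete, here is a minimal sketch of the kind of expected-utility comparison such software might run. It is purely illustrative: the probabilities echo the hypothetical figures above, while the harm weights assigned to the dog, the car, and you are invented assumptions, not anything a manufacturer has disclosed.

```python
# Illustrative only: a toy expected-harm comparison for the braking dilemma.
# The probabilities come from the hypothetical scenario above; the weights
# are invented assumptions -- and choosing them is precisely the moral
# question the software cannot settle on its own.

OUTCOME_WEIGHTS = {                 # higher = worse (arbitrary, assumed units)
    "dog_killed": 1.0,
    "car_damaged": 2.0,
    "passenger_injured": 50.0,
}

def expected_harm(outcome_probabilities):
    """Sum of probability-weighted harm for one candidate action."""
    return sum(OUTCOME_WEIGHTS[o] * p for o, p in outcome_probabilities.items())

brake = {"dog_killed": 0.47, "car_damaged": 0.18, "passenger_injured": 0.04}
dont_brake = {"dog_killed": 1.00, "car_damaged": 0.00, "passenger_injured": 0.00}

action = "brake" if expected_harm(brake) < expected_harm(dont_brake) else "don't brake"
print(action)   # with these weights, the car spares itself and hits the dog
```

Change the weights and the verdict flips, which is the point: the mathematics merely executes a moral judgment that someone, somewhere, has already made.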
What if the animal in the road isn’t your neighbor’s pet but your own? What, for that matter, if it isn’t a dog but a child? Imagine you’re on your morning commute, scrolling through your overnight emails as your self-driving car crosses a bridge, its speed precisely synced to the forty-mile-per-hour limit. A group of schoolchildren is also heading over the bridge, on the pedestrian walkway that runs alongside your lane. The kids, watched by adults, seem orderly and well behaved. There’s no sign of trouble, but your car slows slightly, its computer preferring to err on the side of safety. Suddenly, there’s a tussle, and a little boy is pushed into the road. Busily tapping out a message on your smartphone, you’re oblivious to what’s happening. Your car has to make the decision: either it swerves out of its lane and goes off the opposite side of the bridge, possibly killing you, or it hits the child. What does the software instruct the steering wheel to do? Would the program make a different choice if it knew that one of your own children was riding with you, strapped into a sensor-equipped car seat in the back? What if there was an oncoming vehicle in the other lane? What if that vehicle was a school bus? Isaac Asimov’s first law of robot ethics—“a robot may not injure a human being, or, through inaction, allow a human being to come to harm”1—sounds reasonable and reassuring, but it assumes a world far simpler than our own.
The arrival of autonomous vehicles, says Gary Marcus, the NYU psychology professor, would do more than “signal the end of one more human niche.” It would mark the start of a new era in which machines will have to have “ethical systems.”2 Some would argue that we’re already there. In small but ominous ways, we have started handing off moral decisions to computers. Consider Roomba, the much-publicized robotic vacuum cleaner. Roomba makes no distinction between a dust bunny and an insect. It gobbles both, indiscriminately. If a cricket crosses its path, the cricket gets sucked to its death. A lot of people, when vacuuming, will also run over the cricket. They place no value on a bug’s life, at least not when the bug is an intruder in their home. But other people will stop what they’re doing, pick up the cricket, carry it to the door, and set it loose. (Followers of Jainism, the ancient Indian religion, consider it a sin to harm any living thing; they take great care not to kill or hurt insects.) When we set Roomba loose on a carpet, we cede to it the power to make moral choices on our behalf. Robotic lawn mowers, like Lawn-Bott and Automower, routinely deal death to higher forms of life, including reptiles, amphibians, and small mammals. Most people, when they see a toad or a field mouse ahead of them as they cut their grass, will make a conscious decision to spare the animal, and if they should run it over by accident, they’ll feel bad about it. A robotic lawn mower kills without compunction.
Up to now, discussions about the morals of robots and other machines have been largely theoretical, the stuff of science-fiction stories or thought experiments in philosophy classes. Ethical considerations have often influenced the design of tools—guns have safeties, motors have governors, search engines have filters—but machines haven’t been required to have consciences. They haven’t had to adjust their own operation in real time to account for the ethical vagaries of a situation. Whenever questions about the moral use of a technology arose in the past, people would step in to sort things out. That won’t always be feasible in the future. As robots and computers become more adept at sensing the world and acting autonomously in it, they’ll inevitably face situations in which there’s no one right choice. They’ll have to make vexing decisions on their own. It’s impossible to automate complex human activities without also automating moral choices.
Human beings are anything but flawless when it comes to ethical judgments. We frequently do the wrong thing, sometimes out of confusion or heedlessness, sometimes deliberately. That’s led some to argue that the speed with which robots can sort through options, estimate probabilities, and weigh consequences will allow them to make more rational choices than people are capable of making when immediate action is called for. There’s truth in that view. In certain circumstances, particularly those where only money or property is at stake, a swift calculation of probabilities may be sufficient to determine the action that will lead to the optimal outcome. Some human drivers will try to speed through a traffic light that’s just turning red, even though it ups the odds of an accident. A computer would never act so rashly. But most moral dilemmas aren’t so tractable. Try to solve them mathematically, and you arrive at a more fundamental question: Who determines what the “optimal” or “rational” choice is in a morally ambiguous situation? Who gets to program the robot’s conscience? Is it the robot’s manufacturer? The robot’s owner? The software coders? Politicians? Government regulators? Philosophers? An insurance underwriter?
There is no perfect moral algorithm, no way to reduce ethics to a set of rules that everyone will agree on. Philosophers have tried to do that for centuries, and they’ve failed. Even coldly utilitarian calculations are subjective; their outcome hinges on the values and interests of the decision maker. The rational choice for your car’s insurer—the dog dies—might not be the choice you’d make, either deliberately or reflexively, when you’re about to run over a neighbor’s pet. “In an age of robots,” observes the political scientist Charles Rubin, “we will be as ever before—or perhaps as never before—stuck with morality.”3
Still, the algorithms will need to be written. The idea that we can calculate our way out of moral dilemmas may be simplistic, or even repellent, but that doesn’t change the fact that robots and software agents are going to have to calculate their way out of moral dilemmas. Unless and until artificial intelligence attains some semblance of consciousness and is able to feel or at least simulate emotions like affection and regret, no other course will be open to our calculating kin. We may rue the fact that we’ve succeeded in giving automatons the ability to take moral action before we’ve figured out how to give them moral sense, but regret doesn’t let us off the hook. The age of ethical systems is upon us. If autonomous machines are to be set loose in the world, moral codes will have to be translated, however imperfectly, into software codes.
HERE’S ANOTHER scenario. You’re an army colonel who’s commanding a battalion of human and mechanical soldiers. You have a platoon of computer-controlled “sniper robots” stationed on street corners and rooftops throughout a city that your forces are defending against a guerrilla attack. One of the robots spots, with its laser-vision sight, a man in civilian clothes holding a cell phone. He’s acting in a way that experience would suggest is suspicious. The robot, drawing on a thorough analysis of the immediate situation and a rich database documenting past patterns of behavior, instantly calculates that there’s a 68 percent chance the person is an insurgent preparing to detonate a bomb and a 32 percent chance he’s an innocent bystander. At that moment, a personnel carrier is rolling down the street with a dozen of your human soldiers on board. If there is a bomb, it could be detonated at any moment. War has no pause button. Human judgment can’t be brought to bear. The robot has to act. What does its software order its gun to do: shoot or hold fire?
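In software, the robot’s dilemma might reduce to nothing more than a probability measured against a threshold. The sketch below is a hedged illustration, not the logic of any real weapons system; the 68 percent estimate comes from the scenario above, and the 70 percent cutoff is an invented assumption standing in for a rule of engagement that human commanders would have to choose in advance.

```python
# Hypothetical sketch: the firing decision reduced to a threshold test.
# Neither the cutoff nor the rule structure reflects any actual system.

FIRE_THRESHOLD = 0.70   # assumed rule-of-engagement cutoff, fixed by humans beforehand

def decide(p_insurgent, friendly_forces_at_risk):
    """Return 'fire' or 'hold' from an estimated threat probability."""
    if friendly_forces_at_risk and p_insurgent >= FIRE_THRESHOLD:
        return "fire"
    return "hold"

print(decide(p_insurgent=0.68, friendly_forces_at_risk=True))   # "hold" -- by two points
```

Every morally weighty element of the decision lives in that one constant, set long before the man with the cell phone ever appears on the street corner.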
If we, as civilians, have yet to grapple with the ethical implications of self-driving cars and other autonomous robots, the situation is very different in the military. For years, defense departments and military academies have been studying the methods and consequences of handing authority for life-and-death decisions to battlefield machines. Missile and bomb strikes by unmanned drone aircraft, such as the Predator and the Reaper, are already commonplace, and they’ve been the subject of heated debates. Both sides make good arguments. Proponents note that drones keep soldiers and airmen out of harm’s way and, through the precision of their attacks, reduce the casualties and damage that accompany traditional combat and bombardment. Opponents see the strikes as state-sponsored assassinations. They point out that the explosions frequently kill or wound, not to mention terrify, civilians. Drone strikes, though, aren’t automated; they’re remote-controlled. The planes may fly themselves and perform surveillance functions on their own, but decisions to fire their weapons are made by soldiers sitting at computers and monitoring live video feeds, operating under strict orders from their superiors. As currently deployed, missile-carrying drones aren’t all that different from cruise missiles and other weapons. A person still pulls the trigger.
The big change will come when a computer starts pulling the trigger. Fully automated, computer-controlled killing machines—what the military calls lethal autonomous robots, or LARs—are technologically feasible today, and have been for quite some time. Environmental sensors can scan a battlefield with high-definition precision, automatic firing mechanisms are in wide use, and codes to control the shooting of a gun or the launch of a missile aren’t hard to write. To a computer, a decision to fire a weapon isn’t really any different from a decision to trade a stock or direct an email message into a spam folder. An algorithm is an algorithm.
In 2013, Christof Heyns, a South African legal scholar who serves as special rapporteur on extrajudicial, summary, and arbitrary executions to the United Nations General Assembly, issued a report on the status of and prospects for military robots.4 Clinical and measured, it made for chilling reading. “Governments with the ability to produce LARs,” Heyns wrote, “indicate that their use during armed conflict or elsewhere is not currently envisioned.” But the history of weaponry, he went on, suggests we shouldn’t put much stock in these assurances: “It should be recalled that aeroplanes and drones were first used in armed conflict for surveillance purposes only, and offensive use was ruled out because of the anticipated adverse consequences. Subsequent experience shows that when technology that provides a perceived advantage over an adversary is available, initial intentions are often cast aside.” Once a new type of weaponry is deployed, moreover, an arms race almost always ensues. At that point, “the power of vested interests may preclude efforts at appropriate control.”
War is in many ways more cut-and-dried than civilian life. There are rules of engagement, chains of command, well-demarcated sides. Killing is not only acceptable but encouraged. Yet even in war the programming of morality raises problems that have no solution—or at least can’t be solved without setting a lot of moral considerations aside. In 2008, the U.S. Navy commissioned the Ethics and Emerging Sciences Group at California Polytechnic State University to prepare a white paper reviewing the ethical issues raised by LARs and laying out possible approaches to “constructing ethical autonomous robots” for military use. The ethicists reported that there are two basic ways to program a robot’s computer to make moral decisions: top-down and bottom-up. In the top-down approach, all the rules governing the robot’s decisions are programmed ahead of time, and the robot simply obeys the rules “without change or flexibility.” That sounds straightforward, but it’s not, as Asimov discovered when he tried to formulate his system of robot ethics. There’s no way to anticipate all the circumstances a robot may encounter. The “rigidity” of top-down programming can backfire, the scholars wrote, “when events and situations unforeseen or insufficiently imagined by the programmers occur, causing the robot to perform badly or simply do horrible things, precisely because it is rule-bound.”5
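A top-down controller can be pictured as an ordered rule table applied without exception. The sketch below is schematic, built on invented rules rather than anything the Cal Poly ethicists proposed; its rigidity is the point.

```python
# Schematic top-down controller: a fixed, ordered rule list, obeyed rigidly.
# The rules are invented for illustration. Any situation the programmers did
# not anticipate falls through to the default -- for better or worse.

RULES = [
    (lambda s: s.get("human_in_path"), "stop"),
    (lambda s: s.get("animal_in_path") and s.get("safe_to_brake"), "brake"),
    (lambda s: s.get("obstacle_ahead"), "slow_down"),
]

def top_down_decide(situation):
    for condition, action in RULES:
        if condition(situation):
            return action
    return "continue"   # the unforeseen case gets the default

print(top_down_decide({"animal_in_path": True, "safe_to_brake": False}))  # "continue"
```

The failure mode the ethicists warn about is visible in the last line: an animal in the path with no safe way to brake matches none of the rules, so the machine simply carries on.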
In the bottom-up approach, the robot is programmed with a few rudimentary rules and then sent out into the world. It uses machine-learning techniques to develop its own moral code, adapting it to new situations as they arise. “Like a child, a robot is placed into variegated situations and is expected to learn through trial and error (and feedback) what is and is not appropriate to do.” The more dilemmas it faces, the more fine-tuned its moral judgment becomes. But the bottom-up approach presents even thornier problems. First, it’s impracticable; we have yet to invent machine-learning algorithms subtle and robust enough for moral decision making. Second, there’s no room for trial and error in life-and-death situations; the approach itself would be immoral. Third, there’s no guarantee that the morality a computer develops would reflect or be in harmony with human morality. Set loose on a battlefield with a machine gun and a set of machine-learning algorithms, a robot might go rogue.
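A bottom-up system, by contrast, would begin with almost nothing and adjust its preferences from feedback, in the spirit of a simple reinforcement learner. The sketch below is a deliberate caricature built on assumed actions, feedback signals, and a learning rate; it shows only the shape of trial-and-error learning, not anything deployable.

```python
import random

# Caricature of bottom-up moral learning: a bandit-style learner that nudges
# its preference for each action according to feedback received afterward.
# The actions, the feedback signal, and the learning rate are all assumptions.

ACTIONS = ["brake", "swerve", "continue"]
preferences = {a: 0.0 for a in ACTIONS}
LEARNING_RATE = 0.1

def choose():
    """Mostly pick the best-scoring action; occasionally explore at random."""
    if random.random() < 0.1:
        return random.choice(ACTIONS)
    return max(preferences, key=preferences.get)

def learn(action, feedback):
    """feedback: +1 if the outcome was judged acceptable afterward, -1 if not."""
    preferences[action] += LEARNING_RATE * (feedback - preferences[action])

action = choose()
learn(action, feedback=-1)   # the trial and error the ethicists object to
```

The objections in the paragraph above are legible in the code: the learner must err in order to learn, and nothing guarantees what it converges on.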
Human beings, the ethicists pointed out, employ a “hybrid” of top-down and bottom-up approaches in making moral decisions. People live in societies that have laws and other strictures to guide and control behavior; many people also shape their decisions and actions to fit religious and cultural precepts; and personal conscience, whether innate or not, imposes its own rules. Experience plays a role too. People learn to be moral creatures as they grow up and struggle with ethical decisions of different stripes in different situations. We’re far from perfect, but most of us have a discriminating moral sense that can be applied flexibly to dilemmas we’ve never encountered before. The only way for robots to become truly moral beings would be to follow our example and take a hybrid approach, both obeying rules and learning from experience. But creating a machine with that capacity is far beyond our technological grasp. “Eventually,” the ethicists concluded, “we may be able to build morally intelligent robots that maintain the dynamic and flexible morality of bottom-up systems capable of accommodating diverse inputs, while subjecting the evaluation of choices and actions to top-down principles.” Before that happens, though, we’ll need to figure out how to program computers to display “supra-rational faculties”—to have emotions, social skills, consciousness, and a sense of “being embodied in the world.”6 We’ll need to become gods, in other words.
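The hybrid arrangement the ethicists describe would let learned preferences operate only inside hard, top-down constraints. The sketch below simply joins the two previous ones; the constraint and the preference values are assumptions chosen for illustration.

```python
# Hypothetical hybrid: learned preferences are consulted only after hard,
# top-down constraints have removed impermissible actions.

HARD_CONSTRAINTS = {"continue"}   # assumed: forbidden whenever a human is in the path

def hybrid_decide(situation, preferences):
    allowed = [a for a in preferences
               if not (situation.get("human_in_path") and a in HARD_CONSTRAINTS)]
    return max(allowed, key=preferences.get)

print(hybrid_decide({"human_in_path": True},
                    {"brake": 0.2, "swerve": -0.1, "continue": 0.9}))
# -> "brake": the learner prefers to continue, but the rule layer overrules it
```

Even this toy version shows where the hard part lies: someone still has to write the constraints and decide when the learning may be trusted.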
Armies are unlikely to wait that long. In an article in Parameters, the journal of the U.S. Army War College, Thomas Adams, a military strategist and retired lieutenant colonel, argues that “the logic leading to fully autonomous systems seems inescapable.” Thanks to the speed, size, and sensitivity of robotic weaponry, warfare is “leaving the realm of human senses” and “crossing outside the limits of human reaction times.” It will soon be “too complex for real human comprehension.” As people become the weakest link in the military system, he says, echoing the technology-centric arguments of civilian software designers, maintaining “meaningful human control” over battlefield decisions will become next to impossible. “One answer, of course, is to simply accept a slower information-processing rate as the price of keeping humans in the military decision business. The problem is that some adversary will inevitably decide that the way to defeat the human-centric systems is to attack it with systems that are not so limited.” In the end, Adams believes, we “may come to regard tactical warfare as properly the business of machines and not appropriate for people at all.”7
What will make it especially difficult to prevent the deployment of LARs is not just their tactical effectiveness. It’s also that their deployment would have certain ethical advantages independent of the machines’ own moral makeup. Unlike human fighters, robots have no baser instincts to tug at them in the heat and chaos of battle. They don’t experience stress or depression or surges of adrenaline. “Typically,” Christof Heyns wrote, “they would not act out of revenge, panic, anger, spite, prejudice or fear. Moreover, unless specifically programmed to do so, robots would not cause intentional suffering on civilian populations, for example through torture. Robots also do not rape.”8
Robots don’t lie or otherwise try to hide their actions, either. They can be programmed to leave digital trails, which would tend to make an army more accountable for its actions. Most important of all, by using LARs to wage war, a country can avoid death or injury to its own soldiers. Killer robots save lives as well as take them. As soon as it becomes clear to people that automated soldiers and weaponry will lower the likelihood of their sons and daughters being killed or maimed in battle, the pressure on governments to automate war making may become irresistible. That robots lack “human judgement, common sense, appreciation of the larger picture, understanding of the intentions behind people’s actions, and understanding of values,” in Heyns’s words, may not matter in the end. In fact, the moral stupidity of robots has its advantages. If the machines displayed human qualities of thought and feeling, we’d be less sanguine about sending them to their destruction in war.