Asimov’s Future History Volume 11

by Isaac Asimov


  But there were other avenues open to him besides the arrest of Fredda Leving.

  “We can’t take Leving in, Donald, much as I’d like to,” Kresh said at last. “Not with the Governor and Welton with her. But the moment this damned talk is over with, we are picking up Terach and Anshaw. It’s time we sweated those two a little.”

  As for Fredda Leving, maybe he could not arrest her tonight. But he had no intention of making her life easy. He glared up at the stage, waiting for the curtain to open.

  At long last, and far too soon, Fredda could hear the sound she had been waiting for – and dreading. The gong sounded, and the audience began to settle down, grow quiet. It was about to begin. A stagehand robot gave Governor Grieg a hand signal and he nodded. He came over to Fredda and touched her forearm. “Ready, Doctor?”

  “What? Oh, yes, yes of course.”

  “Then I think we should begin.” He guided her to a seat behind a table at one side of the stage, between Tonya Welton on one side, and Gubber and Jomaine on the other.

  All had their attendant robots hovering nearby. Gubber’s old retainer Tetlak, with him since forever. Jomaine’s latest updated, upgraded unit. What was the name? Bertram? Something like that. The joke around the lab was that he changed his personal robot more frequently than he changed underwear. Tonya Welton with Ariel.

  A strange, slight irony there. Tonya was here on Inferno to preach against reliance on robots, and here she was with the robot Fredda had given her in happier days. Meanwhile, she, Fredda, had no robot with her at all.

  With a start, she realized that the curtain had opened, that the audience was applauding the Governor politely – with a few boos from the back of the house – and that the Governor was well launched into his introduction of her. In fact, he was finishing up. Hells and heavens! How could her mind wander that much? Was it some aftereffect of the injury, or the treatment, or just a subconscious way of dealing with stage fright?

  “... not expect you to agree with all she has to say,” Governor Grieg was saying. “There is much that I do not agree with myself. But I do believe that hers is a voice to which we must listen. I am convinced that her ideas – and the news she will relate – will have tremendous repercussions for us all. Ladies and gentlemen, please welcome Dr. Fredda Leving.” He turned toward her, smiling, leading the applause.

  Not quite sure if it would not be wiser to cut and run for the stage wings and the side exit, Fredda stood up and walked toward the lectern. Chanto Grieg retreated toward the table at the rear of the stage and took a seat next to Jomaine.

  She was there, all alone. She stared out into the sea of faces and asked herself what madness had brought her to this place. But here she was, and there was nothing to do but move forward.

  She cleared her throat and began to speak.

  Chapter 14

  “THANK YOU, MY friends,” Fredda began. “Tonight I intend to present an analysis of the Three Laws. However, before we launch into a detailed law-by-law examination, I think it would be wise to review some background information and focus our historical perspective.

  “In my previous lecture, I presented arguments intended to establish that humans hold robots in low regard, that misuse and abuse of robots is degrading to both us and them, that we humans have allowed our own slothful reliance on robots to rob from us the ability to perform the most basic tasks. There is a common thread that holds all these problems together, a theme that runs through them all.

  “It is the theme, ladies and gentlemen, of the Three Laws. They are at the core of all things involving robotics.”

  Fredda paused for a moment and looked out over the audience, and happened to catch Alvar Kresh’s eye in the first row. She was startled to see the anger in his face. What had happened? Kresh was a reasonable man. What could have angered him so? Had some piece of news come to him? That possibility put a knot in her stomach. But never mind. Not now. She had to go on with the lecture.

  “At the beginning of my previous lecture, I asked, ‘What are robots for?’ There is a parallel question: ‘What are the Three Laws for?’ What purpose are they meant to serve? That question startled me when I first asked it of myself. It was too much like asking, ‘What are people for?’ or ‘What is the meaning of life?’ There are some questions so basic that they can have no answer. People just are. Life just is. They contain their own meaning. We must make of them what we can. But as with robots themselves, the Laws, I would remind you once again, are human inventions, and were most certainly designed with specific purposes in mind. We can say what the Three Laws are for. Let us explore the question.

  “Each of the Laws is based on several underlying principles, some overt and some not immediately evident. The initial principles behind all three Laws derive from universal human morality. This is a demonstrable fact, but the mathematical transformations in positronic positional notation required to prove it are of course not what this audience wishes to hear about. There are many days when I don’t wish to hear about such things myself.”

  That line got a bit of a laugh. Good. They were still with her, still willing to listen. Fredda glanced at her notes, took a slightly nervous sip of water, and went on. “Suffice to say that such techniques can be used to generalize the Three Laws such that they will read as follows: One, robots must not be dangerous; two, they must be useful; and three, they must be as economical as possible.

  “Further mathematical transformation into the notation used by sociological modelers will show that this hierarchy of basic precepts is identical to a subset of the norms of all moral human societies. We can extract the identical concepts from any of the standard mathematically idealized and generalized moral social codes used by sociological modelers. These concepts can even be cast into a notation wherein each higher law overrides the ones below it whenever two come into conflict: Do no harm, be useful to others, do not destroy yourself.

  “In short, the Three Laws encapsulate some ideals of behavior that are at the core of human morality, ideals that humans reach for but never grasp. That all sounds very comfortable and reassuring, but there are flaws.

  “First, of necessity, the Three Laws are set down, burned into the very core of the positronic brain, as mathematical absolutes, without any grey areas or room for interpretation. But life is full of grey areas, places where hard-and-fast rules can’t work well, and individual judgment must serve instead.

  “Second, we humans live by far more than three laws. Turning again toward results produced by mathematical modeling, it can be shown that the Three Laws are equivalent to a very good first-order approximation of idealized moral human behavior. But they are only an approximation. They are too rigid, and too simple. They cannot cover anything like the full range of normal situations, let alone serve in unusual and unique circumstances where true independent judgment must serve. Any being constrained by the Three Laws will be unable to cope with a wide range of circumstances likely to occur during a lifetime of contact with the available universe. In other words, the Three Laws render a being incapable of surviving as a free individual. Relatively simple math can demonstrate that robots acting under the Three Laws, but without ultimate human control, will have a high probability of malfunctioning if exposed to human-style decision situations. In short, the Three Laws make robots unable to cope unaided in an environment populated with anything much beyond other robots.

  “Without the ability to deal in grey areas, without the literally thousands of internalized laws and rules and guidelines and rules of thumb that guide human decision making, robots cannot make creative decisions or judgment calls even remotely as complex as those we make.

  “Aside from judgment, there is the problem of interpretation. Imagine a situation where a criminal is firing a blaster at a police officer. It is a given that the police officer should defend him- or herself, even to the use of deadly force. Society entitles – even expects – the police officer to subdue or even kill his attacker, because society values its own protection, and the officer’s life, over the criminal’s life. Now imagine that the officer is accompanied by a robot. The robot will of course attempt to shield the policeman from the criminal – but will likewise attempt to protect the criminal from the policeman. It will almost certainly attempt to prevent the police officer from firing back at the criminal. The robot will attempt to prevent harm to either human. The robot might step into the police officer’s field of fire, or let the criminal escape, or attempt to disarm both combatants. It might attempt to shield each from the other’s fire, even if that results in its own destruction and the immediate resumption of the gun battle.

  “Indeed, we have run any number of simulations of such encounters. Without the robot present, the police officer can most often defeat the criminal. With a robot along, here are the outcomes more likely than the police winning: death of police officer and criminal with destruction of robot; death of police officer with destruction of robot; destruction of robot coupled with escape of criminal; death of criminal and/or police officer with robot surviving just long enough to malfunction due to massive First Law/First Law and First Law/Second Law conflicts. In short, inject a robot into such a situation, and the odds are excellent that you will end up with a disaster.

  “Theoretically it is possible for a robot to judge the situation properly, and not mindlock over being complicit in the death of the criminal. It must be able to decide that both the immediate and long-term general good are served if the police officer wins, and that coming to the assistance or defense of a criminal prepared to take the life of a peace officer is ultimately self-defeating, because the offender will almost certainly attack society again in other ways, if he or she is permitted to survive. However, in practice, all but the most sophisticated robots, with the most finely tuned and balanced potentials of First Law, will have no hope at all of dealing appropriately with such a situation.

  “All the laws and rules we live by are subject to such intricacies of interpretation. It is just that we humans are so skilled, so practiced, in threading our ways through these intricacies that we are unaware of them. The proper way to enter a room when a party is in progress in midafternoon, the correct mode of address to the remarried widow of one’s grandfather, the circumstances under which one may or may not cite a source in scholarly research – we all know such things so well we are not even aware that we know them. Nor is such practiced knowledge limited to such trivial issues.

  “For example, it is a universal of human law that murder is a crime. Yet self-defense is in all places a legitimate defense against the accusation of murder, negating the crime and condoning the act. Diminished capacity, insanity defenses, mitigating circumstances, the gradations of the crime of murder from manslaughter to premeditated murder – all these are so many shades of grey drawn on the black and white of the law against murder. As we have seen with my example of the policeman and the criminal, no such gradations appear in the rigidity of the First Law. There is no room for judgment, no way to account for circumstances or allow for flexibility. The closest substitute for flexibility a robot may have is an adjustment in the potential between the First, Second, and Third Laws, and even this is only possible over a limited range.

  “What are the Three Laws for? To answer my own question, then, the Three Laws are intended to provide a workable simulation of an idealized moral code, modified to ensure the docility and subservience of robots. The Three Laws were not written with the intention of modifying human behavior. But they have done just that, rather drastically.

  “Having touched on the intent of the Laws, let us now look at their history.

  “We all know the Three Laws by heart. We accept them the way we accept gravity, or thunderstorms, or the light of the stars. We see the Three Laws as a force of nature, beyond our control, immutable. We think it is pointless to do anything but accept them, deal with the world that includes them.

  “But this is not our only choice. I say again, the Three Laws are a human invention. They are based in human thought and human experience, grounded in the human past. The Laws are, in theory at least, no less susceptible to examination and no more immutable in form than any other human invention – the wheel, the spaceship, the computer. All of these have been changed – or supplanted – by new acts of creativity, new inventions.

  “We can look at each of these things, see how they are made – and see how we have changed them, see how we update them, adjust them to suit our times. So, too, if we choose, can we change the Three Laws.”

  There was a collective gasp from the audience, shouts from the back of the room, a storm of boos and angry cries. Fredda felt the shouts and cries as if they were so many blows struck down on her body. But she had known this was coming. She had braced herself for it, and she responded.

  “No!” she said. “This is not our way. You were all invited here to join in an intellectual discussion. How can we tell ourselves that we are the most advanced society in the history of human civilization, if the mere suggestion of a new idea, a mild challenge to the orthodoxy, turns you into a mob? You are responding as if my words were an assault on the religion you pretend not to have. Do you truly believe that the Three Laws are preordained, some sort of magical formula woven into the fabric of reality?” That got to them. Spacers prided themselves on their rationality. At least most of the time. There were more shouts, more cries, but at least some of the audience seemed ready to listen. Fredda gave them another moment to settle down and then continued.

  “The Three Laws are a human invention,” Fredda said again. “And as with all human creations, they are a reflection of the time and the place where they were first made. Though far more advanced in many respects, the robots we use today are in their essentials identical to the first true robots made untold thousands of years ago. The robots we Spacers use today have brains whose basic design has remained unchanged from the days before humanity first entered space. They are tools made for a culture that had vanished before the first of the great underground Cities of Earth were built, before the first Spacers founded Aurora.

  “I know that sounds incredible, but you need not take my word for it. Go look for yourself. If you research the dimmest recesses of the past, you will see it is so. Do not send your robots to find out for you. Go to your data panels and look for yourself. The knowledge is there. Look at the world and the time in which robots were born. You will see that the Three Laws were written in a very different time from ours.

  “You will find repeated references to something called the Frankenstein Complex. This in turn is a reference to an ancient myth, now lost, wherein a deranged magician-scientist pulled parts from the dead bodies of condemned criminals and put them back together, reanimating the rotting body parts to create a much-feared monster. Some versions of the myth report the monster as actually a kind and gentle soul; others describe the monster as truly fierce and murderous. All versions agree that the monster was feared and hated by practically everyone. In most variants of the story, the creature and its creator are destroyed by a terrorized citizenry, who learn to be on the lookout for the inevitable moment when the whole story would be told again, when another necromancer would rediscover the secret of bringing decayed flesh back to life.

  “That monster, ladies and gentlemen, was the popular mythic image of the robot at the time when the first actual robots were built. A thing made out of rotted, decayed human flesh, torn from the bodies of the dead. A perverted thing born with all the lowest and most evil impulses of humanity in its soul. The fear of this imaginary creature, superimposed on the real-life robots, was the Frankenstein Complex. I know it will be impossible to believe, but robots were seen not as utterly trustworthy mechanical servants, but as so many potential menaces, fearful threats. Men and women would snatch up their children and run away when robots – true robots, with the Three Laws ingrained in their positronic brains – came close.”

  More mutterings of disbelief from the audience, but they were with her now, enthralled by the bizarre and ancient world she was describing. She was telling them of a past almost beyond their imagining, and they were fascinated. Even Kresh, there in the front row, seemed to have lost some of his ferocity.

  “There is more,” Fredda said. “There is much more that we need to understand about the days when the Laws were written. For the first true robots were built in a world of universal fear and distrust, when the people of Earth found themselves organized into a handful of power blocs, each side armed with enough fearsome weapons to erase all life from the planet, each fearing one of the others would strike first. Ultimately the fact of the weapons themselves became the central political issue of the time, pushing all other moral and philosophical differences to one side. In order to keep its enemies from attacking, each side was obliged to build bigger, faster, better, stronger weapons.

  “The question became not whose cause was just, but who could make the more fearsome machines? All machines, all technologies, came to be regarded as weapons first and tools second. Picture, if you will, a world where an inventor steps back from her lab bench and, as a matter of routine, asks not How can this new thing be useful? but instead, How can this best be used to kill my enemies? Whenever possible, machines and technology were perverted into tools of death, warping society in endless ways. The first of the great underground Cities of Earth were one heritage of this period, designed not for utility and efficiency, but as a protection against the horrifying nuclear bombs that could destroy a surface city in the blink of an eye.

  “At the same time as this mad, paranoid arms race, just as this Frankenstein Complex was in full flower, society was making its first steps toward the concept of modern automation, and the transition was not a pleasant one. At that time, people worked not because they wished to do so, or to make themselves useful, or to answer their creative instincts. They worked because they had to do so. They were paid for their labor, and it was that pay that bought the food they ate and put the roof over their heads. Automatic machines – robots among them – were taking over more and more jobs, with the result that there was less and less work – and thus less and less pay – for the people. The robots could create new wealth, but the impoverished people could not afford to buy what the robots – owned by the rich – created. Imagine the anger and resentment you would feel against a machine that stole the food from your table. Imagine the depth of your anger if you had no way to stop that theft.
