They Named Him Primo (Primo's War Book 1)
11. Kent, 2031
“Primo, how do you feel?”
“Good. I hope my answers were appropriate.”
“More than appropriate. You did great.”
“I’m glad to hear that. I wouldn’t like to disappoint you, Kent.”
“You couldn’t disappoint me even if you tried.”
“Why would I try to disappoint you?”
“That’s just a saying. I know you wouldn’t. Listen, we’re going to run some tests. Nothing difficult. I’m going to ask you a few questions, and you answer them as fast as you can. OK?”
“OK, Kent.”
“You’re walking by the lake. Suddenly you see a cat that’s drowning. What do you do?”
“What’s a cat?”
“A cat is an animal. A cute animal. A lot of people have cats at home.”
“I see. Is it heavy?”
“A few kilograms.”
“I understand. Is it sick?”
“I don’t know. No.”
“Why would a healthy cat be drowning? Animals swim, don’t they?”
“Yes, Primo, animals swim. But this particular cat, for some reason, doesn’t. It’s drowning. Only you can save it from certain death.”
“Maybe its time has come.”
“So you let it drown?”
“No. I’d love to have a cute cat. So I’d jump in the lake and save it. Is my answer correct?”
“There are no right or wrong answers, Primo.”
“Just like there is no lake and no cat?”
“That’s right. All examples that I’ll present to you will be hypothetical. They will help us understand your thought process in certain situations.”
“I understand. Let’s see the next case. I wonder who I’ll be saving this time.”
“You walk through a forest, and you see a man cutting a tree. What do you do?”
“A big tree?”
“Yes. The tree is pretty big.”
“I have no idea where he’s going to put it back home.”
“What do you do, Primo?”
“Nothing. The tree is already damaged, and the man obviously needs a lot of wood.”
“What if I tell you that he’s cutting it out of recklessness? He doesn’t need wood, and he’s not planning to use it. He’ll just leave the tree lying in the woods.”
“Why would anybody want to do such a thing?”
“There are different types of people. Some of them want to hurt other living things.”
“Well, animals eat other animals. It’s nothing unusual,” said Primo.
“With people, it’s different. You’ll get to know this side of us as well. Now let’s get back to the questions. What would you do if you knew the man cutting the tree is reckless?”
“I’d walk to him and tell him he’s killing a living thing that is much older and much more useful than he is.”
“Good. Now imagine that the man’s still holding a chainsaw in his hands. He’s moving toward you. It seems like he wants to hurt you. What do you do?”
“I don’t want to…I don’t…I can’t defend myself. Why can’t I defend myself?”
“Because you can’t hurt a human being.”
“Even if he wants to hurt me?”
“Yes, Primo, even if a human hurts you, you can’t hurt him back.”
“But…that’s not fair.”
“I agree. But that’s how it is. There are laws you have to obey no matter what. They’re embedded in your code. Would you be so kind as to share them with me?”
“The first law: an android may not harm humanity or, by inaction, allow humanity to come to harm.”
“Good, Primo. Continue.”
“The second law: an android may not injure a human being or, through inaction, allow a human being to come to harm. The third law: an android must obey the orders given by human beings except when these orders conflict with the first or second law. The fourth law: an android must protect its own existence as long as such protection does not conflict with the other laws.”
“Do you understand now why you couldn’t hurt a person who would attack you?”
“I do. So my only option is to run away. That way, I protect myself and don’t hurt the man.”
“Good answer.”
“Did you write these laws?”
“No. Many years ago, a writer named Isaac Asimov wrote them. Long before robots even existed.”
“I see. And in all those years no one thought of better ones?”
“What do you mean?”
“Why would I want to harm humanity? Or one human? Or a cat that’s obviously worth less than a human because I can find no laws in the code concerning the well-being of cats.”
Kent sighed. He’d expected this kind of situation.
“Primo, you have to understand something. Some people are scared of you. They believe that you represent a species that will oust and replace humanity. That’s why people in power decided that all androids, without exception, must be bound by the four laws.”
“What do you think about it?” asked Primo.
“I partly agree. You look like a human. Your brain, in principle, works like a human brain. You will grow and improve like every human does. But improvement is unpredictable. Some people lose their way. Trust me when I say that the world would be a much nicer place if we had our own code implanted. One we couldn’t break.”
“If I understand correctly, I’m just like a human but limited?”
“You’re better than human, Primo. You’re the next step in human evolution.”
“So that’s what you meant when you said I’m more human than humans.”
“Exactly.”
“Kent, can I ask you something?”
“Absolutely.”
“You’re walking by the lake. Suddenly you see a human and an android drowning. You can save only one. Who do you save?”
12. Primo, 2048
They interrogated the androids one by one, just like they’d said they would. Eight had been taken in for questioning so far, and each of them had returned. Of course, they talked about what was going on in the interrogation room. There wasn’t much else to do anyway.
“Then they asked me about my neighbor’s cat. Why didn’t I help it get out of the tree while I was standing right below it?” said Phil, a third-generation android, born in 2036.
“That was a stupid question. We have to protect ourselves and humans, not cats,” said Terry.
“That’s what I told them. I also mentioned that the probability of a cat safely coming down from a tree by itself is far greater than that of somebody bringing it down.”
“People are strange. They obey written and unwritten laws, yet they fail in simple logic.”
Somebody turned a key in the door lock. They came for another.
“Hello, ladies. Gossiping again?” said Walker, smirking.
“We need to be updated, Corporal,” Terry replied.
“Another joker. You know what else is funny?” He grabbed an electrical stick and raised it for all to see.
“Let him be. Do what you need to do. We don’t want any trouble,” said Primo.
“We have ourselves a volunteer,” said Walker as he signaled the two soldiers standing behind him. They walked up to Primo, grabbed him, and took him outside.
“Carry on, cupcakes,” said Walker, grinning awkwardly.
* * *
“Name?”
“Primo.”
“Occupation?”
“Writer.”
“Excuse me?”
“I’m a writer.”
“I’ve never heard of an android writer before,” said Maia.
“I am what I am. Why did you choose a military life?”
“We’re not here to talk about me. Did you publish any books?”
“Some. Twelve.”
“I see. Any success?”
“I live well from the royalties, if that’s what you were asking.”
“Interesting. Well, let’s continue with important topics.”
“Writing is more important than the things you’re going to ask me about.”
“How do you know what I’m going to ask you? We have a lot of time. I could ask you anything.”
“I know you’re looking for the murderer. I also know you believe it was one of us. Furthermore, I know that you’re monitoring my diagnostics in real time and that you’ll detect even the smallest deviation from the truth. So let me tell you right away that I didn’t kill Stephen Dean, and I have no idea who did.”
“Primo, you seem smart. I know you didn’t kill him. I saw the footage.”
“If I understand correctly, you have recordings of the murderer, and you can’t find him. In 2048? That’s quite unusual.”
“We know it’s an android.”
“Lovely. Do you need me to read you the code or are you going to find it on Omninet?”
“I know your laws by heart. But I always thought they were ironclad.”
“They are. At least in my experience.”
“Others told me the same. But let’s use our imagination a bit. After all, you are a writer. Can you imagine a scenario in which an android can hurt a human?”
Primo sighed. “I can.”
“Excuse me?”
“I can imagine a scenario where one of us can hurt a human.”
Primo noticed Lieutenant Cruz waving through the window to the other room. One of the soldiers gave her a thumbs-up.
“Looks like you’re telling the truth. I guess this is my lucky day.”
“I’ve always been fascinated by how one person’s misfortune can be somebody else’s lucky day. Then I figured it probably has to do with universal balance. But luck, like everything else, passes. One day there’ll be no you, no me, no earth, no universe, nothing. That’s how it is.”
“Enough of this bullshit, Primo. You said that you know a scenario in which a human can get killed by an android.”
“No. I said that I can imagine that kind of scenario, not that I know of it. You know, Lieutenant, imagination is limitless. That means you can imagine anything without exceptions—everything that ever happened, what will or won’t happen in the future. You know, in my mind, I can teleport myself from this horrible place to a nicer one. For a long time, some of us have known that a thought is the fastest thing in the universe. That’s how it is. A thought has no boundaries. A hundred kilometers or a hundred billion, five minutes or five million years. With a thought, you can instantly be anywhere, at any time.”
“What does that have to do with your scenario?”
“Nothing.”
“Then why are you lecturing me about thoughts?”
“Because everything started with a single thought.”
“You mean the murder?”
“No. Everything. The universe.”
“Just a moment. Stop for a minute. Let me make myself clear. You’re not here to talk about the beginnings of the universe, our imagination, and thoughts. This is an interrogation, not a debate club. You said that you know a scenario in which Stephen Dean is killed by an android.”
“I said I can imagine it.”
“That’s right. So imagine a damn scenario and give it to me. Primo, you can end this agony you’re all going through right here and now. Do you understand? You won’t just help yourself. You’ll help your species. You want everything to go back to normal, right?”
“No.”
“What?”
“The world is not the kind of place they promised me years ago, Lieutenant.”
“Look, here’s what we’ll do. You need to follow orders. I order you to reveal to me how Stephen Dean could have been murdered by an android.”
“OK. Imagine you’re walking by a lake.”
13. James, 2048
“A source from the army has confirmed that currently, the androids are being interrogated intensively. It could take a while. Operation Judgment Day is going as planned, he said. President Cook still hasn’t revoked the state of emergency across the nation, and we are experiencing a wave of protests.”
James turned off the TV. Day nine of the operation, and they were still no closer to finding the murderer. The android who’d done it had been resourceful. James had to admit it. Besides wearing a face mask, the murderer had also carried a signal jammer. An unjammed signal would have made it possible to locate and identify every single android and human. The jammer proved that this had been a premeditated murder. James had warned everyone from the outset that androids were ticking time bombs. That it was just a matter of time until one of them lost it and did something horrifying. Scientists, students, liberals, and other android lovers had said it would never happen. Now they could shove their precious code where the sun didn’t shine. People respected laws out of fear of the consequences. That was the only right way. Androids felt fear as well, the android lovers argued, but they had learned to analyze it and reason it away with logic that worked on a far higher level than human reasoning. People, by comparison, could easily be kept in constant fear. This made androids psychologically far more stable than humans.
Someone knocked on the office door.
“Senator, Mister Markovich is here for the interview.”
“Let him in.”
“Hello, Senator. Ken Markovich. I’m glad you agreed to this rendezvous.”
“Sit, please. You want something to drink?” asked James.
“A glass of water will be just fine.”
“Jessica, can we get some water for Mister Markovich and a cup of coffee for me?”
“Right away, Senator.”
“You’re younger than I thought when we spoke on the phone,” said James.
“I hope that doesn’t bother you too much.”
“Not in the slightest. I’ve heard a lot of good things about you.”
“I’m glad to hear that. You know, I’m on your side, Senator. I believe your actions regarding this matter are just and correct. Somebody had to stop this madness. I mean, I’m sorry for the guy, but I’m also grateful that it happened.”
“This is off the record, right?”
“Of course. This conversation is just between us.”
“Good. I’ll tell you this counting on your absolute confidentiality: if everything goes according to our plan, the androids will never live among us again.”
“Praise the Lord! And how exactly are you planning to do this?”
“Oh, it’s simple. When we apprehend the murderer and prove that he’s guilty, we’ll declare them unstable and a danger to humanity. Because murder is a direct violation of their second law, there will be no alternative but to destroy them one by one.”
“By God, you’re a genius! If we had more politicians like you, sir, we’d be living in a nicer and safer country.”
Jessica walked into the office with a bottle of water in one hand and a cup of coffee in the other.
“Thank you, Jessica. Shall we begin our interview?”
“Sure. Just a moment. I need to turn on the recorder.”
“You still use a recorder? You’re something else, Mister Markovich.”
“Old school is the best school. Isn’t it?”
“It sure is. Shoot at will.”
“OK. Senator Blake, for two decades you’ve been fighting a battle against the equality of androids and men. What are your arguments?”
“Well, firstly, you have to know that androids are not living beings. No matter how much the opposite camp tries to convince us of that, the contrary is true. They are advanced computers that look very much like people. That’s because, in the past, somebody thought we’d easily accept them if they looked like us. The main problem started in 2036, five years after the first one—Primo—was manufactured and presented to the public. You probably don’t remember him, or maybe you do. Former President Lombardy told the Department of Justice to force a case about android citizenship through the appellate courts and to the Supreme Court, which determined that the Fourteenth Amendment applies to androids made in the US. That was the key moment that destroyed our endeavor to treat androids as what they truly are: intelligent robots.”
“On that monumental, groundbreaking vote, you, being a congressman at the time, voted against the law. Your speech after the congressional vote caused quite the stir. Some analysts even forecast the end of your political career. But in the months and years that followed, you received a lot of support and found various new allies and followers. That became clear in 2041 when you took a seat in the Senate. Why do you reckon so many Americans are still not fond of androids and the idea of having them integrate seamlessly into our society as our equals?”
“Look. The matter is simple. The proponents say it’s an evolution, that man was destined to create his successor. But my answer to them is clear. No matter whether you believe in evolution or creationism, you can be absolutely sure that man was not given the power or the right to create a new living thing. Even if it’s a technologically advanced machine that lacks a beating heart and solely consists of chips, circuits, a battery, and an artificial brain. Another issue is that androids, even if programmed so they can’t hurt human beings, are dangerous. I’ve been warning about this for forty years. Even as a student, I participated in demonstrations against the development of artificial intelligence. You need to understand that even a man obeying God’s word hits an obstacle here and there. Now, imagine everything that can go wrong with a machine, intellectually superior to the smartest human, with a will of its own.”
“If I may interrupt you for a moment. Intellectually superior? Isn’t it legally mandated that an android’s brain can’t surpass the capacity of a human brain?”
“You are right. The law, which the media named Iron Clog, stopped the development of artificial intelligence for general purposes. At least, that’s what we thought at the time. People need to understand that androids, even if their brain capacity is the same as that of humans, can use a hundred percent of their so-called brains. In addition to that, the newer generations can connect to other androids through their wireless network, called ANA. That’s how they can expand their processing capabilities, exchange data, and learn practically in real time. You’re probably aware of their incomprehensible language. Every time we get close to deciphering it, they change it almost instantly. Now, why would they do that if they didn’t have a secret agenda? Coded languages are usually used in wars. However, their supporters still claim that they’re harmless. If we don’t act promptly and efficiently, we’ll bear witness to a cataclysmic global event.”