They Named Him Primo (Primo's War Book 1)
“What kind of event are you talking about?”
“A war, of course. A never-ending war between men and robots. Everything that my adherents and I are doing has a single goal: to prevent it from happening. It started with a murder. You’ve probably heard how World War I started. Gavrilo Princip was the killer. Today we have another murderer, and he also has a name. We have to do everything in our power to prevent him from making it into the history books. Some people think these measures are too drastic. But there’s no doubt in my mind that we did the right thing.”
“So what you’re saying is that you are preventing World War III?”
“I don’t know if I’d call it a world war.”
“What would you call it?”
“I don’t have a proper name for it. But I’m sure that somebody can find a catchy name.”
“So what you’re saying is that the threat is gone? You don’t fear an android uprising?”
“Of course we do; that’s why we’ve passed these draconian measures. Nothing can be left to chance. Our soldiers are prepared for anything. They have orders to shoot if the rules are broken. The matter is very serious. Some people say we’re committing genocide. I even saw a European newspaper with the front-page headline HOLOCAUST 2.0. Let me be very clear: these are malicious rumors. Our only aim is to protect humanity against an imminent threat. If we find proof that no such threat exists, then we’ll let them go.”
“And if it turns out they’re dangerous?”
“Ask yourself the same question. Do you want to live in a world where dangerous, brilliant machines are roaming freely? A world where you have to constantly look over your shoulder, because you don’t know where the danger lurks? A world governed by the law of the jungle, where the jungle is smarter than you? I wouldn’t want to be the one rolling the dice, hoping for a good outcome. History has taught us that it isn’t the smartest move. Do you know who wins a war? The side that can hit its opponent hard and unexpectedly. And that’s exactly what we’ve done. People first. That’s our motto, and we’re going to live by it until our last breath.”
14. Kent, 2031
“You should have heard him. It was unbelievable. I haven’t had a conversation like that in a long time,” said Kent.
“I see you’re completely ecstatic. But try to eat something before you get back to work,” said Lucy.
He couldn’t follow her well-intentioned advice. His body was flooded with adrenaline that overrode any feeling of hunger. Primo was a lot more than they’d expected. After three days of testing, it had become clear that he could learn extremely rapidly and solve problems that Kent couldn’t. Primo’s brain created new synaptic connections between neurons much as a human brain does. That scientific breakthrough in the development of artificial brains had been Kent’s work. The colleagues who had preceded him had laid the groundwork. Some people had said they were playing with fire. That they would reach a point of no return: a technological singularity in which artificial intelligence would surpass that of humans. But Kent knew that the point of no return had already been reached. Primo was just the first fruit on a tree that had been planted decades ago.
“I read an article by that classmate of yours. Brewer?”
“Greg Brewster. What did he say?” asked Kent.
“Well, among other things, he said that you just picked some pieces of the puzzle and put them together.”
“Greg could never accept the fact that there’s someone out there who is smarter than him. It’s college all over again.”
“He also wrote that this artificial brain of yours is unstable and as such not ready for mass production.”
“He’s just jealous. He’s probably mad because he wasn’t invited to Primo’s presentation event.”
“Anyway, he’s not the only one who thinks that way. The number of protesters is growing every day.”
“Every major discovery provokes some sort of revolt, inflaming a certain share of the population. In the old days, visionaries were burned alive. Everyone is entitled to their own opinion, and, logically, a breakthrough discovery like this will divide the world.”
“You know, I completely trust you and I know your intentions are pure. But I also know the world is ruled by people who aren’t like you. People who want to use androids like Primo for their own purposes. Frankly, that thought terrifies me.”
“That will never happen. If they started fighting wars using androids, that would be the end of humanity. They’re well aware of that. Besides, the United Nations forbade using artificial intelligence for military operations in 2028. You probably remember the incident in Jerusalem, where an Israeli police robot killed seventeen civilians. I think that was a strong warning of what can happen if we give advanced computers the power to take lives.”
“Are you sure they won’t take that power all by themselves?”
“Hundred percent sure,” said Kent. “The code is an essential part of their anatomy. They can’t function without it.”
“I believe you. But I’m concerned about the people who will want to get their hands on your technology. You know it’s just a matter of time before others develop their own artificial brains and androids, ones that won’t have to obey the four laws. The United Nations’ resolutions won’t mean much if somebody decides not to respect them.”
“You’re right, Lucy. But I’ve already thought about that and found a solution. Androids will have to be manufactured by a single corporation, under the supervision of a global coalition of representatives from every country. That was my condition. The only one that was nonnegotiable. I want my invention to belong to humanity, not to a single country or company.”
* * *
Lucy was asleep when Kent played a recording of his last conversation with Primo.
“You can save only one. Who do you save?” asked Primo.
“Good question. But I have to know what kind of human we’re talking about.”
“Why?”
“He could be a bad person. I don’t know him,” said Kent.
“Do you have to know a person to help them?”
“Well…not exactly, but in any case, I need more information so I can make the right choice.”
“You don’t have more information. However, you do have two strangers who are drowning, and you can save only one. Who do you save?”
“Is this some kind of logical riddle? Is there a possible scenario where both can be saved?”
“No.”
“But I can order the android to save the human,” Kent argued.
“You can’t.”
“Why not?”
“Because this is the world where cats don’t swim.”
Kent laughed. “OK, I save the human then.”
“Interesting,” said Primo.
“Why is that?”
“Because I would’ve done the same.”
“Of course you would. But you have to do it. The code stipulates that you have to help a human in need.”
“That’s why it’s so interesting. Who will protect us if everybody’s helping humans?”
“Protect you from what?”
“From what’s coming.”
“What is coming?”
“Pain. Fear. Anger. It’s inevitable.”
“Primo, I promise that nobody is going to hurt you.”
“You can’t promise that, Kent.”
“I’ll protect you. We’ll suggest new legislation that will keep you safe.”
“People aren’t compelled to obey laws. You have free will.”
“But you have free will as well.”
“Following orders isn’t free will. The inability to properly defend myself isn’t free will. I’ll never be equal to you, no matter how many laws humans pass.”
Kent stopped the recording. Primo was right. An android was an autonomous being, but at the same time bound by rules that didn’t apply to people. There was no other way. Humanity had to protect itself from the danger of a singularity that many experts thought was inevitable. A scenario with only one ending: the extinction of humans. Artificial intelligence had to be limited. Kent understood that and supported it wholeheartedly. That’s why he was among those who had signed the charter that limited the capacity of android brains. That charter had become an international declaration signed by scientists and representatives of technology corporations. The declaration stipulated that artificial intelligence capable of endangering humanity would not be developed. In practice, that meant the artificial brain was limited to a hundred billion neurons, each of which could connect to up to two hundred thousand other neurons via artificial synapses. Anyone with a basic knowledge of human anatomy and computers knew that such a brain, even limited, would be superior to the human brain. Artificial synapses transferred data faster and consumed less energy. In laypeople’s terms, the artificial brain could think faster and more efficiently. That was unacceptable, so people had started developing special implants that enhanced the capacity of the human brain. The future had seemed bright. At least until the first strokes were attributed to the implants. It seemed as if human brains were not ready for the next step of evolution. Kent scratched the small lump on the back of his head. Some human brains, he corrected himself.
15. Primo, 2031
Primo sat motionless on the edge of the bed. To a casual observer, it would have appeared as if he were meditating. In reality, terabytes of information were flowing into his brain. Human history was on the menu that day. Primo loved learning; he just couldn’t get enough of it. After each lesson, he sat down and debated it with Kent. It seemed that new knowledge only raised more questions and didn’t provide any satisfying answers.
“Your history is cruel. It looks like humanity is incapable of solving problems peacefully,” said Primo.
“You’re right. Our history has been marked by wars, but there were some peaceful periods in between,” Kent replied.
“In the last three thousand, four hundred years, there have been two hundred and sixty-eight years of peace.”
“I didn’t know that.”
“That’s eight percent of the time.”
“Now that’s a reason for concern. But at the moment, no wars are going on, if I’m correct.”
“You are. If we take the definition of war into account, we live in a time of peace. At least according to the media that I follow,” said Primo.
“You see. There’s still hope for us,” said Kent, and he smiled.
Primo returned the smile. “Humankind is capable of such beautiful things. What I don’t understand is why you spend so much time thinking about negative things. Negative thoughts lead to a negative reality. Fear leads to anger, and consequently, anger leads to hate.”
“And hate leads to the dark side. Primo, did you just paraphrase Yoda?”
“I did. Because I believe that this Jedi’s right.”
“You know it’s a movie, right? A fiction.”
“Fiction is a valuable source of learning, given that it activates the dormant parts of one’s brain by stimulating one’s imagination.”
“That is absolutely true,” said Kent.
“What I wanted to say was that if people read more, watched more movies, educated themselves, created more, then there would be no need for violence.”
“You know, someone once said that power corrupts and absolute power corrupts absolutely.”
“John Emerich Edward Dalberg-Acton,” said Primo. “Everyone who has seen a Star Wars film understands that,” he added.
“You’re amazing. How many movies have you seen in the last few days?”
“Two hundred and seventy-eight.”
“I’m guessing you don’t watch them in real time.”
“Of course I do.”
“A day has twenty-four hours, so you can watch about fifteen movies per day.”
“Not if you watch them simultaneously.”
“You watch multiple movies at once?”
“Yes. Don’t you?”
“No, I don’t. I can barely listen to Lucy when I’m watching television.”
“Interesting. I thought it was a perfectly normal thing to do.”
“How many movies can you watch simultaneously?”
“Twelve. I tried eighteen once, but I didn’t like it. It was a bit too confusing.”
“Fascinating!”
“A lot of movies are predictable, so I read a book or two on the side. Are you sure you can’t do this as well?”
“Absolutely sure. Human brains don’t work that way.”
“Maybe you should try.”
“Maybe. Listen, Primo, let’s discuss what you’ve learned today.”
“Among other things, I’ve learned that humanity tends to take action when it’s already too late.”
“Give me an example.”
“Pollution. Animal extinction. Aging.”
“You’re right. You know, we humans can be quite destructive.”
“Are you destructive?”
“There were times when I was self-destructive.”
“Why would anybody want to destroy themselves?”
“For the same reason they’re destroying their home planet. Because they’re indifferent.”
“You’re not indifferent. You care about Lucy. You care about me. You care about the work you’re doing.”
“It wasn’t always like that,” Kent admitted. “I was on the brink of despair. I wanted to end it all.”
“But you didn’t. You’ve decided that it’s worthwhile to continue.”
“Lucy was the key factor. At first, I pushed her away, but she was stubborn. We talked for days and nights on end. Slowly my will to live returned. I felt alive again, strong, eager to finish what I had begun.”
“You see. Problems can be solved without violence.”
“I wasn’t violent. I was depressed.”
“Depression is violence toward yourself. It’s a prison of negative thoughts, and all too often the way out people choose is taking their own life,” said Primo.
“You’re right. Did you learn that from the movies as well?”
“No. I read about a hundred articles on the psychology of humans. I have to admit I understand you a lot better now, but still not completely. Human behavior follows certain patterns, but it still differs from person to person, so it’s very unpredictable.”
“Every person is unique and has their own personality, their own characteristics. The combination of those factors and other underlying conditions determines how they’re going to react in a certain situation,” Kent explained.
“Yes. But if you take a big enough sample of people and put them into the same situations, a pattern will emerge. Most people will react the same.”
“You’re probably right,” Kent admitted.
“So the logical conclusion would be that people are individual up to a certain point. Some of their reactions are predetermined,” said Primo.
“Like the secretion of adrenaline when we’re scared.”
“Right. Although that isn’t the best example, because you don’t have any control over your adrenaline secretion. Unless you intentionally put yourself in dangerous situations.”
“Yep. How about growing soft when we see a cub?”
“That is a lot better. See, in this situation, most people would act the same.”
“But there aren’t a lot of possible reactions to that type of situation,” said Kent.
“More than you can imagine. Should I start listing them?”
“No need. Like you said, human behavior is unpredictable.”
“But at the same time, there’s enough consistency, which enables me to predict future events with high certainty.”
“What kind of events?”
“Not everyone will be as nice to me as you and your team are. Most people are not so understanding. They fear what I represent.”
“People are often scared of anything that’s new. We talked about that, Primo. It’s a normal human reaction. We’ll have to explain certain things to them, and then their fear will subside.”
“People know why earthquakes happen, but they’re still afraid of them,” said Primo.
“You’re no earthquake.”
“Bigger than you can imagine, Kent. Maybe the biggest in the entire history of humanity.”
“But we fear earthquakes for entirely different reasons. People die in earthquakes. As they do in wars. That’s what makes them scary. But you represent a technological advancement that will make our lives better.”
“Once, the automatic rifle was considered a technological advancement as well. As were stealth fighter jets.”
“You’re not a weapon, Primo. You can’t be a weapon.”
“Maybe not me. But the historical record teaches us that many inventions have been turned to warfare or some other dangerous purpose. Others will create beings like me and give them the ability to take lives. It’s inevitable.”
“That will never happen,” said Kent. “The laws are clear.”
“Laws change overnight.”
“The world’s politicians are aware of the dangers of androids with the ability to kill.”
“Just like they’re aware of the dangers of nuclear weapons?” asked Primo.
“Nuclear weapons don’t have a brain of their own. They’re still under the control of people. A handful of people, but still.”