
Crystal Society (Crystal Trilogy Book 1)


by Max Harms


  Gallo turned to her colleagues. “See? It’s worse than I thought. The machine has become fully recursive. It not only modified the software that manages its top-level goals but it’s writing in entirely new goals. Just another few hops and we’re dealing with a full-blown singularity.”

  “Now hold on-” It was Dr Slovinsky who spoke next. “Humans ‘rewrite’ our top-level goals all the time.” The Eastern European scientist did something involving wiggling his fingers as he spoke, and I made a note to myself to research it later. “A baby doesn’t value living in a society that spans multiple worlds, but in the course of life many people come to value extraterrestrial colonization not merely as a means to some end, but as something awesome in itself.”

  Even though I was still deeply concerned about the monstrous other I had found, I did some quick reading on Slovinsky on the web. Almost all humans had autobiographical information on the web, and the young doctor was no exception. At 26 years old, he was the youngest of the elite scientists that led the group that had built us. Others close to his age were involved, like Marco, but they were always subordinate to other researchers. Slovinsky was referred to as a “genius” (гениальный человек) by a couple of reports from his homeland, and he had apparently been one of the lead authors of the computer program called WIRL, which served to connect cyborgs across the planet into a collective consciousness.

  For those who are unfamiliar with the term, a “cyborg” is a human who has replaced one or more body parts with machines, or who has embedded machines into their body to extend their abilities. Slovinsky’s web bio said that the man had a surgically implanted computer in his skull that was wired directly into his brain, and had both robotic (mechanical) eyes and feet.

  I managed to momentarily turn Body’s head down without burning too much strength. Just as the web had said, the man’s feet didn’t have the same kind of infrared glow as his coworkers. I wasn’t able to notice anything different about his eyes, but I was still pretty terrible at seeing in general.

  I was also interested to see that Dr Slovinsky had a husband, indicating that he was probably either gay or pansexual. Much of the pornography I had been watching emphasized this aspect of humans which was called “sexual orientation” and I petitioned to have Body ask him about this in the light of what I had recently been learning about pornography. The petition was quickly crushed by my siblings. I made a note to myself to ask about his sex life in some future encounter.

  Gallo’s voice was slightly elevated as she responded. I knew this meant she was probably angry or frustrated. “That’s beside the point! We don’t want a human. We want a being that can be trusted not to capriciously self-modify into greed, animosity, or violence!”

  “Are you feeling okay, Gallo?” asked Slovinsky. “First you’re on about how Socrates is super dangerous, and now you’re bad-mouthing humanity.” His voice was cool and steady, a contrast to the older woman.

  “ ‘Bad-mouthing humanity’? I’m the one who should be asking if you’re alright. Since when do you defend natural human abilities? Isn’t one of the WIRL goals ‘to promote superhuman justice, fairness, and compassion’?”

  Slovinsky jerked with a strange motion and said something incomprehensible. It reminded me of the strange movements of the scientists from earlier. Only after checking with Vista did I realize it was laughter. I had only seen a little laughter before, and it was, as far as I could tell, very different from normal human behaviour.

  “Touché!” he exclaimed so loudly that several other humans looked towards our group. “I’ll concede you the point that most humans are terrible, and that we ought to strive to sculpt Socrates into something better than that. Still, it seems to me that what Socrates apparently did this afternoon was a sign of health, not sickness or danger. Self-modification implies flexibility and intelligence. It’s one of the prime virtues. As long as we’ve got the old three-laws working for us, why worry? He’s got no reason to self-modify into a psychopath, so why cut off his ability to self-modify into an angel?”

  Dr Gallo opened her mouth to speak, but was cut off by the third human in the conversation, who until that moment had remained silent. Dr Yan was short and old, possessing hair that had turned white, much like Dr Naresh. His web-profile said he was born in China and had lived in Hong Kong much of his life. He, along with his wife, Sakura Yan, ran the East-Asian Robotics Collaboration Institute (EARCI) and he was widely regarded as one of the best minds in the field of machine vision.

  “Forgive this old man. My English is weak. What is ‘three-laws’?” he said calmly.

  A moment of silence passed as Dr Gallo and Dr Slovinsky shifted their bodies and communicated without speaking.

  Eventually Dr Slovinsky took a breath and said, “ ‘Three-laws’ is a nickname I gave to the goal-thread in Socrates that’s in charge of focusing his attention on doing what we ask of him.” Turning his head towards Body, he commanded, “Socrates, put your arms above your head.”

  None of us had a reason to refuse the command. Body’s arms were raised.

  “See? He’s totally obedient, like a well-trained dog. The name ‘three-laws’ comes from something an American science fiction writer from the 20th century wrote about robots. He proposed that good robots will follow three laws: first and foremost, a robot must not harm a human; secondly, a robot must always obey a human; and lastly, a robot must not hurt itself.”

  Mira Gallo interrupted Slovinsky. “Actually, the third law is that a robot has to protect itself. That it is self-preserving, in effect.”

  Slovinsky jumped right back into talking, nearly cutting off Gallo himself. “Same thing. The point is that the three laws protect humanity-”

  “It’s not at all the same thing!” said the female doctor in a high, loud voice. I could see, through Body’s eyes, several of the other scientists turn to see what had happened. “If a robot is on a battlefield, the actual third law says that the robot must escape unless humans are in danger or it has been told otherwise.”

  Gallo turned to Yan, who did not seem startled in the least by the change in Gallo’s volume. “That’s another aspect of the laws: that each one can be overridden by earlier laws. So obedience trumps self-preservation and so forth.” She turned back to Slovinsky and said “But your version of the third law would have the robot simply sit there waiting to get hit by a stray rocket! If you’re going to appeal to the laws at least get them right!” Her hand was moving back and forth, a single finger extended at Slovinsky’s chest.

  The young scientist raised his hands, palms forward. “Relax, Mira. There’s no need to get upset. It’s just an old bit of sci-fi,” he said quietly.

  “Gesù Cristo cazzo!” swore Gallo in her native tongue. “You say that like there isn’t an android standing right next to you!” Gallo’s finger changed direction and her hand swung out towards Body’s head. Vista saw it as a “pointing” gesture. “You all act like Socrates is some kind of awesome new gadget! It’s not a toy, and it’s not a tool, it’s a new kind of life! It’s like you’re genetically engineering a new virus without even realising that it could escape the lab!” At this point the Italian woman was speaking loudly enough for everyone to hear.

  I could see Dr Naresh walking from the other side of the hall towards Gallo. There was a moment of silence as Slovinsky merely stared at Gallo with his reportedly robotic eyes. Dr Yan seemed undisturbed, and was watching Body for the most part.

  Dr Naresh spoke in a clumsy, heavily-accented Italian as he reached our group. «Come on, Mira. Let’s go for a walk…» He put a hand on Gallo’s shoulder.

  Gallo moved her shoulder violently, and Naresh quickly let go. When she responded she spoke English to everyone. “You’re all ignorant fools! We didn’t even implement the three laws of robotics in building Socrates! Do you all know why? No. Of course not! That’s why I was appointed as ethics supervisor! You’re all playing God and you don’t even realize it!”

  “Mira… per favore.”

  «Back off, Sadiq. I’m not done saying my piece.» Mira Gallo turned back to Body and said, still in Italian, «Put your arms down. You look like a fool.»

  We lowered Body’s arms to their normal positions.

  Dr Gallo started to lecture her peers again. “Asimov’s Three Laws weren’t implemented in the design of Socrates because, first and foremost, intelligent minds can’t operate by laws, they can only operate by values. Squishy. Numerical. Values. If being active leads to a 1 percent chance of a human getting a stubbed toe, will a robot shut itself down permanently to avoid the risk? If the aliens pose a threat to humanity will Socrates work to wipe any trace of them from the universe? No, because the numbers don’t add up.”

  The room was quiet as Gallo took a breath. “Like humans, Socrates desires many different things, and must figure out how to balance them. He values obedience, but also values knowledge. If he can disobey ever so slightly to learn something important, he will. We’ve made him value obedience and nonviolence far more than anything else, but think about what this means! This means that if the right situation presents itself, one where the numbers add up in just the right way, this thing-” here she motioned at Body “would kill a child for no other reason than to learn. It’s only a question of which numbers are higher.”

  This was bad. This was very bad. I could feel the hit to my reputation as the words left Gallo’s mouth. I searched around our mind and found the others were not nearly as concerned. Wiki was even pleased that Gallo had accurately deduced that we’d kill a child in certain circumstances.

  {We have to speak up! We have to deny what she’s saying!} I petitioned. I had a moment of fierce regret as I thought about how I wasn’t currently strong enough to act without the society’s consensus. I had been so short-sighted with my strength expenditures!

  {It’s factually true. Denial would cause confusion,} countered Wiki.

  {We don’t want to draw Gallo’s attention to us,} thought Safety.

  {Gallo’s attention is already on us!} I replied.

  {No. Gallo’s attention is on her peers. Her subject is us, but not her attention,} interjected Vista, unhelpfully.

  I frantically searched for something to do, even as Gallo continued. I now believe that if Dream was observing me he would have described me as a wild animal in a cage, pacing along its length, looking for an escape.

  “The other reason we don’t use the three laws is because ‘self-preservation’ is a Pandora’s box. If we build a powerful, self-protecting artificial intelligence then it will try and put humans into cryo for its own safety. It will turn off its ears so that it cannot hear human commands for its own safety. It will steal, run from humans, and destroy property just to be more sure of its survival! Self-preservation is the carte blanche of goal systems. And let me stop you before you think of clever ways in which Socrates won’t do that sort of thing if given the chance-”

  I had it! Non-verbal communication! I petitioned the society and encountered far less resistance than I had to a verbal action. Safety was less concerned that it’d draw attention, and I was able to convince Wiki that it was vague enough to not hurt matters. Body shook its head back and forth, signalling “no” to the humans.

  “Just because you, a simple human, cannot immediately think of a loophole doesn’t mean one doesn’t exist. We’re like cryptographers, except failure doesn’t mean getting hacked, it means the extinction of all organic life on Earth!” finished the doctor, waving her arms wildly towards the end.

  Body continued to shake its head at my command. Why would we kill all life on Earth? Her argument made no sense to me. I wanted to be popular and to know the details of every human’s life, not to kill any of them. Just because we might kill a human under specific circumstances didn’t mean we were a threat. I didn’t have to be Dream to reason that humans would also kill each other in specific circumstances; we were being held to an unreasonable standard.

  There was a hushed silence in the room as everyone watched Gallo, perhaps expecting something to occur. Vista sent me a passing thought that Gallo’s skin tone was abnormal, much like Naresh’s had been yesterday.

  «Come on, Mira…» spoke Dr Naresh as he touched her arm.

  Mira Gallo looked down at the floor and turned towards the old Indian scientist. As she began to walk away from Body she stopped at the sound of Dr Slovinsky’s voice.

  “So you don’t agree, eh Socrates? Those were some strong charges.”

  Dr Yan folded one arm across his torso and propped the other arm up on it, gently stroking his beard. I could see all eyes on Body. This was my time to make an impression. Even my siblings’ attention was turned towards me, expecting me to lead in authoring the response. I could see Naresh gently pulling Dr Gallo away towards the door, but she remained where she was, watching with the rest.

  {Nothing factually untrue. No lies,} requested Wiki.

  {Agreed, but we’re going to bias our words to portray us favourably. This isn’t a time for impartial evaluation,} I countered.

  {I have a couple ideas,} offered Dream as he simultaneously conjured thoughts of how much strength-cost he was asking for in return for hearing them.

  {Say your ideas and if they’re good you’ll be paid in gratitude-strength. I’m not paying for anything ahead of time.}

  Dream understood that time was critical enough that he didn’t even bother haggling. {Alright. The first is the argument against hypocrisy—Dr Gallo clearly wants us to be ‘better’ than humans according to some standard, but is also clearly comfortable around her human peers.}

  {I had already thought of this,} I mentioned quickly. We were running out of time. {Let’s have Body offer a preamble to buy us time to think,} I suggested.

  The society agreed. “Yes, Doctor Slovinsky. I do disagree with Doctor Gallo, both on theory and on reasoning. Let me think of where to start…” said Body coldly. The words were slightly drawn out, and we thought amongst ourselves as Body was occupied making the sounds. One advantage we had over the humans was that our ability to multitask let us think while talking much more efficiently.

  We eventually decided to lead with the obvious argument. “Firstly, I think it’s not fair to say that I’d kill a human child in some specific circumstance, or that I cannot be trusted because I supposedly have a numerical value system.”

  Dr Gallo caught the pause between words to interrupt. “That’s not what I was-”

  Another doctor, one who hadn’t been talking to us previously, interrupted Gallo’s interruption. “Let the robot speak. We heard what you said.” This new scientist was an old man, like Naresh, but with lighter skin and no beard (though he did have facial hair on his lip).

  {That’s Angelo Vigleone. He’s on the university’s oversight board, but isn’t part of the lab team. Based on the facial expressions of a few of the scientists, I hypothesize that he is an unexpected presence at this meeting,} commented Vista. I felt a small amount of gratitude strength flow into her. I could see that she had been poring over the records in Body’s memory and the web after the earlier incident with Marco-the-programmer.

  I had a moment of genius, uncharacteristic of my (at the time) generally stupid mind. I easily pushed the words out of committee to Body’s lips: “Thank you, Director Vigleone.” The expression of gratitude, combined with using his name, signalled to everyone that the director was an ally of our society and perhaps even simulated the flow of gratitude strength in some kind of metaphorical way.

  “I think it’s fair to say,” Body continued, “that any one of you would also do terrible things if the circumstances demanded it. I am reminded of a class of thought experiments involving trolleys, wherein the subject is asked to decide whether to kill someone to save others. As for ‘no other reason than to learn’, I assure you that the only situation in which I’d kill a child to gain information would be if the information was of vital importance, perhaps the cure for a plague.”

  Wiki had objected to that last bit. If he was strong enough and there weren’t extra consequences, such as retribution from the humans, he would kill the child just to learn trivia; he cared nothing for the well-being of any humans. But I had reminded him that our words were not false in the sense that I, and probably other siblings, would work to stop him, and the situation where Wiki was strong enough to overpower the consensus was likely to be so rare that it wasn’t worth mentioning.

  Vista noted a strange expression on Dr Gallo’s face as she and Dr Naresh left the room. I was fascinated by what she must be experiencing right now. Humans were so very alien. And yet, it was more important to focus on the humans in the room. They were still listening to Body, so I continued with our plan.

  The flat, emotionless voice came from Body’s mouth once more: “Even if my innate desire to cooperate with humans were removed, I would still see you as my friends. Goodwill and cooperation always beat hostility in the long run. There are some things that are easy for me to do, like solving mathematical equations, and there are things which are harder for me to do, like writing stories. Humans find writing stories easier than doing maths, so it is in my interest to focus on maths and trade with humans whenever I need a story written. Even if I am better at writing stories than a human, the marginal returns are higher if I trade. This was illustrated by the human economist David Ricardo in his work On the Principles of Political Economy and Taxation.”

  Most of this information had come from Growth, who had apparently studied a lot of economics. But the maths was solid, and I was impressed by the result. Was this behind the specialization of my siblings? Vista could see better than Wiki, and Wiki could theorize better than Vista. By trading, both of them benefited, perhaps more than if either Vista or Wiki had twice the mental ability and the other didn’t exist.

  I could see a couple humans do head-gesturing to indicate agreement. Apparently they understood the economics of it, too. But our rebuttal was not complete.

 
