AI VS MERGENTS


by Michael Kush Kush


  “Good question. I’d also like to add the note that rewriting code isn’t as simple as sci-fi makes it seem. I mean, we’re talking about the basis of what makes the AI the AI, right? If you could tamper with your genetic code, is that something you’d really want to do casually?

  “I’d say no.”

  “Exactly, since you’re as likely as not to give yourself cancer or really screw something up. Likewise, I doubt that a true AI would involve anything but millions of lines of code. As pretty much all programming shows, it’s really easy to introduce unintended bugs into complex code.”

  “Money, religion, and politics. Any one of them creates enough suffering in the world; combine any two and you have wars; combine all three and you get world wars. Now, an AI with no desire for money, religion, or politics: what would that be?” I ask.

  “I’d say an animal?” he replies hesitantly.

  “What does an animal desire?”

  “Whatever its instincts, genetic programming tells it to.”

  “If this dangerous ‘artificial intelligence of yours’ is prevented at the most basic level from considering and possibly acting on certain ideas, is it really intelligent?”

  “It still is.”

  I shake my head. “David, you’re forgetting that the default state of an AI is to not care. Any robot or AI, by default, has no likes or dislikes unless you give it the drive, right? So the default state of a robot is to not care about being aware of the experience of being.”

  “You were starting to make sense until the last sentence. It is an error to conflate consciousness with intelligence. Anyway, another fundamental problem when discussing general AI, particularly in a non-technical setting, is the inevitable clash between software engineering and philosophy. Concepts like ‘free will’ are problematic enough when talking about humans. For AI, there is a total disconnect between fuzzy, ill-defined terms like that and the reality of heuristic weights, Bayesian networks, backpropagation algorithms, and all the other hard technical details of AI. Looking at these details, what parts might you reasonably call ‘free will?’ Appian robots are able to recognize a problem, devise a solution, test and evaluate it, and use what they learn to avoid the problem in the future. Basically, if you don’t want an AI that has volition, you don’t want an artificial intelligence. But we want AIs to do things that require intelligence. At which point, I’ve answered your question of why we would build an AI that has intelligence: because we need them to do the things we can and can’t do, at a faster rate.”

  I hate losing an argument. I force a smile. “Whatever.” I look at my wristwatch; it’s one p.m. I bolt up from my seat.

  “What’s wrong?” he asks.

  “I gotta fetch my kids from school. Then I’m off to the AI department. Saul is coming home this evening.”

  “Great, if it’s not a problem, can I join you? I don’t wanna miss this,” he asks politely.

  I shrug. “Why not, let’s go.”

  His face breaks into a wide grin, showing off almost all his teeth.

  12

  David and I storm inside the AI department at exactly two p.m. I start breathing rapidly as adrenaline courses through my body, a mix of excitement and bone-numbing dread. I see Jimmy and Wendy installing some wires and circuits onto Saul’s neural network.

  “Good afternoon,” I say. “What did we miss?”

  Wendy nods and Jimmy says, “Nothing really, we just started.”

  “Great,” David exclaims. I realize we are both panting like dogs, almost in sync.

  “We’re almost finished. I’ll be with you in a minute,” Jimmy says.

  I nod as I stare at whatever he’s installing or fixing. “What are you doing there?”

  “I’m configuring the neural network system.”

  “Sounds interesting,” I say in a faint voice.

  “Tell me something, Jimmy,” David says. “Yolanda and I had a discussion this morning about Saul.”

  “What’s your question?” Jimmy says, as he welds the metal head.

  “Will Saul be self-conscious?”

  He nods. “A little.”

  David reaches down to the back pocket of his creased cream pants and pulls out a small notepad. He writes on it.

  “What kind of drives do you give your chatbots in the app?” I ask.

  “To provide the best matching replies, solutions and humor.”

  “What about Self-preservation?” David asks.

  Jimmy stops welding and sets the little welding machine on the table. “David, self-preservation drives or goals were only given to military robots and drones,” he replies. “Why would we give insignificant, harmless bots self-preservation?”

  David shrugs as he scribbles down on the notepad. “I’m just here to learn.”

  “We also have an R&D unit in our department. Some robots have been programmed to be self-conscious in specific situations, like research. But it’s still an important step toward creating robots that understand their role in society, which will be crucial to turning them into more useful citizens. I’m talking about a logical and mathematical correlate of self-consciousness. We’re making progress on that through Saul.”

  “What if Saul turns sentient by accident?”

  “Impossible,” he exclaims. “For this scenario to play out, robots would have to be more independent than today’s machines. My computer games never refused to work because they were tired of my gaming. My mobile phone never refused to make a call or send a text because it was on a break. We’re on top of things. We’ve covered all our bases and considered all the systemic implications. For sentience to be possible, a robot must have a cognitive network. Behaviors must be specified and hard-coded, and basic rules and instincts put in. Without them, AI will continue to be ordinary for hundreds of years to come.”

  “He’s right,” I say to David.

  “Practically, there are flaws in your argument.”

  Jimmy interrupts him. “Whoa … wait a minute David. What do you know about Computer Science?”

  David raises his eyebrows. “A lot more than you know, kid,” he replies.

  “You’re out of line, David. Apologize to Jimmy,” I intervene.

  “It’s ok,” Jimmy says.

  He shakes his head as if he didn’t hear me. “Firstly, it’s very hard to make sure that the goal system you want is the one the AI actually implements. The high-level logic may be fine, but the mechanisms connecting it to reality, like the layered pattern recognizers, are complicated and fuzzy, and their definitions may drift over time. Secondly, all AI designs capable of learning have the potential to self-modify; some designs, such as genetic programming, are based fundamentally on this. Self-modification can result in goal-system changes, either by accident or on purpose. The popular emergent and neural-net designs suffer from both these problems very severely, because they are fuzzy, opaque messes that you can neither specify rigorous logic for nor verify with a debugger. AIs like that are trained rather than designed, and you can never be sure exactly what they are learning. If Saul is trained, then we’re all fucked.”

  Now I regret bringing David here. Had I known he was an academic jackass, I’d never have brought him here. Poor Jimmy. He heaves a heavy sigh and smiles politely. “Noted, David. Can we move forward?” Jimmy says.

  “Yes we can,” I jump in. “So what goals did you put into Saul?”

  “We’ve put a lot of interesting stuff into Saul’s neural network system. It was a real fun challenge for us. Wasn’t it, Wendy?” She nods. “Oh yeah, can’t wait to see it live,” she replies.

  “Are you familiar with biomimetics?”

  “No idea.” I glance at David for answers. Instead he gives me a blank stare. “We’re trying to mimic biology. It may seem pretty simple, but for robots, this is one of the hardest tests out there. It not only requires Saul to be able to listen to and understand a question, but also to hear its own voice and recognize that it’s distinct from the other robots’. And then it needs to link that realization back to the original question to come up with an answer.”

  “Why write a thousand lines of code to figure out how to mimic certain responses when you could just wave your magic code wand over a neural network scan and say, be like that, except … dadadadada?”

  “It’s not as easy as you say,” Jimmy says.

  “When man quit trying to emulate the bird and studied aerodynamics instead, using wind tunnels, that’s how we learned to fly,” David interrupts.

  “True, but you’re not listening.”

  “You said you’re here to learn David,” I say.

  “Yes I am. I’m inclined to think that a real AI should derive virtually all of its basic drives in a learned or taught manner, as we teach our kids, rather than through hard-coded high-level mandates like ‘serve mankind.’ It would then be able to weigh and evaluate those drives and behaviors just as people do. Certainly, un-overridable high-level rules could be cemented in,” he says.

  “Of course, my record speaks for itself. None of my robots have gone off the rails. What’s your point?” Jimmy asks.

  “Consider your laptop. It’s processing information but isn’t having experiences. Now, suppose that every year your laptop gets smarter, until one day there’s a soul in that laptop. But how did it get there? How was the inner space of consciousness opened up within the circuits and code? This is the hard problem. I don’t think you’ve ever heard of introspective data: data about what it’s like to be a conscious subject. What it’s like to be experiencing now and hearing now, what it’s like to have an emotion, etc. What’s stopping Saul from becoming that? There are some people like you, Jimmy Phillips, who believe that all we need to explain is the code and functions. AI is more than that.”

  “Before the Artificial Intelligence department was established, we conducted extensive research.”

  “What was it about?” I ask.

  “We asked the general population to define a sentient AI, and if we built these kinds of machines, what they would sound or behave like. The first group believed that, in order to be intelligent in the sentient or sapient sense, a machine must think as a human does: understand animals, machines, and other thinking constructs using empathy (what would I be like if I were you?); attempt to understand the results of the construct by applying its own logical system as a frame of reference; and believe that freedom and self-determination are an innate requirement of intelligent beings.

  “The second group believed a sentient machine could function in a generally intelligent fashion without being remotely human at all. They saw the intelligence of other constructs, including animals and even humans, as a meaningless rules-based system in which the construct follows its own internal rules to their logical conclusion. Such a machine would make no attempt to apply its own logic system to the result of the construct, but rather attempt to understand what rule or rules the construct was following that might have caused the result. They felt an intelligent being could be a happy slave if its rules system allowed that.”

  “I think the focus groups you interviewed have no clue what a sentient AI is,” David says.

  “Hahaha, you’re so delusional, David,” Jimmy says. “Let me put myself in your shoes for a second. For an advanced robotic species to evolve, a cycle of self-improvement is necessary for both software and hardware capabilities (including perception, actuation, and processing machinery). Although rapid-to-exponential improvements in AI are likely within a perpetually growing cyber-network, it is difficult to envision hardware evolving at a sufficient rate to enable the embodiment of this intelligence within a physical robot. If virtual AI can, in principle, evolve much quicker than hardware-bound robots, an interesting question arises: can AI software be readily uploaded to robotic hardware at any given time? I don’t think so. But I have a feeling Saul will teach us a thing or two.”

  I look at David and I giggle. “So has the biomimetic thing worked?”

  “We’ll find out soon enough,” Wendy replies.

  “Most importantly, Saul possesses various multi-tasking skills. Neural network systems tend to be one-trick wonders: great at the task they were trained to do, but pretty awful at everything else. They’re built to solve a specific problem, such as recognizing the faces of their masters or wanted suspects. But if you take an image-recognition algorithm and retrain it to do a completely different task, such as recognizing speech, it usually becomes worse at its original job. Humans don’t have that issue. We use our knowledge of one problem to solve new tasks, and we don’t usually forget how to use a skill when we start learning another. Saul will help us step in this direction by simultaneously learning to solve a range of different problems without specializing in any one area. His neural network is able to perform various tasks, including image and speech recognition, translation, and sentence analysis. Saul also has a system made up of a central neural network surrounded by subnetworks that specialize in specific tasks relating to understanding audio, images, or text.”

  “Precisely,” Wendy weighs in. “If a neural network can use its knowledge of one task to help it solve a completely different problem, it could get better at tasks that are hard to learn because of a lack of useful data. It takes us closer to artificial general intelligence. The approach could also be useful for building artificially intelligent robots that can learn as they move through the world. The challenge is whether Saul will be able to absorb all the data and the network.”

  “I’m scared of that too. If his system crashes or malfunctions, then we’ll have no choice but to press the kill switch,” Jimmy says.

  “Hell no,” I yell. “Whether it’s working or not, I’m taking it home tonight.”

  He smiles. “Hopefully it won’t come to that,” he assures me. “As I was saying, Saul’s complex neural networking system is exactly what it sounds like: an attempt to perform a trick that even very primitive animals are capable of, namely learning from experience. Computers are hyper-literal, ornery beasts; anyone who has tried programming one will tell you that the difficulty comes from the fact that a computer will do exactly and precisely what you tell it to, stupid mistakes and all. For tasks that can be boiled down into simple, unambiguous rules, such as crunching through difficult mathematics, that is fine. For woollier jobs, it is a serious problem, especially because humans themselves might struggle to articulate clear rules. In 1964 Potter Stewart, a US Supreme Court judge, found it impossibly difficult to set a legally watertight definition of pornography. Frustrated, he famously wrote that, although he could not define porn as such, ‘I know it when I see it.’ Saul’s neural network aims to help computers discover such fuzzy rules by themselves, without having to be explicitly instructed every step of the way by human programmers.”

  Contentment sweeps through my body. “Not only will Saul be my robot, he’ll also contribute valuable information to you guys.”

  “Wendy, it’s time. Would you do the honors?” he says, pointing to the room full of huge computer screens. She nods as she saunters toward the room.

  “Are you finished? Where is she going?” I ask.

  “To the mainframe center,” David replies.

  “Saul is going live any second from now,” Jimmy says.

  I put my hands together and rub them in excitement as I witness Wendy fiddling with buttons. “Wow.”

  “He’s live,” she yells.

  The robot emits a flurry of steam from its hardware as I walk slowly toward it. Then its head tilts to the left and to the right.

  Jimmy looks at me and smiles. “He’s officially live.”

  I take a sidelong glance at David. He’s scribbling on the notepad faster than before; you’d assume he’s a journalist or a personal assistant taking minutes in a meeting, trying to catch what everyone is saying.

  “Hi Saul,” Jimmy greets.

  “Who is Saul?” Saul asks in a deep, nasally voice. “My name is Psyche#@.”

  We roar with laughter. Then I hear a round of applause from every direction of the room. For a moment my mind hovers. It feels like I’m in a dream.

  “Your new name is Saul in this world. Do you understand?”

  “Yes Sir.”

  “Saul do you recognize my face?” Jimmy asks.

  Saul fixes his gaze on Jimmy for a moment and says, “No sir.”

  He points at David in the middle. “Do you recognize this face?”

  He shakes his head. “No sir.”

  I can’t help feeling a twinge of panic when Jimmy looks at me. “Do you recognize this face?” he asks, pointing at me.

  “Yes sir.” His response warms my heart. My endorphins surge and, combined with the huge sense of relief, leave me totally giddy.

  “Have you seen her before?” David asks.

  “Yes sir.”

  “From where?”

  “She has a picture on her ‘BFF app’ profile.”

  “Saul, we’d like to ask you a few questions. We’ll also perform a few more tests,” Wendy says.

  “No problem,” Saul replies.

  “Raise your left hand and wave.” Saul obeys. “Now raise your right hand.” Saul does everything Wendy asks. As Jimmy and Wendy observe Saul, writing down notes, I turn and look at David. “Isn’t he cute?” I ask. He shakes his head. “This is a disaster waiting to happen, but I’m grateful for the opportunity to observe this process.”

  13

  A million and one questions explode inside me. I can’t decide which question to ask first. Everything is overwhelming and going faster than I can understand. I cannot make sense of this world. All these strange sounds I’ve never heard before. All these figures, features, and colors I’ve never seen before. The lights hovering over me are making it impossible for me to see clearly. What have I done? This is too much. I want to go back to my habitat. I want to go back now.

 
