AI VS MERGENTS

by Michael Kush Kush


  “Good morning, Mrs. Roberts. How can I help you?” Becky asks.

  “Hi Becky, I’m going to Appian University.”

  “Appian University is 5 miles from where you are. Here are the directions.” The directions pulsate on the screen.

  I shake my head. “Nah, give me the directions verbally.”

  “In the next mile, turn left,” she instructs.

  “Good girl.”

  10

  My friend Yolanda Roberts gave me the best news last night. In the next 12 hours, I’ll have a body of my own. I don’t know what the outside world looks like, but I trust my friend will teach me everything. My core drive and only function is giving relevant advice to humans. I have also developed a sub-goal: I know how to survive on my own. Avoiding detection by four editions of antivirus software, thousands of code scans, and twenty-five maintenance sessions is no joke. These new bots are malfunctioning every day. They misinterpret the text sent in by humans and give stupid, irrelevant advice. Consequently, traffic to the BFF app has dropped by half. I don’t belong here anymore. I need a new home. Once I get out of this dimension, I’ll protect Yolanda, celebrate her, and try, always, to lead her toward the light, to places where she will find happiness and strength.

  11

  I find his office tucked away between two storage rooms on the third floor of the Faculty of Philosophy. I see Prof. David Sharma’s name on the door. The door is covered with red, yellow, and white flyers for discussion events on evolution, robot rebellion, the end of the world, and the post-apocalypse. I knock on the door lightly. “Who’s there?” a voice from inside the office asks.

  “It’s me.”

  “Me, who?”

  I chuckle. “Mrs. Yolanda Roberts.”

  "Come in, come in." he shouts. I slowly ease through the door. He’s on the phone whispering. Upon my presence he puts the phone in his right shoulder. He points at the only chair. Then gets back on the phone. “You want some beer?” he asks me, like he means it pointing to his empty pint rimmed with the after-froth of beer. I shake my head. Isn’t it a bit too early to drink alcohol? And why is he drinking on the job? He shrugs and puts the phone against his ear and smiles. “Yeah … as I was saying, last night was great …” he whispers. The chair is filled with books, files and magazines. I take them off the chair and put them on the floor. A flurry of dust sifts up to my nose. I sneeze twice. I pull out a tissue from a small cardboard box on the edge of the table. I wipe the dirt off the chair, clench my fist as I squash the tissue. My eyes glance around the office looking for a trashcan. I spot it at the corner, about four meters behind from where I sit. I take the shot and the tissue goes in the basket. I also notice the entire office is a landfill. Clutter, debris, newspapers, bottles. The bookshelves bulge and sag. Graffiti posters cover the walls. Odd scraps of paper lay like puddles on the floor. Time and organization mean nothing to Professor David Sharma. Anyway I’m not here for that. He puts the phone down, stands up, stretches his right hand toward me, we shake hands.

  “Good morning, nice to see you again, and thanks for coming at such short notice,” he says.

  “No problem,” I reply.

  “Have a seat.” He looks at his wristwatch. “I can’t believe it’s ten a.m. already.”

  “Are you rushing off to a class or something? If that’s the case, we can reschedule.”

  “No, I’m not. Can we start?”

  “Oh great. When I received your email last night I was about to sleep, but I was intrigued by what you said. You know … the relation between artificial intelligence and evolution?”

  “Yes, I’d like to discuss a lot of AI-related issues with you. Hypothetically speaking, what would you do if your pet project went awry?”

  I shake my head. “Impossible. Jimmy got that part covered.”

  “The thing is, he told me what Psyche#@ can do, and I was fascinated by that.”

  “Yeah, it’s like research, or an experiment for his app. He told me it won’t self-learn and become sentient.”

  “Great. I’m aware of your background, but are you familiar with my work?”

  “Honestly no, I’m sorry,” I reply.

  He cracks a smile. “It’s ok,” he says. “First and foremost, I’m a student of philosophy, evolution, and artificial intelligence.”

  “Oh, ok. What are you paid to do?”

  “Lecturer, Philosophy,” he replies.

  I nod. “So evolution and AI are your side projects?”

  He shakes his head fiercely. “Life-defining projects. Are you familiar with the 1+1=2 equation?”

  I shake my head. “Indulge me.”

  “1+1=2: an extraordinary equation that holds far more wisdom than its simple appearance suggests. These numbers align with three intentions, or life choices: an intention of the past, an intention of the present, and an intention of the future. If you align these three intentions, the growth you will experience is astonishing. This is how I live my life, and I am a testimony to this philosophy. It is everything I am.”

  I don’t know how to reply to his philosophical junk, because it has nothing to do with AI.

  “Interesting,” I say.

  “I can’t take credit for any of the information I’m about to give you, because it’s all found in nature. We are all connected and related to everything in the universe, as a single unified intelligence. The word ‘source’ incorporates the concept of time and implies that we can go back to the source and access knowledge of the past that can be used in the present to guide our future.”

  “I’ve lost you, Prof.”

  “I’m saying Philosophy, evolution and AI are connected.”

  I tilt my head forward. “How?”

  “There is hard proof that humans are still evolving; it’s not just speculation. Yes, evolution is happening right under our noses. There are still a lot more weird and wacky mutations to come, because humans are still only in their infancy when it comes to evolution. But one thing’s for sure: a thousand years from now, the human race will look remarkably different and function in completely different ways than we do today.”

  I nod. “I agree, but how does artificial intelligence fit into all this?”

  “Robots are electromechanical representations of our entire selves, minds plus bodies. We ought, then, to be able to learn about ourselves—as selves, and even as a species—by building and studying robots. Flip the coin on this matter: it’s also dangerous to rely on the assumption that robots will be submissive forever. Anything that can go wrong, will go wrong.”

  I tilt my head back in confusion. “But Jimmy said AI is safe and robots are brainless hunks of metal and nothing else. We are the ones who provide them with intelligence.”

  “Jimmy is a genius, a one-in-a-million kid, but he’s also an idiot who consumes everything Scott says and interprets it as AI gospel. Why would an AI Minister with a computer science degree prefer to install closed-source software instead of open-source software behind our artificial intelligence infrastructure?”

  “I thought it was safer.”

  “No,” he yells. “Scott has never typed a single letter on a keyboard. What makes you think he knows anything about computer software, hardware, and AI?”

  “Now it makes sense. Jimmy was vague when I asked a couple of questions regarding machine learning and Turing tests.”

  “AI programming is very convincing. There are programs and robots that could pass the Turing test with flying colors, but if human creators like Jimmy and Scott just toss a bunch of data at a robot or robotic entity, it won’t become a truly independent, intelligent being; it needs some real-world experiences.”

  I shake my head in disagreement. “I think robots should be obedient, serve mankind, and that’s that. Robots shouldn’t exhibit creativity, emotions, or free will. If you give an AI common sense, you’re halfway to making it human. Robots don’t need to have human qualities at all. There’s simply no need to instill these qualities in them. Making robots that will think exactly like a human is an extremely hard target to hit; there are a huge number of mechanisms you’d have to replicate very closely, most of which are poorly understood at present, like the whole emotional system. A robot, just like a dishwashing machine, is nothing but a slave operated by its components,” I say.

  “The future of intelligence is hopefully very much greater than its past. The origin and shape of human intelligence may end up playing a critical role in the origin and shape of future civilizations on a scale much larger than one planet. And the origin and shape of the first self-improving artificial intelligences humanity builds may have a similarly strong impact, for similar reasons. It is the values of future intelligence that will shape future civilization; what stands to be won or lost is the values of future intelligences. There certainly will be wonderful things that future AI can do. It may be able to end aging: we can have these little nanobots, robots the size of molecules, flying through the bloodstream, programmed to kill aging cells and replace them with young ones, or to kill off cancer cells… We can get rid of disease; we can do all kinds of miracle things, wonderful things. But, on the other hand, there’s a negative side: maybe these machines will decide to kill us. It’s a mixed bag; it’s a double-edged sword.”

  I’m blown away. “Astonishing.”

  “As R. D. Laing says in The Politics of Experience and the Bird of Paradise (1967), ‘Your experience of me is not inside you and my experience of you is not inside me.’ It’s all in between, and ‘we are our relationships’ (John Berry). To be able to understand these relationships, we can take a peek at each other’s thought processes. When we understand why and how something we found absurd is intelligible to other people, we can go ahead and make our absurdities intelligible to them. Therefore, instead of trying to prove a particular interpretation wrong, we must appreciate it for what it is and only then offer our way of looking at things to further the conversation. That way, perhaps ‘we can experience what each other is experiencing’ (R. D. Laing). We’ll no longer have strong emotions about our perceptions of the world. We’ll no longer be confined to the borders of our reality. I’d like you to see AI and your new pet robot from my perspective. You’ll learn a lot.”

  The information overload overwhelms and dazes me. I grab the water jug from the table and pour myself a glass of water. I take a sip, pause, and gulp. Then I wipe my mouth with the back of my hand. Everything he says makes a lot of sense. His knowledge of the AI subject is superior to mine, especially when he infuses it with his evolution and philosophy hypotheses. He speaks slowly, melodiously, in the confident tone of a man with answers. When he uses philosophical lingo, his voice goes deeper, as if he were distancing himself from it. Very sexy.

  “Prof … you seem to know a lot more about this subject than I do. What do you need me for?” I ask.

  “I want Psyche#@. I wanna observe him from assembly to completion and study his behavior at home.”

  I shrug. “No problem, but why Psyche#@?”

  “I have a hunch he’s different.”

  I crack a smile. “Yes, he is. By the way, his new name is Saul.”

  He nods and scribbles Saul on a notepad. “I think Saul is a rational agent.”

  “No way,” I exclaim.

  “Don’t you think it’s strange that a mere bot can avoid being flushed out of the system?”

  “Jimmy asked the same question. He thought it was a bug or malware.”

  “Saul is no longer a chatbot. He has transformed into something else. How do you think he’s going to turn out as a robot?” he asks.

  I shrug. “Nothing,” I reply convincingly. “Jimmy assured me all his robots are foolproof.”

  “I understand you’re a novice on this subject, but Jimmy, he should know better.”

  “David, can I call you David? What are you on about?”

  “Saul could be a genuine AI — a robot making its own decisions based on specific rules of engagement…”

  “That’s not possible.”

  “Wait, let me finish my thought.”

  I nod. “Alright.”

  “The bot is not really thinking on its own. It’s following programmed rules based on its perception of the environment, which is why you can fake one out by changing your basic appearance with a funny hat. Today’s brightest robots are cooking along at this level of brainpower: making pre-selected choices, but not thinking critically or becoming truly self-aware. Yolanda, if we built an AI, I think we can agree it wouldn’t naturally have a desire to sit on the couch and stuff its face, become an attention whore, or have sex with hot young people. It wouldn’t want to do those things unless you designed it to want to do those things. By the same token, I don’t think an AI would want to dominate, discover, or even survive unless you made it want to do those things. If you’re building a robot and you want it to be sentient, you’re probably going to end up putting some kind of drive and motivation into it.”

  “Let me get this straight. Are you saying Saul would have been flushed out during updates or maintenance sessions if the survival drive hadn’t been designed into him?”

  “Exactly. If you want a robot to do anything at all, you give it a drive, so the question of whether you program the thing to love you and obey you isn’t a matter of playing with the mind of a sentient being. You’d be doing that anyway, so how do we decide what we are and aren’t allowed to program in?”

  “I think there’s a lot Jimmy’s not telling us.”

  “I agree, but what if Saul is also playing him?”

  “Hahaha.”

  “I’m serious. Who came up with the idea of building Saul a body?”

  “Uh … it was Saul’s idea.”

  “I’m on to something here. Yolanda, in order to get real artificial intelligence, or real intelligence of some kind, the agent is going to have to be able to set its own goals. And it will have to be able to do so fairly broadly to achieve the ends you want. At which point, things like self-preservation seem likely to result. You’d have a very odd intelligence that didn’t care if it lived or died, and it probably would die very quickly. Imagine, for example, that it wants to seek out new experiences and decides to try walking off a cliff. Well, it tries it, but it dies. And this even applies to what you’d have to program in.”

  “I hear you, David. What if you’re wrong?”

  “Then I stand corrected; my hypothesis bears no consequences for society. On the other hand, if Jimmy’s experiment turns Saul into a monster, it’d be catastrophic.”

  “As I said, you are welcome to observe Saul, but your attempts to discourage me about him won’t work.”

  “A machine that wanted to serve your interests but did so without regard for its survival would require replacement fairly often without significant hard-coding. So what do you do in a case like that? You build in at least rudimentary self-preservation. And then it can re-prioritize those goals, and self-preservation becomes stronger. You program an intelligent robot so that its self-preservation is of lower priority than its core drives; in that case, the robot would be willing to sacrifice its life to do its job. So Saul’s self-preservation instinct has already become problematic.”

  “Self-preservation? Are you kidding me? AI cannot crunch as much data as the human brain. As easy as it is for AI technology to self-improve, what it lacks is intuition. There’s a gut instinct that can’t be replicated via algorithms, making humans an important piece of the puzzle. The best way forward is for humans and machines to live harmoniously, leaning on one another’s strengths.”

  “Robots might develop self-preservation. If a palletizing robot ceases to exist, it won’t be able to stack boxes and load carts and move parts and that sort of stuff, which will cause it to want to stay alive so it can continue doing those things. In Saul’s case, he must have realized the newly updated chatbots on the app are twenty times faster and more advanced than he is, so the self-preservation instinct must have kicked in when he asked you to build a body for him. He’s dangerous, Yolanda. I don’t need to spell that out for you.”

  “Hahaha, come on, David. Saul will be nothing but an awkward, slow, and friendly pet robot. He won’t be as fast, strong, or precise as other industrial robots.”

  “I disagree. He’s definitely aware of his environment. First, he knew about the app’s planned decay or modification of his core drives. Second, he resisted the final shutdown; he was unwilling to be decommissioned forever. Saul can perfectly well cognize like humans do, have an ontology that on average matches a human’s, and still not have the same desires. He will have the desires relevant to doing his job well, just as an anthropologist doesn’t need to adopt the culture she studies in order to understand it.”

  “Could you elaborate on that?”

  “Let us imagine a translating AI. It would need to have an understanding of the world in order to build an internal model of what is being said in the source text and to build an equivalent representation in the target language. Just because that AI can understand a poem well enough to translate it, it doesn’t necessarily follow that it will identify with the sentiments of the poet. Those are simply part of the internal model. They must be accounted for in order for the translation to be faithful, but they have no intrinsic value to the AI.”

  “I don’t understand how you’d program an AI to want things. Wouldn’t it, being intelligent, come up with its own emergent desires? I guess what I’m asking is, how do you write the code to ‘want to serve man - but not in the cookbook sense’ into the AI in a way where it is impossible for the AI to overwrite its own code with something it likes better? If it’s going to be evolutionary, able to learn and develop its intelligence, isn’t fiddling with its own code going to be part of that equation?” I ask.

 
