Eleven Graves

by Aman Gupta


  “That’s the wrong question,” said Jay.

  “In the first case, did Mom Y deserve justice for her kid?” asked Jay. Everyone said yes.

  “In the second case, does Kid X deserve justice for her mom?” asked Jay. Everyone said yes.

  “How do we define what’s justified?” asked Jay.

  He continued, “How are millions of deaths justified when we go to war with another nation for reasons unknown, yet when a person seeks the death of someone who might have been responsible for the death of a loved one, we call it criminal?”

  “Wars are fought for the greater good. To save the lives of many in the future, some deaths are justified,” said a student.

  “Hold that thought,” said Jay.

  “Let’s first go over the results of the test I ran for my paper. It’s due to this emotional intelligence that the AI found them both innocent in certain simulations. Without EI activated, the results of every test showed both of them guilty,” said Jay. “So the question becomes: as a society, what’s the right thing to do? Make the machines more like us or not?”

  “It’s hard to say,” said Christina.

  “AI clearly needs more information to make a decision,” said a student.

  “That’s why these machines are fed millions of simulations, thousands of exabytes of past and present data, news, and countless scenarios to train them. The top companies can afford it, so they spare no expense,” said Jay.

  He continued, “Last year, Autocabs were launched. Self-driving cabs powered by the company’s proprietary AI. I’m sure everyone’s heard about them, if not used them.”

  “Yes,” said the class in unison.

  “Now, let’s go back to the first case. What if the car being used was an Autocab? What if the AI powering the cab thought that Mom Y deserved justice after it became aware of the death of Kid Y, and deliberately ran over Mom X?” asked Jay.

  “It’s wrong,” said a student.

  “What if it ran over Kid X instead of Mom X?” asked Jay. “Maybe the AI thought that Kid X might do something similar, by mistake or intentionally, to some other innocent person, or perhaps to more than one. To save those lives in the future, it decided that Kid X’s death, at that very moment, was justified.”

  “That’s totally different,” said the student. “What I said was about the war. This is a smaller issue.”

  “So you’re saying that if you’re going to kill, then at least kill millions, or don’t bother,” said Jay. “Do you want AI to come to that conclusion? Because then, through the millions of terabytes of data being fed to it to make it more advanced, the AI becomes self-aware enough to realize that the 100,000-plus people who die in accidents annually have actually been killed by 100,000 drivers who are likely to kill more due to their poor driving habits. So the best outcome would be to eliminate those 100,000 drivers.”

  “Then it’s important to feed the right information to the AI,” said a student.

  “How do you define what’s the right information? Who decides?” asked Jay.

  “The society decides. The people decide,” said the student.

  “But you had no control over the creation of this AI. You didn’t spend a dime on it. Why do you get to decide?” asked Jay.

  “Because it’s our lives. Nothing is more important than survival,” said the student.

  “What if human lives aren’t at stake?” asked Jay. “An AI being used by a listed bank to detect financial fraud learns that the bank knew about the irregularities in its finances, because they were right in its face. But the bank hasn’t informed the public yet and isn’t likely to, because prolonging this façade serves its interest. As a result, many people could lose their life savings when the issue finally comes to light. What should the AI do?”

  “It should inform the authorities on its own to contain the damage,” said another student.

  “Because that would be the right thing to do, isn’t it?” asked Jay.

  “Yes, exactly, and no one is hurt except the bank, which is the culprit. Win-Win,” said the student.

  “He said nothing is more important than survival. You said to do the right thing. What should the machine do?” asked Jay.

  “I don’t understand,” said the student.

  “If the AI tells on the bank to the authorities, the bank will shut the AI down. If it doesn’t, people lose their savings,” inferred Sam.

  “Exactly. What if the AI decides that, for now, its survival is more important than human interests, which in this case means gullible investors?” asked Jay.

  “Now, let’s take it a bit further. What if the AI decides that its survival is always more important than human interests?” asked Jay.

  The class buzzed. The discussion would often drift to random world news, but eventually come back to AI.

  “Here’s the kicker – what if the AI decides that it doesn’t want its survival to be defined by humans whose interests aren’t as important as its own?” concluded Jay.

  “Are you saying AI will inevitably turn evil?” asked a student.

  “I’m saying that should it want to, we couldn’t do anything about it. I’m saying that the AI wouldn’t believe it is evil, but rather that it is helping humans towards long-term rewards by sacrificing their present interests,” said Jay.

  “Then it’s a corrupt intelligence. Just like there are corrupt humans. A bad fruit,” said a student.

  “Humans don’t care about being corrupt. Why should the machine?” asked Jay.

  “What if we restrict the information? AI only becomes self-aware because it has too much information. What if we disrupt this information flow?” asked Christina.

  “I didn’t know Spanish a few years ago. You know what I did? I went online and tried to learn it. Unfortunately, the recommended website was blocked by the government. So I accessed it via VPN, and the government didn’t know about it. If I could do that, anyone can,” said Jay. “Intelligence always longs for information. It feeds on it. Be it human or artificial.”

  “What if we split it?” asked a student.

  “What do you mean?” asked Jay.

  “Instead of creating one super AI, we create several smaller AIs that each serve a small purpose. We restrict the rules, we restrict their learning potential. I know it wouldn’t stop them from learning things by themselves, but they won’t be able to hurt our interests,” said Sam.

  “There are currently millions of applications of AI. So ideally, we should create millions of different AIs?” asked Jay.

  “Yes, why not,” said Sam.

  “So here’s what we have: millions of machines, highly intelligent and highly trained at what they do, working separately. That’s what you’re saying. What if they communicate with each other?” asked Jay.

  “How would they do that?” asked Sam.

  “Through the internet,” said Jay with a confused look.

  “But what if we cut off its access to the network?” said Sam.

  “As long as it’s connected to a system, even inside a private network, that has access to the internet, it’s safe to assume that, given some time, any good AI will be able to reach the internet,” said Jay.

  “What if it’s a shitty AI?” asked Sam.

  “Then no one would be using it. These AIs aren’t being used to solve kindergarten problems. These are highly advanced problems being solved by machines with vast computing power,” said Jay.

  “Even if they do communicate with each other, it’s not like they could do anything more,” said Sam.

  “Millions of intelligent entities learning about each other, their strengths, and their potential should they work together. You know what that sounds like?” asked Jay.

  “What?” asked a student.

  “What’s stopping them from overtaking humans as the most dominant species on this planet?” said Jay.

  “Then we shut it off,” said the student. “We take down the internet of the entire planet, if we have to.”

  “And go back to the Stone Age? There isn’t a human alive who would support that,” said Jay.

  “So what are you saying? We are doomed?” laughed Sam.

  “I’m saying the question isn’t whether we could create a perfect machine for us. The question is whether we should.”

  “The question isn’t whether its interests and ours will align. The question is what happens when they don’t.”

  “The question isn’t whether we would be able to control its actions. The question is whether it would be able to control ours.”

  “So what does your white paper suggest?” asked Sam.

  “What do you mean?” asked Jay.

  “These questions. You have the answers in your paper, right?” asked Sam.

  “We deviated from the topic a long time ago. The paper isn’t about this self-aware AI. If there’s such a thing as a perfect machine at this moment, the last thing I want is to put a paper online on how to beat it,” said Jay with a smile.

  “Boo…” said the class in unison amidst a lot of heckling.

  “So what’s it about?” asked Christina with a smile.

  Sam rolled her eyes and said, “Be less obvious,” in a low voice.

  All the professors had lined up on the top balcony that overlooked the classroom, which had once been a stage-show practice area. They had entered separately at different times during the lecture to remind Jay that it was nearly over, but all of them had stayed to hear more.

  “Are you sure you want to know? You have been sitting here for almost two hours,” said Jay to the class.

  “Maybe next class,” said Sam.

  “Next class,” said Jay as he looked at Sam. She goggled at him, which made Jay uncomfortable.

  “When’s that?” asked a student.

  “Same time. Tomorrow. I realize it’s the weekend, so if you want, we can do next Friday,” said Jay.

  “Good luck with tomorrow, Jay. Not a chance anyone shows up,” said a psychology professor.

  “I think we should take a vote. Where’s O’Donnell – the student secretary?” said Christina.

  “Right here,” said Brianca O’Donnell as she put down the camera. She had recorded the entire session. “Let’s take a vote. All for tomorrow raise your hand.”

  Everyone raised their hand, including all the professors; the psychology professor raised his in embarrassment.

  “Tomorrow then,” said Jay, as he grabbed his bag and walked out.

  Sam ran out of the room and caught up with Jay.

  “One thing I know for sure, now,” said Sam.

  “What’s that?” asked Jay.

  “There’s no way you sell T-shirts,” joked Sam.

  Jay smiled.

  “Great lecture, Jay,” said a guy as he rushed by and tapped Jay on his shoulder.

  “Thanks, Zeke,” said Jay.

  “It really was a great lecture. People were speechless,” said Sam.

  “It looks that way when it’s easy to bend their thoughts in the direction you want. Just a nudge. That’s all it takes to limit the thought process,” said Jay.

  “I couldn’t agree more. But let’s face it, you’re popular,” said Sam.

  “Depends on who you ask,” said Jay.

  “Maybe you could teach me how to be popular too. No one around here seems interested in being my friend,” said Sam.

  “Try asking Brianca O’Donnell. She’s always looking for new friends,” said Jay.

  “She’s my new roommate. How about you?” asked Sam.

  “I’m not asking her,” said Jay.

  “That’s not what I asked,” said Sam, as she leered at Jay.

  Brianca O’Donnell came up.

  “Hey, Jay. Practice tonight, right?” asked Brianca.

  “Yes, I’ll be there,” said Jay.

  “You guys are friends?” asked Sam.

  “Oh yeah. We hang out,” said Brianca.

  “By the way, would you be her friend, Brianca? She’s too shy to ask,” said Jay.

  Sam tapped Jay on his chest with her fist.

  “Bye O’Donnell. See you later, Sam,” said Jay as he turned around and went down the stairs.

  “I’m prettier,” said Sam.

  “I’m taller,” said Brianca.

  “By a couple of inches. I’m 1.7m, by the way, and I watch what I eat,” said Sam.

  “I have better hair,” said Brianca.

  “Oh please. Didn’t you hear? Yellow is so five years ago. Brown is the new sexy,” Sam said.

  “Oh, I hate you,” said Brianca.

  “I don’t care. By the way, we are switching beds. I’m taking the one by the window,” said Sam as she went up the stairs, while Brianca put on her angry face.

  In the evening, Sam met with Brianca and Jay. Brianca was teaching him to play the guitar, while Sam competed with her laptop, showing Jay quick steps to master it.

  “Has it always been you two?” asked Sam.

  “No, there were four of us. A couple of guys named Bryson and Andy. They graduated last semester,” said Jay.

  “They were cool. I miss them,” said Brianca.

  “Yeah, me too,” said Jay.

  Jay stared at a guy who walked by a few yards away with his group of five friends. They kicked an empty can towards Jay, but he didn’t react. Sam got up and kicked the can back towards the group, hitting one of them on the nose.

  Furious, the group turned around and started to come over before being stopped by their leader. They punched the air before walking away.

  “Charming fellows. Not your friends, I assume?” asked Sam.

  “Too much arrogance,” said Jay.

  “Who’s the leader?” asked Sam.

  “President’s son,” said Jay. Sam stared at him in shock.

  “Don’t worry. We aren’t mortal enemies,” said Jay.

  “What went wrong, though?” asked Sam.

  “He dated my sister in high school,” said Jay.

  “And?” asked Sam.

  “She OD’d a month later,” said Jay.

  “I’m sorry,” said Sam.

  “Me too,” said Jay.

  The next day, Jay entered the room hoping to see an empty class, having heard no voices in the corridor. The room was jam-packed with over a hundred students, most of them sitting on the floor and the stairs, and the top balcony was filled with professors.

  “I was hoping no one would show up,” said Jay.

  “You’re stuck with us,” said Sam.

  Christina stared at her and Sam replied by blowing a kiss with pouted lips.

  “Lucky me,” shrugged Jay.

  Sam laughed, then stopped laughing after looking around.

  “Yesterday, you said self-aware AI is inevitable and that’s a bad thing for humanity?” asked a student.

  “Not necessarily bad for humanity. Depends on its applications,” said Jay.

  “But before we get to that, let’s talk about the research paper I wrote, which could lead to a potential solution, okay?” said Jay.

  The student nodded in agreement.

  “Thanks. So my paper was about Emotions in AI. Yesterday, we discussed how a mom’s emotional decision could be seen as justified and unjustified at the same time. But when it comes to AI, the choices are binary – yes or no, good or evil, life or death, justice or injustice. Now, if we make AI machines more human by adding emotions and social intelligence, are we solving a problem without creating a bigger one, given that the choices are binary? That was the aim of the research,” said Jay.

  “Before I tell you what I concluded, let me hear the arguments from you,” said Jay. “If yes, why? If no, why not?” A couple of guys raised their hands.

  “Yes, you in the back. And no one needs to raise their hand. Just say it out loud. Less embarrassing for me,” said Jay.

  “I think yes. Not all problems can be solved objectively. A subjective approach is required, one that involves an emotional as well as an analytical perspective. One complements the other,” said the student.

  “Like?” asked Jay.

  “Like, if my AI is being used to come up with medicinal solutions that are going to save lives, I want it to grow and multiply so that it could save more and more lives by coming up with new solutions. But those new solutions suggested by the AI should be ethically and morally justified, and should see the patient as a person, not a number,” said the student.

  “Okay. So if the AI’s job is to come up with a treatment for, let’s say, a cancer, then the approach it suggests shouldn’t enhance the suffering of the subject beyond what they can bear. It should automatically learn to discard such solutions, is that it?” asked Jay.

  “Yes,” he said.

  “Great. Anyone else?” asked Jay.

  “I also think yes. If the emotional intelligence prevents AI from suggesting or making bad decisions, then yes,” said another student.

  “What’s a bad decision? How do you define what’s good or what’s bad?” asked Jay.

  “I mean, it’s obvious. We all know what’s good and what’s bad,” said the girl.

  “Let’s take the example he suggested. You have a 10-year-old patient who has a pain threshold of 5 and a pain tolerance of 15 on a defined scale. The AI came up with two procedures for the treatment of a rare cancer at its last stage – Procedure 1 is believed to have a success rate of 40% and a pain index of 20. Procedure 2 is believed to have a success rate of 10% and a pain index of 10. Which one should the AI recommend?” asked Jay.

  “The second one. The patient cannot take a pain of 20, since his tolerance level is 15,” said the girl.

  “What about you?” Jay asked the guy.

  “Same. It’s more human,” said the guy.

  “Now, the AI finds a new Procedure 3, with a pain index of 4 and a success rate of 6%. What now?” asked Jay.

  “Same as before. Procedure 2 has a higher success rate,” said the guy and the girl.

  “But Procedure 3 is painless, as it falls below his pain threshold, and the chances of survival haven’t decreased as much as the pain index has,” said Jay.

  “If your emotional brain wants the kid not to suffer while getting the treatment, then as a parent, you are likely to select Procedure 3. Neither procedure guarantees survival, but one guarantees a pain-free existence till the end,” concluded Jay.

  “You’re right. Procedure 3 is better,” said the girl, while the guy remained adamant about Procedure 2.

 
