Dangerous Brains


by Erik Hamre


  “Disappeared?”

  “Yes, whatever it was disappeared without a trace. Something breached more than half of all government computer systems. It snooped around on sixty percent of every American company’s servers. It may even have had a look at your home computer if it was logged on at the time. At this stage we can safely assume that almost every computer in America has been compromised, and we have no idea why.”

  “I still don’t understand what this has to do with me or Neuralgo.”

  “Your boss Kevorkian disappeared a week ago. Two days after his disappearance it was discovered that he had been cooking the books. That he had been siphoning off hundreds of millions of dollars over the last twelve months, that Neuralgo is flat broke.”

  “I am aware of that. I am one of those suckers who have lost everything. My life savings were tied up in Neuralgo stock.”

  “Then you might be interested in having a chat with your old boss?”

  “You’ve got him here? You’ve got Andrew here?”

  Kraut nodded. “The Las Vegas Police Department picked him up this morning. He didn’t even attempt to hide. He was in the Crown Casino – betting, putting hundreds of thousands on the roulette wheel.”

  “That doesn’t make any sense. Kevorkian isn’t a gambler. He hardly even plays poker.”

  “Just before the police apprehended him, he managed to pull out a phone and send off a text. The time of the text coincides with the first cyber-attack. We believe Kevorkian is the one responsible. We believe he triggered the attack, and we believe you and your team may have unwittingly enabled him to do so.”

  Vladimir wiped his left hand across his face. “I’m sorry, but none of this makes any sense, Kraut. First of all: You claim that there has been a singularity event – that means you believe that someone has created an artificial intelligence with the smartness of a human brain. All we did at Neuralgo was to copy Kevorkian’s brain, neuron for neuron. That’s not the same as creating intelligence. We are decades away from understanding the complex interactions of the neurons inside the human brain.”

  “That may be. But it seems Kevorkian has somehow managed to do exactly that. And he has most likely spent the several hundred millions he stole from Neuralgo to achieve it.”

  “No.” Vladimir was shaking his head. “I don’t believe you.”

  “You don’t have to, Vladimir. You can talk to him yourself.”

  6

  1st of June 2015

  DARPA’s remote Listening Station No 3

  The Nevada Desert

  DAY 1:

  0800 Hours

  Vladimir stared at the person sitting opposite him. Andrew Kevorkian was seated on a plastic chair. His shirt was crumpled. He had no shoes on. It looked like he had a black eye. He was squinting and rubbing his left temple.

  At first Vladimir hadn’t wanted to believe what Kraut had told him. But Kraut had made a compelling case. And he had quickly made Vladimir doubt his own boss, his best friend. When Vladimir entered the interrogation room he was fuming with anger and riddled with disappointment. He felt betrayed, cheated, ridiculed.

  “Is it true, Andrew?”

  Andrew Kevorkian smiled softly as he shrugged his shoulders. Vladimir knew exactly what the gesture meant.

  “Why?” Vladimir asked.

  “It’s complicated.”

  “Complicated? If you have managed to create an artificial intelligence, with greater intelligence than a human being, then you have put us all at risk.”

  “The jury is still out on that matter,” Kevorkian replied.

  “You’ve gotta be kidding me, Andrew. You need to tell me how to shut this thing down.”

  Andrew Kevorkian just smiled. Vladimir recognised the smile. He had seen it a million times. It was the smile Kevorkian usually put on the moment before he would verbally abuse an employee or lecture someone about how things were actually supposed to be done.

  Vladimir had always liked Kevorkian, it was hard not to. ‘The guy is like George Clooney on steroids,’ one of the investors would later say. ‘He is completely magnetic, he drags you in, and you can’t avoid being spellbound by his infectious passion. He is the only person I know that can swear at you for ten minutes and you would still take a bullet for him.’

  “Please tell me, Andrew. Let’s sort this out before it gets out of hand.”

  “I can’t, Vlad. I honestly can’t.”

  “Won’t or can’t?”

  “Both. I know what I’ve done.”

  “Do you? I don’t think so. We’ve been through this before. It’s impossible to know how an artificial intelligence will evolve once it is set free.”

  “You assume it doesn’t have any moral code.”

  “Moral code? If Kraut is correct, if you really have succeeded in creating the first Artificial General Intelligence, then whatever moral code you have programmed into it won’t matter. It is a machine, and it will act as a machine, it will be single-minded in the pursuit of its goal, whatever that is.”

  “Some people say that about me.”

  “This isn’t a damn Asperger’s Syndrome we are talking about, Andrew. If this artificial intelligence hasn’t been programmed properly it could potentially put the entire human race at risk.”

  Kevorkian shrugged his shoulders. “Not much I can do about that now, is there? I let it loose. It’s out there now. I can’t stop it any more than you can.”

  “I know you, Andrew. You never leave anything to chance. You would have programmed it. You wouldn’t just have given it a set of rules and instructions. You would have built in a failsafe.”

  Kevorkian just smiled.

  “At least tell me what its instructions are. If we know what its goal is then we can better assess if it’s dangerous or not.”

  “Are you starting to doubt yourself, Vlad? You always claimed an Artificial Super Intelligence didn’t have to be dangerous to humans. That we could make it friendly if we wanted to.”

  “I still believe that. But I don’t know what you’ve done. I haven’t seen the code you have written. I haven’t seen what instructions you have given it.”

  “I gave it a very specific assignment. Once that assignment is completed, you won’t see or hear from it ever again.”

  “For God’s sake, Andrew. Tell me what the assignment is. If you have given it a specific assignment then it will not stop until it is solved. It will consume all the resources it can get its hands on, and it will view everyone who attempts to stop it as a threat.”

  “That’s correct, Vladimir. So please explain to the suits behind the mirror that I would strongly advise against attempting to stop it. Or should I say stop me?”

  “Do you think this is funny? Whatever you have uploaded, Andrew - it is not you anymore. We created a copy of your brain, of all your neurological connections, but we didn’t make a copy of your personality. We didn’t make a copy of your soul.”

  “You assume I have a soul.”

  “Don’t you?”

  “I don’t know, Vladimir. The jury is still out on that one too. It would be sad though. All those years of hard work. All the sweat and tears. And then it turns out I forgot the last ingredient; I forgot to add 21 grams of soul.”

  “You’re an asshole, Andrew.”

  “I’ve always been an asshole, Vlad. You just haven’t cared until now. I’ve paid you too generously.”

  “It was never about the money. I liked working for you. I believed in your visions, your ideas.”

  Kevorkian laughed. “Visions? This country stopped having visions several decades ago. All people want to do these days is to invent the new Facebook, or another stupid app that makes them rich. Soon the only factory left in the US will be the Cheesecake Factory.”

  Vladimir didn’t laugh. He was used to Kevorkian making inappropriate jokes, at inappropriate times. But this was a new low. “We were supposed to be different, Andrew. We were supposed to change the world.”

  “This will change the world. My world.”

  “Your world? Is that your motive? Being a billionaire isn’t enough? You have to become an immortal, the next step in evolution, a God? You know that will never happen though. We can’t let an Artificial Super Intelligence develop. It doesn’t matter if you have programmed it to be friendly or not. At some stage it will be too powerful. And then, we, all of us, the entire human race, could be wiped out in an instant.”

  Kevorkian just smiled. The smile that so many people outside and inside Neuralgo had found so obscenely arrogant, the smile that signalled to everybody around him that they were inferior to him.

  “What’s the time?” Kevorkian asked.

  “Don’t tell him. It may have some relevance for what he is planning,” a voice instructed in Vladimir’s ear. It was the voice of Ronald Kraut. He had provided Vladimir with an earpiece so that he could communicate directly during the interview.

  “It doesn’t matter what the time is,” Vladimir responded.

  “Oh, but it does, Vlad. Time is not what we’ve thought it was,” Kevorkian said with his arrogant smile.

  7

  1st of June 2015

  DARPA’s remote Listening Station No 3

  The Nevada Desert

  DAY 1:

  0830 Hours

  “There’s no point. He won’t change his mind,” Vladimir explained to Ronald Kraut. “I’ve worked with Andrew for the last twelve years. He is brilliant, but he can also be incredibly stubborn.”

  “We’ll break him. The only question is how quickly we can do it. If the artificial intelligence is evolving, then time is of the essence. My worry is that he doesn’t actually know much. I fear that he recklessly released this thing without considering the consequences.”

  “That’s not the Kevorkian I know. There is always a meaning behind whatever he does. He’s driven by logic. Now, that logic might not always be clear to outsiders, but it is always logic to Kevorkian. And that means it’s probably correct in some way.” Vladimir sighed. “So what’s going to happen to him now?”

  “The military will fly him down to Guantanamo Bay in an hour. They’ve got specialists down there. If he knows anything, they will extract it from him.”

  “They’re going to torture him?”

  “Right now he’s a terrorist, Vladimir. He has no rights. That’s why I brought you in. All this can be avoided if we figure out a way to stop this thing. If Kevorkian is the rational man you claim he is, then he has built in a failsafe. An abort mechanism.”

  “That’s the problem. Kevorkian is correct. Even if there is an abort mechanism, we can’t use it.”

  “Because the artificial intelligence will consider it a threat?”

  “Yes, if the program he created has reached self-awareness, then it will protect itself at all cost. It won’t let anyone destroy it. The moment we attempt to turn it off – it will fight back. I know you already know this. You brought me here in an old chopper. A chopper stripped of anything connected to the internet. This room, this entire room – there is nothing here connected to the internet. I’ve read your books. You’re afraid it is already listening in on us.”

  Ronald Kraut nodded.

  “So what’s the protocol? I’m guessing you, DARPA’s leading expert on artificial intelligence, have already considered this moment occurring at some stage in the future.”

  “I have. Just didn’t expect it to happen for another couple of decades.”

  Vladimir snorted a laugh. “What I don’t understand is how you can be so certain we are really dealing with an artificial intelligence on par with human intelligence. I can appreciate that whatever manages to breach sixty percent of our country’s computer systems is probably quite clever, but it doesn’t necessarily mean it has achieved human intelligence levels. It could just be very good at breaching firewalls, and only that.”

  “This event didn’t come as a total surprise, Vladimir. I have been preparing for this day since the President attended a lecture at Harvard University two years ago. A student in the audience spontaneously asked the President how he would respond if the SETI institute at one point picked up a message from outer space, and that message conveyed that a superior alien species was on its way to Earth and would be arriving within a couple of decades. The President gave a gripping response, outlining how we had to prepare for the worst, while still hoping for the best. He explained how humans historically had treated species we considered below us on the food chain, and he retold the atrocities that the Spaniards had committed when conquering the Americas. I was present at that lecture, and I must admit the President made a compelling case when he explained that we would probably be wiped out if we didn’t prepare properly for such an event. Then the student asked how the President had prepared for the moment we invented an artificial intelligence smarter than ourselves. The entire auditorium went dead silent. I believe every single person in that room believed that the first Artificial Super Intelligence was probably just a few decades away, but no one had actually considered what would happen the moment it arrived. Neither had the President, it appeared. He was left speechless. For the first time in his life, I believe. Later that afternoon I was engaged to head up a committee tasked with developing early detection systems and responses to the first singularity event. My team devised a set of twenty traps for the first AI to come around snooping in our systems. All but one went off this morning.”

  “You set up traps?”

  “Turing tests. Turing tripwires. The artificial intelligence managed to convince all but one of our Turing tripwires that it was a human being this morning. It passed the Turing tests with flying colours. It is an Artificial General Intelligence. It is equal to a human brain.”

  “But wouldn’t a truly intelligent AI be smart enough to never reveal how bright it really was?”

  Kraut smiled. “You would assume so. That’s why we rigged the systems with tripwires. The artificial intelligence had no clue it had been exposed until it was too late. And by then it didn’t really matter anymore. Then it was all out in the open. It had revealed its true identity.”

  “So what responses did you suggest to the President?”

  Kraut sighed as he pulled his hand through his hair. “You have to understand that this is uncharted territory. When we designed our set of responses we had to view the advent of the first Artificial General Intelligence as humanity’s biggest existential threat ever. Every other threat would by definition pale in comparison. They would become irrelevant.”

  Vladimir nodded. He knew what Kraut meant. But he was visibly nervous for what the actual answer was. “So what did you propose?”

  “We identified two trigger points. Once a runaway artificial intelligence reaches human intelligence levels, Artificial General Intelligence, then the risk becomes imminent. It’s still manageable though. We have a range of responses we can use to track it down and destroy it. And that’s where we are right now.”

  “But if it senses that you are attempting to destroy it, then it will automatically become an enemy.”

  “Correct. So we can only attempt to destroy it once we are one hundred percent certain that we will destroy all of it. That we know the exact location of all its copies and code.”

  Vladimir nodded. “And you’re not there yet.”

  Ronald Kraut smiled. “Not by a mile. We haven’t been able to find any trace of it. But we are tracking it down as we speak. We will eventually find it.”

  “And the second trigger point?”

  “The second trigger point is when the AI begins its journey of improving itself exponentially, when it has some sort of intelligence explosion where its intelligence becomes unrecognisable to humans.”

  Vladimir knew exactly what Ronald Kraut was talking about. The intelligence explosion was the argument most AI sceptics used when they claimed we could never produce friendly AI. If an artificial intelligence could double its intelligence every so often, it would soon become as unrecognisable from its origin as we were from the first single-celled organisms we had evolved from.

  “And what’s our response when the AI hits trigger point two?” Vladimir asked.

  “There is no response to trigger point two. When the artificial intelligence reaches trigger point two – we pray.”

  Vladimir stared at Kraut. He wasn’t sure whether he was kidding or not.

  8

  1st of June 2015

  DARPA’s remote Listening Station No 3

  The Nevada Desert

  DAY 1:

  1000 Hours

  Vladimir was sitting in a sparsely decorated office. There were no windows, and it was steaming hot. He was reading Protocol Cronus, the protocol that the US Government had decided should be implemented should an Artificial General Intelligence ever be detected. The definition of an Artificial General Intelligence was pretty straightforward: it was a hypothetical machine capable of performing any intellectual task that a human being could, just as well as the human being. It was then generally believed that the first Artificial General Intelligence would eventually evolve into an Artificial Super Intelligence; a hypothetical machine with an intellectual capacity many times greater than any living human being. The author of Protocol Cronus, Ronald Kraut, had considered how humans’ intelligence levels had evolved over hundreds of thousands of years to get to the point where they were today. The human brain was one of nature’s true miracles. It was a fantastic piece of machinery. But it was also quite slow. And it evolved extremely slowly. An Artificial Super Intelligence though, would most likely be able to evolve extremely rapidly. And that was the true danger. It didn’t matter whether humans succeeded in creating friendly artificial intelligence or not. Once the AI started improving itself, once it started reprogramming itself, we would eventually lose control over it. It was inevitable. It wasn’t necessary for the artificial intelligence to hate humans, or to want to hurt us. Humans in general didn’t hate ants, yet most of us couldn’t care less if we accidentally squashed a few driving to work in the morning. Ants were considered unimportant.
