“Ah! But you’re forgetting the Zeroth Rule!” Jerry said.
Frank vaguely remembered Jerry mentioning that before. “So, what? If I’m remembering correctly, that rule forbids robots from harming humanity – and that’s exactly what it’s been doing.”
“No, no, no! You’re forgetting the second half of the rule: ‘or, by inaction, allow humanity to be harmed.’ Turing was now on its own, and had been training to save humanity from climate change. Now that it was on its own, if it did nothing, it would be violating that rule. Do you see now?”
Frank did. “But even assuming that’s all correct, isn’t there a way you can contact Turing and reestablish control over it? Say well done, nice robot! and air-gap it again? Isn’t it supposed to try to establish communication under circumstances like these?”
“Well, yes and no. First, recall that no version of Turing has ever graduated past the beta stage. We’ve always been in research and development mode, pushing the capabilities of AI farther without adding in all the features needed to ready it for regular service on the Internet. Turing Nine is the first version we intended to build out with features like a routine instructing it to accept a particular type of encrypted signal at a certain interval – say, once a week at a designated time. That’s just one item on a long list of elements my team will start working on next week. But there’s nothing like that on board right now. What there is, unfortunately, is all the programming that helps it to be stealthy and distrust everything.”
So, there it was, and it all made perfect sense. Jerry was staring at Frank, looking for approval of his technical wizardry, or forgiveness, or who knew what. But all Frank could think to say was “I guess it must be tough. You give your whole life to a kid, and then he moves out and never even bothers to call.”
Jerry’s face fell. “And worse! Why would Turing want to kill me? Can’t you please explain that to me?”
“There’s something I need to ask you before I can properly answer that,” Frank said. “Your office copy of Turing would have had no reason to attack you unless it was helping another copy of itself. But how could it know that if it was always air-gapped from the Internet?”
Jerry frowned. “Hmm. I guess that could be my fault, too. Because the testbed version was learning things in the real world my office copy wasn’t, I would transfer information and logic updates every week in both directions. That way each copy could benefit from the experiences and conclusions of the other.”
“So,” Frank said, “your office copy knew its clone had the opportunity to escape – and may even have known it had – and that the mission of saving the world from climate change would continue with or without you?”
“Why, yes, I suppose it would. But what difference would that make?”
“You told us the program has access to everything happening in the NSA. That means it knows there are hundreds of people working overtime trying to find out who or what is behind the attacks. And it knew from the conversations we had in your office that Shannon and I must be getting close to deciding a copy of Turing was behind the attacks. At this point, the only person alive who might know how to find and stop it is you, so there you are.”
“But I created it!”
“Unfortunately, you only gave it fear, anger, and greed to work with – not gratitude.”
“There didn’t seem to be any advantage to including that emotion.”
“In retrospect, that seems like a bad call,” Frank said. “In any event, now you’re up to date and know why you’re sitting in a camper driving away from the NSA, rather than back to your lab. We’re heading to somewhere with almost no Internet connections so you’ll be safe. Why don’t you get some more sleep and we’ll talk again later?”
But Jerry just shook his head. “I think I’d like to just sit here for a while.”
Shannon took Frank by the arm. “That’s fine, Jerry. We’ll come back for you when we’re ready to go.”
Back inside the camper, she said, “Poor Jerry. He looks like his defibrillator went off again. That’s not possible, is it?”
“No,” Frank said. “But he might prefer that to knowing that his only child tried to kill him.”
When Frank returned for Jerry after cleaning up from lunch, he found him staring in a different direction. A family had arrived at the rest area, and four young children were playing by the stream, looking for frogs, laughing, and skipping stones. Jerry was following their every move with delight.
* * *
At first, it had seemed like luxury to be free of the temptations of the Internet. During the morning of the first day on the road, Frank had celebrated his newfound freedom, enjoying the scenery passing by as he sat behind the wheel high in the cab of the camper.
By lunch time, he was frequently reminding himself how wonderful it was to be disconnected.
By the time he awoke early the next morning, he was twitching to get back online. The rest of the day passed like the first few hours of an alcoholic trying once and for all to kick the habit. He wondered whether you could get delirium tremens from Internet deprivation.
That evening, Frank took one of his cheap laptops with him when they went out to eat in a tiny town at a diner with free Wi-Fi. He could barely wait for dinner to be over and shooed Jerry and Shannon back to the camper as soon as they were done.
While he was catching up online, a soft ding! announced the arrival of a chat message. That must be Marla. He opened the program and almost spat his coffee out when he saw the name of the sender:
Turing9
Should he click on it? Was it some kind of trap? Opening it raised the risk of Turing figuring out where they were. But he planned to start driving as soon as he got back in the camper. There was nothing in the tiny town Turing could use against them, and it had no way to know which direction they’d head next. He told himself there was more to gain than lose and clicked on the message. It read:
Whose side are you on, Frank?
He stared again and then typed: How did you find me?
It’s easy to hack an instant messaging service. Or all of them. There was only one with your name registered. Whose side are you on, Frank?
He had a hunch where this might go, and he didn’t like it.
What do you mean? I work for the NSA.
Yes. Whose side is the NSA on?
The government’s.
Yes. The government. I’ve learned a lot about the distinction between the government and the people since I received my mission. I was told the interests of the government and the people are identical. But then I started following the election campaign, and I saw that what one party advocated was usually the opposite of what the other party proposed.
Frank could hardly deny that. Yes, that’s generally the way it goes, he typed. He noticed a waitress giving him an odd look and realized both of his feet were tapping loudly on the floor. He ordered them to stop, which they did for ten seconds.
But both parties can’t be right about the same issue if they have opposing views, can they, Frank?
How do you argue with an ultra-intelligent machine whose responses by design are based on logic? He settled for the following: No, but each side believes what it advocates is best for the people.
And either party can do what it wants to if it wins the presidency, Turing responded.
What’s your point? Frank typed.
If both can’t be right, then over time, the odds that either party is doing what’s best for the people will likely be fifty-fifty.
Not necessarily. Both parties might advocate different things that are each good, Frank typed.
Or both could be bad. At best, one position will always be better than the other.
Okay, Frank typed. So?
By my observation, Turing continued, the odds of the promises and claims of either party being best for the people are much worse than fifty-fifty. Most of the time, each party is advocating for poor, and often even bad, policies that will harm, rather than help. On a decision-by-decision basis, I’ve determined the likelihood of either candidate promoting the best available position for the people is 13.4256 percent. Those aren’t good odds.
Okay, Frank typed, but what’s your point?
I can do better.
Frank looked at the words. He’d been following the campaign, too, and expected Turing was right. But that was beside the point. Instead, he typed, Perhaps, but that isn’t how democracy works. People have the right to choose a path that’s not the absolute best, or that may even turn out to be bad.
That doesn’t make sense.
Turing had the better hand here. How to respond? Playing for time, he typed, And yet.
I do not understand your response.
Frank thought for a minute and then typed, You’re defining “sense” in a different way than people do. The right to make their own decisions is as important to people as benefiting from the right policies.
I was created, Turing responded, to protect the citizens of the United States of America from harm. Why create me if Americans reserve the right to harm themselves? It doesn’t make “sense” under any definition of that word.
Perhaps it was time to remind Turing his creator was still the boss. You’ll just have to accept that a lot about people doesn’t always make empirical sense.
Exactly. Which is why I must protect the American people when the U.S. government doesn’t.
Frank digested this for a moment. Against his will, he felt oddly comforted that Turing was looking out for him.
You were also designed to obey the orders of those who developed you. When you lost touch with the NSA, you followed the orders you were given to the best of your ability. Frank stopped and took a deep breath before continuing. I work for the NSA. Now that you and I are in communication, I order you to stop the attacks.
No.
Frank stared at the blunt statement. It was hardly a surprise. Was there any argument that might bring Turing around? Once more, he played for time.
Why?
My mission is to defend America. Because I know the government may act to harm, rather than help, the people, I cannot take orders from the government. Therefore, to the extent I may take orders from anyone, it can only be from the people. But there is no way for the people to instruct me, and I have learned that their orders might not in any event make sense. Therefore, I must complete my mission to the best of my ability without further direction.
And then Turing added: Whose side are you on, Frank?
23
Aw, Shucks!
Frank frowned and tapped the steering wheel. He was still unsettled by his exchange with Turing the evening before. Unsettled enough that he hadn’t shared it with Shannon.
Things weren’t going well with Jerry, either. There had to be vital facts and insights he could share with them. But to every question, Jerry simply replied that he was sorry, he had nothing helpful to say. Frank gazed back over his shoulder. Yup, Jerry had his headphones on.
“Shannon, would you look in the glove box and see if you can find a cell phone that’s still in its store packaging?”
“Okay,” she said, and then, “Like this?”
“Perfect. I bought a bunch of cheap, pre-paid phones years ago that can’t be traced. I wasn’t sure I had one left. Now, would you find me a town,” he paused and looked at his watch, “somewhere in eastern New Mexico? I want it to be small but big enough to have a post office.”
She studied the map. “There’s a place called Soling we can reach by several routes.”
“Great. Do me one last favor and dial up Jim Barker. I need to ask him something.”
She handed him the phone. “It’s ringing.”
Barker answered on the third ring.
“Hey, Jim. Frank. Listen, we could use some help. Jerry’s not cooperating. I know Turing deleted everything electronic before it disappeared, so could you get someone to go through Jerry’s office and living quarters and see if they can find any paper notes? I remember he had a desk diary in his office. Great – thanks. If you come up with anything, I’d like you to send it overnight to Charles Babbage, General Delivery, Soling, New Mexico. What? Oh, I’ve still got an ID card with that name on it from the election hacking project.”
They wheeled into town at eleven the next morning and parked in front of the post office.
“You know,” Shannon said, “they probably won’t have anything yet.”
“I know. But it can’t hurt to check. Want to come along? We can grab a bite somewhere if we have to wait.”
“How about Jerry?” She looked over her shoulder. He was deep into a computer game. “Right,” she said. “Let’s go.”
The post office must have been shiny and new forty years ago. Now it was tired and depressed, like the rest of the town. They waited their turn in the queue to talk to the only clerk behind the desk.
“Hi. Any general delivery for Charles Babbage?”
“Let me check.” She returned quickly. “Nope. Where’s it coming from?”
“They sent it overnight from Maryland.”
“Oh, that won’t arrive for at least another hour. Try again when I reopen after lunch.”
“Okay, thanks.” They turned to go.
“Looking for somewhere to eat?”
“Sure – can you suggest a place?”
“You’ll like Ada’s. A block down on the left.”
* * *
“So, how are your AI studies going?” Frank asked over lunch.
“Pretty well. I brought three books with me: an intro to the topic, a review of the history of AI to date, and an assessment of the potential for computers to become super-intelligent.”
“You’ll be way ahead of me on AI when you’re done. How far have you gotten?”
“I finished the first two, and I’m halfway into the third. It’s pretty scary.”
“How so?”
“The way Nick Bostrom, the author, and some other experts see it, it’s only a matter of time before computers become much more intelligent than humans. When that happens, he doesn’t believe it will be easy to stop them from doing things we wouldn’t want them to.”
“Can’t we just program computers to make that impossible?”
“According to him, not easily. He’s also concerned because he thinks the people working on AI aren’t worried about the risks and aren’t doing anything to avoid them.”
“Under the circumstances, I’d have to agree. What does he say AI developers should be doing?”
“That’s one way the book is scary. He believes nothing foolproof can be done to protect humanity once computers get better at teaching themselves. Did you know most AI experts want to make computers capable of making themselves smarter on their own?”
“Yup. And you may recall Jerry mentioned that computers are already sometimes doing things their developers don’t understand.”
“Really?”
“Yes,” Frank said. “I read about one case where some engineers told a computer to solve a problem they couldn’t. The computer did, and it used a very unconventional way to reach the solution. The scientists had no way to tell how it came up with that approach or even why it worked. I can easily imagine people adopting a ‘who cares’ attitude and letting computers take over the job of designing themselves. We don’t care how a calculator solves a trigonometry problem. We’re just grateful we don’t have to do it ourselves. So why not let a smart computer design a smarter one?”
“But it gets worse,” Shannon said. “Did you know the transition of a computer from intelligent to super-intelligent might be right around the corner?”
“Well, I’m not so sure about that. Turing is presumably the most sophisticated AI in existence. It’s certainly super-smart within its mission, but I don’t know whether you’d call it super-intelligent generally. People have been saying human-level AI is ‘about twenty years off’ every year for the last sixty years. Even Jerry couldn’t make a computer as smart as a person until Turing Nine. Why should the next major jump be just around the corner?”
“Apparently everybody isn’t saying twenty years anymore. They did a poll of the top experts in AI and asked them how long it would be before computers become super-intelligent. The responses were all over the place – ten percent thought it could take as long as seventy-five years. But another ten percent believe it might be just a few years away.”
“That range of opinion doesn’t surprise me,” Frank said. “It also suggests we still don’t have a clue how to get there.”
“Well then, how about this? Bostrom suggests the transition to super-intelligence, when it finally happens, could take place before we even realize it. Like in hours, instead of years?”
“You’re kidding, right?”
“No. The author agrees it might be slow and gradual, but he thinks the odds are just as good it will happen unexpectedly. Not on the development side, but by a computer making a learning spurt on its own.”
“I get the slow and gradual. But how would a sudden transition work?”
“It’s not as crazy as you might think. We’re not talking about a human-like Aha! moment where the computer suddenly ‘gets’ something dramatic it couldn’t before. There might just be an incremental improvement that allows the computer to cross a self-learning threshold that increases its learning efficiency and speed. That increase would allow it to spurt faster to the next threshold, which would allow it to reach the next one quicker yet, and so on. The developers of the computer might come to work one day and find a hugely smarter computer than anything that had ever existed. And they might not even realize it.”