“Okay – thanks,” Frank said. “If you do, remember you’ll have to contact him the same way we brought you into the loop. Otherwise, Turing may get wind of it. If that happens, it would know the information we release is bogus and wouldn’t react.”
“Are you suggesting I invite the director out for a drink in a karaoke bar? I don’t know whether I can even get an appointment with him in his office.”
“Okay, not a karaoke bar, but you get the idea. If anyone slips up, the next time Jerry updates the testbed version, it will carry that knowledge along.”
“Well, you can be sure we’re not going to let Jerry update the testbed again.”
“But to run the test, we have to. That’s the second step, remember?”
Barker looked pained. “Look. I can’t figure everything out right here. And especially not with that racket going on. Give me twenty-four hours and I’ll get back to you. And this time, we’ll take a walk in the park.”
* * *
Two days later, Frank met Jim for a stroll outside the NSA headquarters. They were joined by Harold Bromfield, NSA assistant director. His boss had approved the tests.
“When and how do you propose to proceed?” Bromfield asked.
“I think the best way to start would be to put together a briefing document for wide internal circulation, as well as presentation at a meeting. Turing couldn’t fail to detect that.”
“On what topic?”
“For the first stage of the test, I’m thinking we’ll issue an update on how many new coal-fired power plants China is bringing online. There’s already a detailed NSA analysis on that topic. We’ll revise that document to include data from a new source. A couple weeks later, after the test is complete, we’ll release a correction, saying the source had been discredited, and restore the document to its original state.”
“How bad do you think the resulting attacks might be?”
“We’ve collected enough data using the predictive model to know how much impact it takes to provoke an attack. That means we won’t need to come up with anything likely to inspire a major event. But we can’t be too conservative, or we may not cause an attack at all.”
“I understand. How soon can you move forward? With the election coming up, the president’s under enormous pressure to halt these attacks.”
Well, that explained why the director had given his approval so quickly. What was one more attack compared to an election? “The analyst I’ve been working with, Shannon Doyle, has already drafted the document update for the first test. Today’s Thursday, so we can release that today. Jim, how soon can you call a team meeting to present it?”
“How about tomorrow afternoon?”
“Tomorrow sounds good. I’ll have the document to you within the hour.”
17
What? Me Worry?
Frank felt like a lab rat deprived of the sugar water he’d been trained to expect when he pushed the bar in his cage. Here it was the following Thursday, and no matter how many times Frank refreshed his browser, there was no word of a new attack.
Jim had circulated the falsely updated NSA document and held the meeting right on schedule. No attack had occurred by Wednesday, and that was fine. Frank had already convinced himself a stolen clone of Turing wasn’t behind the attacks. He was betting Jerry’s testbed version was the guilty party. And then there was also the possibility he’d been wasting the NSA’s time and resources. The longer things stayed all quiet on the cyber front, the more he feared that might be the case.
An email alert popped up in the corner of his screen. It was Shannon. “Want some company?”
He tapped his fingers on his knees. He’d love the distraction. But he wouldn’t be able to stop himself from checking the news constantly.
“Thanks,” he typed back. “I would, but I doubt I’ll be much fun to be around. Let’s connect in the morning.”
He refreshed the browser again. Darn. He tried to think about next steps instead. What would he do if neither test produced any results? He’d been so sure an attack would occur after Jerry updated the testbed on Wednesday that he hadn’t bothered to work out what experiments should follow if they were needed. Could he have been off base the whole time? He went back to his decision tree and reanalyzed everything to see where he might have gone off track.
He was still at it, and no wiser, when his intercom buzzed. Who could that be? He pressed the voice switch. “Hello?”
“I happened to be in the neighborhood. Mind if I come upstairs?” It was Shannon.
He paused. No, he wouldn’t mind at all. “Of course not,” he said. A minute later he saw she had not only chanced to be nearby but was carrying a bottle of wine and a takeout dinner for two from a trendy restaurant. She didn’t come right out and say he was still her hero, but her eyes suggested that it was so. He was grateful for that.
* * *
Shannon was still there the next morning when Frank woke up and checked the news. Nothing. He had to admit his initial tests had failed to determine anything at all.
He left a note for Shannon and scuttled downstairs for his run. On the street, he unconsciously inched up to a pace he would later regret.
His review of his original analysis had yielded only one revelation, which was that he was an idiot. Obviously, he’d jumped to a conclusion simply because the available data made it look so appealing. That meant he’d been guilty of the cardinal sin of confusing coincidence with causality. Just because Turing might be capable of launching the attacks didn’t mean that in fact it had. No, he corrected himself. His theory and tests hadn’t even proved that – all he knew was Jerry thought Turing could launch similar attacks if properly programmed to do so. He was truly right back where he started.
He was exhausted when he dragged himself upstairs. Shannon was awake and setting out fresh croissants and jam. She must have brought those, too. One look at her told him she’d already checked the news.
“Welcome back. Coffee?”
“Thanks,” he wheezed. “Give me a little time to cool off on the balcony, okay?”
“Sure thing,” she said, giving him a sympathetic kiss.
He slid open the door and collapsed into the chair. The sun was still beneath the horizon, but it was obvious it would be a beautiful day. Or at least obvious to anyone who hadn’t just fallen flat on his face in front of everyone, including the director of the NSA. And Shannon.
With a flurry of feathers, Julius landed at his elbow for his morning handout.
Frank groaned. “Oh, for Pete’s sake. Can’t you even give a guy a chance to get his breath back first?”
The crow cocked its head to one side and then the other. Maybe it would just go away if he ignored it. But no. Instead, it hopped once sideways along the railing. Then it hopped twice more in the same direction; he wondered what it would do when it reached the corner. When it did, it went through the head-cocking exercise again. Then it tilted its head back and squawked, “Black Hats Suck!” But all to no avail. Now what would it do?
What it did was fly away. He felt guilty and disappointed, watching it disappear. It wasn’t the crow’s fault he was in a foul mood. And for a couple of minutes it had provided a distraction from his morbid thoughts. He stood up and stepped inside.
“How was your run?” Shannon asked, filling the coffee cup by his seat at the table.
“Okay, I guess.”
“Good. Have you figured out what Turing test to run next?”
“How do we even know it’s Turing?” he said, picking up his coffee. “What if it’s a similar program developed by someone else?”
“Well, what if it is? You said from the beginning that was a possibility. And does it really make any difference who actually created it?”
No, it didn’t. Shannon was right. It was true the tests had failed to prove a copy of Turing at the NSA was behind the attacks. But it was equally true they hadn’t proven an escaped copy of Turing, or another program like it, wasn’t. That was good to keep in mind. But he’d failed to come up with any usable test ideas during his run. Inside the NSA, every variable was under his direct control. Outside, nothing was.
Things would be a lot harder now. Whatever else that might mean, he decided, it meant he needed to find out everything possible about Turing from Jerry Steiner.
* * *
Jerry was standing outside his office door talking to one of the engineers on his team when they walked up to him. “Well, hi, Frank – and is it Shannon? Good! I remembered this time!”
“Hi, Jerry. Mind if we come in?”
“In? Oh! In my office! Of course! Please come in.”
They did, and Frank handed him the short letter he’d brought with him. Jerry held it up in front of his face and took his time reading it. When he lowered the letter, he put the index finger of his other hand up to his lips and then disappeared through the door to his private living quarters. When Jerry returned, Frank tried to figure out whether his grin looked forced. He couldn’t tell.
“So! I’ve turned off the microphones, so there’s no way Turing can hear us. Your letter didn’t say why you wanted to speak to me privately, though. What is it you don’t want Turing to hear?”
“Let me work my way up to that, if that’s okay. During our last visit, we talked mostly about who might have the capability to create software that could be responsible for the current attacks. But we didn’t spend a lot of time on how such a program would go about planning and executing those exploits. If we understand that process as well as possible, we should be better able to determine if a self-directed program is involved, and if so, how to stop it.”
“Is that what you think is happening?” Jerry asked.
“Well, that’s why we’re here – to try to find out if it’s a real possibility or not. If Turing Nine is the most sophisticated program of its kind in existence, then you should know better than anyone how likely that is to be the case. Does that make sense?”
“Why, yes, I think that’s correct.”
“Good. So, let’s say you told Turing Nine that global warming was the enemy,” Frank said, “and then you instructed it to take whatever action it thought necessary to ensure that the atmospheric level of carbon dioxide would never exceed a specific number of parts per million. Could it do that, or would that be beyond its current capabilities?”
Jerry grinned even more widely than usual. “Oh, that’s just too funny. As a matter of fact, I’ve been running trials in the simulation environment to test Turing Nine’s capacity in that exact area. And it’s performing splendidly! It seemed like an ideal test case to use, because there’s so much baseline data, lots of new information coming out all the time, and almost limitless greenhouse gas sources. Turing Nine has progressed enormously, and it’s getting more capable and creative by the day.”
Frank stared at him. “So, the answer would be yes?”
“Oh, most definitely. It’s performing very well in the simulation environment in response to instructions very much like that.”
“Excellent,” Frank said. “So, let’s continue to use Turing as an example then. To do so, I assume it would need to first determine which targets were most appropriate. Then, it would need to find vulnerabilities in all the different types of networks and systems it encountered at those targets. After it succeeded at that, it would have to exploit those flaws to penetrate the targets, analyze the architecture of their systems, and then develop and install any new code necessary to control them in whatever way it decided was necessary. And finally, at a particular time, exercise that control to bring off an attack. Could Turing Nine actually do all that?”
“Yes. Isn’t that amazing?”
“It certainly is. I’m not an expert on AI, but doesn’t that mean Turing Nine must be ten times as powerful as Turing Eight, if not more?”
“Ten times! My goodness. I calculate it’s just short of one hundred and thirteen times as powerful!”
“That’s extraordinary. Did that require some phenomenal new breakthroughs in artificial intelligence?”
“Actually, no. The important work can be found in Turing Seven and Eight. But they were too slow. To be useful, I needed to figure out a way to make a computer not only as creative as a human brain but just as fast.”
Frank was intrigued. “And how did you do that?”
“With RGA.”
“RGA?”
“Yes! RGA.” He smiled. “Oh! Excuse me. I tend to forget that if I give something a name, I need to tell other people about it, too. RGA stands for Recursive Guess Ahead. Here’s what it’s meant to do.” He paused and looked at Shannon. “Are you a computer engineer, too?”
“I’m afraid not. I’m an analyst. I use computers all the time, but I only have a layman’s knowledge of how they work.”
“Oh! Well. Let me see then. Okay. So, while computers have been able to answer very difficult questions for over seventy years, for a long time there was no ‘intelligence’ involved. And computers couldn’t do other things a human could. Think of a doctor giving his expert opinion, for example. Such an opinion is based on a lifetime of experience as well as knowledge of very extensive and constantly changing facts as science and medicine continue to advance.
“Emulating that type of decision making is extremely complex and would take enormous amounts of computing power if it were done the traditional way. Think of a computer playing chess, for example. Every time it takes a turn, it has lots of different moves to choose from. Each one of those moves in turn leads to many new possibilities, and so on. And don’t forget – that’s true for the other player as well. Whenever the opponent makes a move, an entirely new set of options becomes available to the computer. So, the computer needs to think many moves ahead to be competitive. That means the number of possible outcomes for a game of chess, and the ways to get there, are almost infinite and far beyond what even the most powerful computer in existence today could manage.
“One way we solve this problem is to use what came to be called ‘artificial intelligence’ to decrease the number of alternatives a program needs to consider. So, for example, we can give a chess-playing program access to a database which includes all the classic strategies expert players are likely to use. The computer can use this knowledge to recognize a classic opening and then access the countermoves the best players would use in response to that opening.
“The computer can use all this data to greatly reduce the number of choices that make sense for it to consider, as well as to decide which move is most likely at any time to lead to a victory. Do you follow me so far?”
“Yes, that’s clear. It sounds very much like what a human chess player does.”
“Perfect! Excellent! So now you understand why Alan Turing came up with the specific test he suggested should be used to determine when computers had become ‘intelligent.’ As you know, here’s how his test works: someone is asked to conduct two simultaneous keyboard conversations and is told that one of them will be with a human and one with a computer. If he or she can’t tell after five minutes which is which, the computer in the test is deemed to have achieved ‘intelligence.’ Why? Because Turing pointed out that there really wasn’t much point in arguing over what intelligence is. Instead, we should focus on what intelligence allows us – or a machine – to do.”
“Okay, I follow you,” Shannon said. “But I assume that just because a computer is ‘intelligent,’ it won’t necessarily have the ability to do any particular job, like hack a power company’s computer system.”
“Of course not. But you’re intelligent, and you couldn’t, either. Am I right?”
“No question about that!” Shannon said. “So, I guess we should have asked our original question in two parts: does Turing Nine have the capacity, and could it be programmed to use that capacity, to autonomously analyze, penetrate, and compromise all kinds of power infrastructure?”
“Exactly. You understand the matter precisely.” Obviously pleased, Jerry stopped and grinned. Then he frowned. “Was there anything else you wanted to talk about today?”
“Yes,” Frank jumped in. “You were going to tell us about Recursive Guess Ahead.”
“Oh yes! Right! So, Turing Eight had the capacity to do what you’re asking about, but it was so slow! That’s because simulating ‘general’ intelligence – the kind of capability a human being has – is vastly more complicated than enabling a program to perform one specific task. Like instructing a robotic vacuum cleaner where to go, or even playing world-class chess. Accomplishing the current attacks in a usefully short period of time requires something much closer to general intelligence.
“To create a program capable of that kind of activity, we need to figure out how to take shortcuts, like a chess-playing program does. Another approach is to teach computers how to be more ‘intuitive,’ like humans are when they use the entirety of their prior experience to help them deal with a situation.
“Better yet, we want to make computer programs able to learn on their own. I’m sure you know,” he said, turning to Shannon, “that computer engineers need to break down tasks into thousands of tiny steps and then write code to perform each of them. That wouldn’t work for a self-directed program. It would need to figure out how to meet new challenges, and work with new information, without having to wait for a programmer to update it. But learning takes an awful lot of computing resources, too, and time as well.”
Jerry stopped again, looking helpful and hopeful.