“Hasn’t artificial intelligence always been hot?”
“Oh, not at all, partly because it’s had a history of over-promising and under-delivering. Back in the 1960s everybody expected computers to be thinking like people in no time at all. As you already know, that didn’t happen. Eventually everybody realized how big a challenge getting computers to think like people was, and the enthusiasm wore off. Then somebody came up with something they called ‘expert systems,’ and suddenly, AI was back in fashion. After the progress from that approach petered out, nobody who stayed a true believer was taken seriously. For a long time, it was a career killer to promote the potential of AI. Most people gave up and moved on.
“Anyway, I haven’t heard anything about Jerry in years. I wonder if it could be the same guy?” Frank said, Googling Jerry’s name on his phone.
“Huh! How about that? There’s almost nothing about him online at all; just some old journal papers from over twenty-five years ago. And not a single picture. That’s really odd. Anyway, we’ll find out when we meet him.”
On their way to NSA headquarters the next day, Shannon returned to the topic.
“So, you said this guy Jerry was brilliant but strange. How so?”
“Well, here’s a good example. He was maddeningly literal – just like a computer. Jerry was incapable of recognizing a metaphor no matter how obvious it was, even in context. And if you asked him something, he’d answer the question exactly as you phrased it, even if it didn’t make any sense hearing it that way. He wasn’t jerking you around; it’s just the way his mind worked.”
“Such as?”
“Well, let’s see. If you said, ‘Jerry, am I wrong to think of North Korea as an existential threat to America?’ he’d answer, ‘No, you’re not wrong to think that.’ You’d think, fine, we’re in agreement. But what he really meant was ‘No, you’re not wrong to think whatever you want to, even if it’s something as stupid as North Korea posing an existential threat to America.’ After that, the conversation would get more and more bizarre until you finally realized you and he were on totally different wavelengths.”
“That must have been fun.”
“It would have been fun to strangle him sometimes.”
“What did he look like?”
“He looked kind of weird, too. For starters, he had this big, round head, with a broad forehead and pasty, pink skin.” Frank paused, thinking back, “and thinning blond hair. He combed it straight back. Oh – and how about this – he wore these awful 1970s polyester suits – he had to be the only undergraduate on the entire campus in a suit. He must have bought them at thrift stores, too, because thank goodness, you couldn’t buy polyester suits new anymore. No tie, though. Just open-neck shirts with big collars that flopped around over the lapels of the suit jacket. He looked like something out of the movie Saturday Night Fever, only in terrible colors, like pale purple. But really, it was mostly the look on his face.”
“What kind of look?”
“I’m trying to get that picture back in my mind – it was kind of wild-eyed, with a big, manic grin that usually had nothing to do with the conversation. But he wasn’t scary. As a matter of fact, other than the literal bit and the expression, he was more or less okay to talk to. Childlike, actually, now that I think of it. But clearly he was living in his own unique version of reality.”
Frank looked back at his phone. “It looks like the last Google hit is an interview Jerry gave after his appointment to an endowed chair.” Frank opened the link and skimmed the article. “Oh my,” he said. “Here’s something I never knew.”
“What’s that?”
“I’ll read it out loud,” Frank said.
Professor Steiner was the oldest of four children and enjoyed a happy early childhood. His fondest memories are of playing with his brothers and sisters. But all that changed abruptly when he turned eleven, and his father deserted the family. His mother, already suffering from mental health issues, was devastated and became unable to care for her family. Mrs. Steiner was institutionalized, and the children were distributed far and wide among multiple foster homes. When he grew up, Professor Steiner was able to locate his mother, who is still in custodial care. But he was never reunited with his siblings.
“That’s terrible,” Frank said, shutting his browser. “I feel bad I never knew any of that.”
At the NSA, they were led through the usual underground labyrinth of hallways and laboratories to a destination that turned out to be an open area filled with programmers hunched over workstations. On the other side of the room was a single closed office door, which their escort knocked on before leaving.
They waited, but no one opened it. “Maybe we should knock harder?” Shannon said.
“Can’t hurt to try,” Frank said, knocking sharply.
“Just walk right in,” one of the programmers called their way. “Jerry’s in his office. But he’ll never hear you knocking.”
“Thanks,” Frank said. He opened the door, and Shannon stifled an urge to laugh: there, sitting behind a computer, was an exact, older version of the person Frank had described. Only now he was bald and wearing a large set of headphones.
They stepped in and waited to be noticed, edging closer and closer to the desk. When Steiner finally looked up, he took off his headphones and squinted at them, clearly puzzled.
“Hi, Dr. Steiner,” Shannon said. “We were told you’d be expecting us.”
A broad smile that displayed all of his teeth divided his face into two pieces. “Am I? Who are you?”
“We lined up an appointment with you through Jim Barker. I’m Shannon Doyle. And this is Frank Adversego. You might remember him from MIT. You were both undergraduates there at the same time.”
Jerry’s face went briefly back into squint mode before reverting to a grin. His head started nodding in bobblehead fashion.
“Frank! Frank Adversego! Yes, I remember.” He stopped speaking but kept grinning and nodding.
“Uh, do you mind if we sit down?” Frank said.
“Of course not! What are you waiting for? If you have an appointment, you should certainly sit down. Sit down!” He followed that with an unsettling giggle. Frank had forgotten that habit.
“So, I guess we should explain why we’re here,” Frank said, sitting down. Jerry began nodding again, as if this was an astonishingly original thought.
“Shannon and I are part of the team that’s investigating the big waves of cyberattacks that have been occurring.”
“Have they?”
“Well, yes. All over the world – for about a month now.”
“Really? I had no idea. I don’t pay much attention to the news. Any attention, actually. I don’t have a TV or radio here.”
“Or at home, I guess,” Shannon said, trying to fill the silence that followed.
“Excuse me?”
“I meant, it sounds like you don’t have a television or radio at home, either.”
“Yes. That’s exactly what I said.” He grinned back, expectantly, but neither Frank nor Shannon could think of what to say next.
“Oh!” Steiner said at last. “Oh! You’re assuming home is somewhere else. But as far as I’m concerned, home is where the lab is. That was part of the arrangement I made with the NSA when I left MIT. The door behind me leads to my living quarters. At first, I had an apartment in town, but I never seemed to use it, so I let it go.”
Frank cleared his throat. “Anyway. So, as I said, for weeks now, someone has been launching waves of cyberattacks on essential energy infrastructure. Each of the exploits has been extremely sophisticated. Interestingly, each wave has been carefully calculated to have very precise impacts commensurate with the triggers of the attacks.”
“Fascinating! And who do you think is behind them?”
“Well, that’s the problem. The attacker seems far too sophisticated for anyone other than the U.S., Russian, or Chinese governments to pull off. But more of the targets have been in those countries – and India and Japan – than anywhere else. Each one has suffered billions of dollars of damage. So, it doesn’t seem likely that any of their governments could be actively involved.”
“How very interesting. You’re saying the only countries that could possibly be the attackers also can’t be the attackers.”
“Exactly. So, it occurred to us that we, or the Russians, might have developed an artificial intelligence program capable of such exploits, and someone might have somehow stolen it.”
“Stolen it? From us?” Jerry looked alarmed.
“Or the Russians or the Chinese,” Frank responded hurriedly. “Anyway, we thought it might make it easier for us to figure out how to stop the attacks if we knew how someone would go about creating a program capable of accomplishing the same results. Going a step further, we wanted to get your opinion on whether it would be possible to write a program that could stage such attacks while operating autonomously. In a nutshell, that’s why we wanted to speak to you.”
“Gee, I’m afraid I can’t help you.”
“Really?” Frank said, taken aback. “Why not? We’ve both got top security clearances. You can check with the credentials office.”
“Oh, no, that’s not the problem at all.”
“Then what is?”
“I don’t know enough about the attacks you’re asking about.”
Well, Jerry certainly hadn’t changed. Frank told himself to be patient and continued. “That’s fair enough. Would you be able to if you knew more about the attacks?”
“Oh, certainly, certainly. Would you like me to learn more?”
“Yes – we’d appreciate that a lot. Why don’t I send you a link to the project team’s most recent status report right now? The first section gives a concise summary of all the attacks to date. May I do that?”
“Certainly.”
Frank sent him a link from his phone via the NSA’s internal Wi-Fi. Jerry began reading avidly on his computer screen, nodding vigorously from time to time. As if forgetting Frank and Shannon existed, he put his headphones back on. Shannon frowned and cocked her head to one side, trying to decipher the muffled, just-audible tones that reached them. “Music? Or maybe children?” Shannon whispered. “As good a guess as any,” Frank whispered back.
When Jerry finished, he turned away from the screen and involuntarily jumped in his chair when he saw them across his desk.
“Oh! Yes! Of course,” he said, pulling his headphones off. Then he frowned.
“So, what do you think?” Frank asked.
Jerry looked slightly embarrassed. “About what I just read?” he asked, helpfully.
“Yes – exactly.”
Jerry frowned again. “I think these attacks sound very serious.”
Frank took a deep breath and backtracked. “What we’re hoping you can tell us is who you think might have an artificially intelligent program able, once set in motion, to autonomously identify hundreds, or even thousands, of targets; find an exploitable vulnerability at each one; design a way to attack the target in the way you read about in the incident summary; and finally, launch those attacks almost instantaneously as soon as it detected a triggering event via the Internet. Is any other government or terrorist or activist group you know of currently capable of that?”
“No.”
Frank and Shannon waited for him to elaborate, but Jerry just sat there, grinning.
“Did you want me to say yes?”
“No – not if the answer is no.” Well, at least Jerry hadn’t said “forty-two.” Frank phrased his next question more carefully.
“Does the NSA have anything capable of launching attacks like these?”
“No. The closest capability it has is Turing Eight.”
“What’s that?”
Jerry’s face positively glowed. “The Turing series of AI programs is my favorite project. I’ve been working on it for years, and the version my team is working on now is the eighth. You see, that was the other part of my arrangement with the NSA. I can work on any aspect of artificial intelligence I want to, so long as I also lead the work on whatever AI projects the NSA and CYBERCOM want to pursue.”
“That’s quite an arrangement,” Frank said.
“Oh, my goodness, it’s been absolutely essential for my work – it’s what’s allowed me, unlike other researchers, to keep pushing forward full speed with AI for the last twenty-five years. That’s why I left MIT. I was always having to scramble to find grant funding for my lab. After the ‘AI Winter’ in the 1980s, when all the funding for AI work dried up, I decided that academia just wasn’t for me. Luckily, in my case the NSA was willing to take a long-term view. I’ve had a guaranteed, inflation-adjusted, minimum budget here for twenty-five years now. If they want me to take on any new work, they fund that as well. It’s been bliss.” He giggled. “What’s it like out there, by the way?”
“You mean in AI research?”
“Oh, if you wish. I meant outside generally.” He started humming to himself. “You see, I don’t get around much anymore.”
“When were you last, uh, outside?” Shannon asked.
“Oh, my. Well, let me see. What month is it?”
“Month?”
“Why, yes – and year, too, I guess.”
They told him.
“Oh – well, let me see, I guess it would be almost two years then. That’s when my mother died. I think. It was awful. I had to be away from the lab for almost three entire days.”
“I’m very sorry to hear that, Jerry,” Frank said. “I’m sure that must have been very difficult for you.” He paused and then continued. “But to get back to Turing Eight, can you tell us a bit about that program, and what it can’t do that the attackers are doing?”
Jerry’s eyes lit up. “Oh, yes indeed. It truly is my favorite project. The goal of the Turing series has always been to push the potential for AI to its limits. In deference to the NSA, its specific purpose is to apply AI principles to new types of software weapons so they can infiltrate any type or category of system or software and then wait to be triggered, either by a signal, or on the occurrence of any specific or generic type of event found in the program’s database. Once the action phase is triggered, the program can disable those targets. Ultimately, the goal is for Turing to be able to disable an enemy’s entire offensive capability. And it would do all this autonomously.
“It could also be programmed to act as a sort of doomsday machine. So, for example, if war was looming, we could inform the enemy we had already deployed such a program, meaning that even if they somehow succeeded in taking out our nuclear capabilities, we could still destroy all their infrastructure. Sort of like mutually assured destruction, back in the old days.”
“How does it do all that?”
“It uses the NSA’s huge library of zero-day exploits, as well as vulnerabilities it’s programmed to seek and archive on its own. It can install trapdoors and design assaults on whatever types of essential cyberinfrastructure of whatever country, or countries, you wish to designate. You can also program it to attack whatever specific category of target you wish. For example, if you wanted, you could instruct it to disable just financial, or transportation, or – ”
“Energy infrastructure?” Shannon asked.
“Oh, of course, energy infrastructure.”
“But how could you trust a program like that?” Frank asked. “What would prevent it from making mistakes, or attacking the wrong targets?”
“Oh, it has basic ethical controls built into it. Isaac Asimov’s Three Laws of Robotics are still useful starting points, although they don’t go far enough. If you don’t remember them exactly, they go like this:
A robot may not injure a human being or, through inaction, allow a human being to come to harm.
A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
“But since we’re talking about war,” Jerry continued, “there are overrides to the ethical rules as well. So, for example, at level one – say, during a non-shooting war – Asimov’s laws would be absolute. But you can also program Turing so that as the danger to the owner – in this case, the U.S. – increases, Turing can automatically up-shift to a level where those laws are modified. In the case of a threatened preemptive nuclear strike, it would override the First Law with respect to the enemy’s population, because we’re now on a war footing. And in the case of the doomsday scenario, the three laws would reverse entirely, because the program couldn’t trust any inputs from anywhere, and its mission can’t be completed if it doesn’t survive. I also included Asimov’s fourth law.”
“There was a fourth law?” Frank said. “I never knew that.”
“Most people don’t. He added it much later, in 1985, in a novel called Robots and Empire. Since he wanted it to trump all the other laws, he thought it should be the first law. But people were already used to the numbering of the original three. So, he assigned it the numeral zero and called it the ‘Zeroth Law.’ It goes like this:
A robot may not harm humanity, or, by inaction, allow humanity to come to harm.
“That seems like a very wholesome law to me, so I included that one, too.”
“So those are the basic ethical rules,” Shannon said. “What more needs to be added?”