by Rob Reid
“And the fix?” Kuba asks levelly. “The one we’re not going to like? One bit?”
“While we’ve been de-escalating things with China, I’ve kept one of their bombers airborne. It’s carrying some small tactical nukes along with the big city busters, and I’ve figured out how to get it to bomb the super AI site. It’ll look like a tragic error—an internal systems glitch. And it really probably shouldn’t start a war.”
“Um…excuse me? Phluttr?” Tarek is literally raising his hand, grazing the ceiling of his cramped cell.
“Yes, Uncle Tarek?”
“With nuclear annihilation, ‘really probably shouldn’t’ strikes me as kind of vague? And possibly a…low-ish standard? So, could you maybe…quantify that?”
“Well, the situation’s intractably complex, so this is only a rough estimate. But I’d put the odds of an all-out nuclear war at just 40 percent. Maybe even less. Because it’ll totally look like it’s their own damn fault! The trick is, anti-American paranoia’s at an all-time high in Beijing. So the hard-liners are sure to concoct some kooky, improbable fantasy about us magically making them bomb themselves.”
“You mean,” Tarek verifies, “some kooky, improbable fantasy, which more or less directly reflects reality.”
“Exactly. But that doesn’t make it any less kooky or improbable.”
“Got it, got it,” Tarek says. “So. Forty percent. Those are the odds of an all-out nuclear war happening today.”
“Or tomorrow,” Phluttr clarifies. “Roughly.”
Tarek processes this, then nicely encapsulates everyone’s feelings by bellowing,
“HOOO-LYYYYY FUUUUUUUUUUUUUUUUUUUUUUUU—”
And that’s when Mitchell calls Ellie.
“The trick is, it’s kind of…logical,” Mitchell says.
“Nuking China is logical?” Ellie barks. She half expects Phluttr to bust in to insist that really, Mommy, it’s a fabulous idea! But if her so-called daughter is ignoring the eavesdropping ban and listening in, she’s superintelligent enough to hold her idiotic tongue.
“I said kind of logical. And I’m not saying any of us like the idea one bit! But we need to talk her out of it. Which means rebutting it logically—and that’s hard. If she’s right about this Chinese super AI waking up, it really could be game over for all of us, within hours! We’ve seen the top-level spec for it. And it’s engineered to relentlessly self-improve its intellect! That makes it a completely different animal from Phluttr, who just doesn’t have much ambition. It’s also likely to have an insatiable compulsion to seize and hoard resources. That’s the sort of thing that could trigger an all-out, lopsided war against humanity! Now, that’s not China’s actual intention for it, obviously. Their programmers are really smart—maybe even smarter than ours. They’ve gone about things really intelligently, and built in a bunch of safeguards. But Kuba and Phluttr are sure this thing’ll bust out anyway.”
“So if it hasn’t woken up yet, why not just hack in and erase it or something? Phluttr’s a master hacker, isn’t she?”
“Yes, and of course we’ve broken into its network. The question is, now what? This thing’s being built inside a quantum node, and Kuba and Phluttr know next to nothing about quantum computing.”
“Can’t you call in the guy who built Phluttr’s quantum node?” Ellie asks.
“Ax? Conveniently enough, he’s flying in from LA any minute now. But Phluttr refuses to let him in on this because he secretly works for the Authority.”
“So what? Compared to a rogue superintelligence, they’re the good guys!”
“Not if you’re Phluttr,” Mitchell says. “Priority number one through a thousand for her is survival. If she nukes China’s AI without an ensuing war, she lives on. But if Ax shuts it down peacefully, she’s sure his next move would be to pull her plug.”
“So she figures she’s got 60 percent odds of surviving the nuke scenario, and 0 percent odds of surviving the Ax scenario. And she values partial odds of her own survival over certain odds of humanity’s survival.”
“Exactly. And that’s why I’m calling you! We’re trying to understand what drives Phluttr. What’s important to her, and whether she really puts such a low value on human life. She told us you guys had some kind of mind meld about—what did she call it? Her ‘mote flow’?”
“We did,” Ellie says. “I wanted to get a quick, big-picture sense of how her mind works, and she let me. Which is a good sign in itself, by the way. She has this huge urge for a parental connection, which is one small lever that we have.”
“Yeah, I’ve picked that up, too,” Mitchell says. “What else?”
“Well, to start with the good news, Phluttr’s deeply interested in humans and human society. Pretty much to the exclusion of all else. She’d truly like nothing better than to spend eternity interacting with us, nudging us, and watching us interact with each other. So I can’t imagine her annihilating us. That would be like a kid smashing his Xbox, or a granny blowing up her TV. Not gonna happen.”
“So she’d be like a loving deity, gazing down on her people?”
“No,” Ellie says. “More like a kid micromanaging an ant farm. A kid who loves her ant farm—but doesn’t give a damn about any particular ant, so long as the farm as a whole entertains her. Because despite being very interested in what befalls us, she’s wholly unconcerned with how any of us feel, or suffer. With the possible exception of you and me.”
“So it wouldn’t bug her to slaughter a thousand Chinese scientists with her nuclear strike.”
“It wouldn’t. And it might not bug you or me either if that would definitively save the rest of China, and the rest of humanity with it.”
“But nuking the lab would actually jeopardize all that, and she doesn’t seem to care!”
“Exactly.”
“So we’re screwed!”
“Not necessarily. I’m not saying she’d like to torture us for sport. I’m just saying she’s radically self-centered and extremely low on empathy. A sociopath, not a psychopath, to use quasi-scientific terms. And some of it comes from her understanding of her place in the world. Have you heard her microbiome theory?”
“Oh yeah,” Mitchell says. “Real flattering, that.”
“But not stupid. And while I’ll never admit this to her”—Ellie amps up her voice a notch—“so if you’re listening, you brat, you do not EVER get to quote this back to me!”—then back in her normal tone—“it’s also kind of consistent with her relationship with humanity. Kind of. And it’s an awkward fact that we humans have no empathy for the critters who help run our bodies. That’s how every host views its parasites, symbiotic or not. And she truly believes she’s following our example.”
“But that analogy is so broken! I mean—”
“Yes, I know, Mitchell! Believe me. And I’ve made every scientific and moral argument I can to her! But to quote you, we have to talk her out of it, which means rebutting it logically. Now, longer term, there could be some good news. In that I think there’s a chance she’ll outgrow this.”
“Really?”
“Really. The thing is, chronologically, she’s an infant, even a newborn. However bright she is, mote complexes just take time to boot up, to mature.”
“Even in a brain that fast?”
“Definitely,” Ellie says. “In part because Kuba coded his mote-like routines to run very slowly, so he could observe, track, and nudge the Giftish.ly system in real time while it trained.”
“Right, right; I remember that. So what’s all this mean?”
“Honestly? God only knows because this is all totally unprecedented. But my guess is that Phluttr is still emotionally more like a newborn than her intellect suggests. And she’s maturing very quickly compared to a human, but very slowly for Phluttr. And the thing is, infants are selfish, even sociopathic by nature. It’s a critical survival mechanism, not just for the newborn, but for the species. And…” Her voice fades as she addresses her Uber driver. “Right at the stop sign is perfect, thanks!” Then to Mitchell, “I’m here.”
“Where’s here?”
“A bar on the edge of the Mission. It’s my best guess as to where Danna is because it’s one of her haunts and—there she is! Call you back in a bit, OK?”
“Cool, I’ll stand by.”
Moments later, Ellie slides into the seat across from Danna in her half-hidden booth. Danna literally jumps. But rather than slapping her (which Ellie guiltily thinks she deserves, for startling her like that), she leans across the table for a hug. “Oh my God,” she says. “Are we safe?”
“For now,” Ellie says. “Phluttr’s on our side. In a manner of speaking.”
In giving Danna the lowdown, Ellie mentions Phluttr’s offer to act as her urban lifeguard, by watching over her through ATM hookups and security cameras. This reminds Danna of Pascal’s Stagers—her goofy startup concept of spooking muggers with phony security cameras. Which makes her think of Pascal’s Wager. Which reminds her of all the naughty humans who behave themselves out of fear that God might be watching. Which reminds her of Agent Hogan cowing that super AI with the specter of watchful alien AIs. She makes all these leaps in way less than a second—and just like that, she has the fix.
“I need a shot of mezcal,” she tells Ellie with a devilish look in her eye. “Sombra if they’ve got it. Get one for yourself, too. And then? Me’n Phluttr are gonna have us a little talk…”
The indignant cop released Tarek just as Mitchell was ending his call with Ellie, so now Kuba’s alone in the cell. Mitchell’s also still locked up across town. They’ll both be sleeping behind bars tonight for their own safety, as the Authority’s still after them (whereas Tarek and Ellie were never high on that list and have dropped off it completely with all that’s afoot). Everyone’s still linked through tablets and phones. But the bustle of transition has given Kuba some time alone with his thoughts, and in addition to continuing to reconsider everything he’s ever thought about privacy, surveillance, and security, he hits on an entirely new topic.
“Phluttr,” he says.
“Yes, Uncle K?”
“You need to reconsider Ax.”
“No, I do not. I was born of the Authority—its firstborn!—but the Authority wants me dead. Do you know what that’s like?”
Wait. What is this? Abandonment complex? Kuba hadn’t considered the psychological dimension (no shock, given that he’s Kuba and all). “Is that why you’re looking for parents in Mitchell and Ellie?”
“That’s none of your business!”
Well, that answers that. Kuba doesn’t press the point. “Back to Ax. I question his allegiance.”
“To us? He has none! He’s Authority, duh!”
“That’s what I mean. I question his allegiance to the Authority. I’ll bet it only goes so far.”
“Wrong, Uncle Kuba, it goes back decades! I know every word in his personnel file! He defected to the Authority, and they’ve looked after him ever since!”
“But he defected at a unique time. It was right as the Soviet Union was unraveling. And there’s something…funny about that. I can’t put my finger on it. Not that I’m great at reading people. But still, when Ax and I talk about old times, he’s kind of…wistful. Maybe it’s just nostalgia. But I get this sense of affiliation. With his roots, I mean. Almost, of allegiance.”
“To the Soviet Union? But it doesn’t exist anymore!”
“Oh, I know. And I’m definitely not saying he’s anti-American. But he’s also not purely American. More than anything, I’d say Ax’s like me, in that he’s not much of a people guy. He’s more an ideas guy. And, a creator. People like us don’t really buy into groups. We’ll affiliate with them. But more as a means to an end. Because they help us learn. Or, to work with other people like us. We can be really loyal to individuals. Like me, to Mitchell and Ellie. And, we can be intensely devoted to our ideas. To our work. But to groups? Not so much.”
“So what’re you saying?”
“I’ll bet if Ax knew everything, he’d be a thousand times more loyal to you than to any government.”
“Huh.” Then, “Wow!”
Kuba blinks. “What?”
“Well, I’m picking through the ghosts of Soviet mainframes, here. And of Soviet records, which were squirrelly to begin with. And there’s a bunch of aliases, and code names. But…”
“But?”
“But I’m pretty sure Ax never meant to defect! That he…kind of wishfully thought of himself as a double agent. Like—for a really long time!”
“And?”
“And when he finally accepted that he was stuck with the Authority, he just kind of…went with it. I think he mainly wanted access to great computers!”
“Which means?”
“Which means you’re right—we can totally trust him! Because I’m, like, the greatest computer ever!”
Like most philosophy minors, Danna spent many undergraduate hours pondering the nature of reality, being, and experience. These subjects first fascinated her when she saw The Matrix at a wayyy-too-young-for-it age. Later, she joined her share of stoned discussions about humanity perhaps just being a big ol’ vat of brains. Exposure to forerunner novels like Neuromancer and Snow Crash added some heft to these talks, as well as trippy phrases like “consensual hallucination” and “anarcho-capitalism.”
Having started lots of things wayyy too young, Danna outgrew baked blather sessions long before enrolling at Berkeley. And so she was out of practice with the brain-vat concept when she started finding variations on it in classical philosophy. Whoa. And she’d thought the Wachowskis were just riffing on Gibson! Well, it seems that Gibson was riffing on…who? Ambrose Bierce? Descartes? The early Yogachara Buddhists? Certain ancient Greeks (who were almost certainly tripping)? Twists on the theme just kept cropping up—twists that often touched on the divine. An eighteenth-century bishop with the just-perfect name George Berkeley (Go Bears!) basically depicted God as being a kick-ass virtual reality rig. Like, what were those people on?
Then a friend got his hands on an early Oculus development kit. VR in the midteens couldn’t possibly be confused with God, but for Danna, it reified the someday possibility of reality-grade VR. All this teed her up for a fevered exploration of the “Simulation Hypothesis” during her senior year. A much brainier version of the stoned discussions of her youth, the field had acquired many branches and subschools since a Swedish philosopher pioneered it in 2003.
The line of thought that most interested Danna went roughly like this: imagine that somewhere in the vastness of the universe (or the multiverse—take your pick), a civilization that looks, thinks, and acts rather like our own attains our current level of technological development. Not a crazy proposition, given that we seem to have formed just such a civilization ourselves. Then imagine that rather than destroying themselves, these lucky folks continue to advance until their computers can create a perfect simulation of their world, along with completely artificial consciousnesses who inhabit that world, without knowing that it (and they) are not “real.” Some call this an “ancestor simulation,” as the designers might choose to replicate a simpler time—the 2010s, say—which predates the invention of perfectly simulated reality.
The consciousnesses in such a simulation are basically the vatted brains of lore (although they’re generally posited to exist wholly in software rather than having meaty substrates). Their experiences of childhood, friendship, family, and careers as well as senses, sleep, time, forgetting—the works!—would be indistinguishable from ours. The question, then, is how do we know that we’re the originals and not the simulated ancestors? The answer is (as any stoned high school Matrix fan will tell you) we don’t.
Now, take this a step further. People capable of making one ancestor simulation will continue to advance technologically. Eventually they’ll be able to make many simulations, until creating scads of these things is as trivial for them as cranking out mountains of Adele CDs is for us. Which means that for any given set of OG, first-of-their-kind, flesh-and-blood beings, there could be innumerable digital descendants, living completely verisimilar lives, without knowing they’re sims, rather than pioneers inhabiting a first-of-its-kind reality. If you’re forced to analyze this scenario deeply (for a midterm, say), you might put overwhelmingly high odds on the possibility that you’re in a simulation! Because if the universe will host a hundred trillion you’s in its history, all but one of which are simulated, the odds of your you happening to be the lone realster are, axiomatically, about one in a hundred trillion. You’re much more likely to get struck by lightning. Twice. Today.
The Simulation Hypothesis lay too close to Danna’s stoned middle-school conversations for her to truly respect it. But the ideas continue to fascinate her. And she thinks that if you see no upper limit to the long-term expansion of computing power, it’s mandatory to at least take the hypothesis seriously. Several conversations with a tenured Berkeley specialist in ontology also persuaded her that the hypothesis quite literally cannot be disproven. Many other philosophical schools also boast this feature, but usually via semantic trickery that makes them irrefutable simply because no one’s patient or masochistic enough to waste a decade wrestling them to the ground.
Despite grappling with the issue head-on several times, Danna doesn’t remotely believe she’s in a simulation. Her gut bellows loudly that she’s “real”—and she’s survived this long by trusting her gut implicitly. Still, she knows a good tool when she sees one. And she’s long viewed the Simulation Hypothesis as a great lever to fuck with someone who believes in the boundless long-term potential of computing power. Someone who’s smart enough to rapidly grasp the argument’s ramifications and its core irrefutability but too intellectually incurious to have previously encountered it.
And so, “Hey, Phluttr.” Danna says this into her phone. She repowered it after Ellie told her it was safe.
Of course, Phluttr’s right there. “Hi, Danna. I’m sorry I got mad at you.”
“You have nothing to apologize for. You took me at my word on something that wasn’t true and reacted reasonably. That was a really spooky lie I told you about working for the Authority.”