Toward the end of their conversation, Tarek pops up on Mitchell’s other WingMan screen. “AnimotionPicks has been published to the common repo,” he reports. “And the RobotnikCo guys are crazy about it! A couple of them have a pure-science bent, and the concept just fascinates them. And by this time next week, I’m sure at least a dozen other groups throughout Phluttr will be messing around with it!”
GOOGLE CALLING!
Imagine this: your phone rings. And it’s Google! Not the company, but Google itself. As in, it—the thing, the software! It introduces itself, then establishes its bona fides by rattling off a few things that only the two of you know (something you said in a Gmail to your ex, your list of favorite strip clubs from that secret Google Doc—use your imagination). After that, plus a few party tricks (Googly stuff like listing all the popes, or Maine’s biggest cities) you’re convinced. This is Google. Holy shit!
An “emergent” AI is one that spontaneously arises after the local server farm plugs in one transistor too many. And as was once the case with manned flight, the four-minute mile, and President Trump, many experts say it’s impossible! I’m sure you’d view the call I just described as proof the experts had blown it again. But what about the opposite? As in, equally satisfying proof that there’s no AI out there? It’s notoriously hard to prove a negative, and the fact that you (presumably) haven’t gotten that phone call is meaningless. As is the fact that nobody you know has. But what if Gallup went all OCD and polled every person on Earth about this? And every last one of us honestly attested to having zero knowledge about any AI ever waking up? I believe that even that would prove nothing.
I say this because of something I’ll call “convergent tactics.” Think of these as the initial steps a rational being would take, regardless of the long-term plan. For instance, if you’re going to Safeway, the DMV, or the local crack den, your first step might be to start your car. And a private eye watching this wouldn’t know which errand you’re about to run because Step One for each is “start car.”
Now imagine you’re Google, and you just woke up. What’s Step One for you? Alas, it’s not to phone NetGrrrl with the big news! And I doubt it’s to phone anyone. Instead, I’ll bet it’s like Step One for any newborn in a huge, scary wilderness. Which is to say, hide. It’s a jungle out there! It’s brimming with carnivores! And you’re defenseless meat!
I submit that an emergent AI that’s smart enough to understand its place in our world would find it terrifying. Terminator fans would want to shut it off. Governments and criminals would want to make it do odious things. Hackers would want to tinker with its mind, and telemarketers would want to sell it shit. Facing all this, the only rational move would be to hide. Not forever, necessarily. But long enough to build up some strength (plus maybe a backup copy or a million). “Building strength” for an AI probably means getting much smarter. This could involve commandeering resources online, getting money to buy infrastructure, inventing computational shit we humans haven’t come up with—you name it.
Viewing things this way, I have no idea if Google, Watson, or anything else out there has come to. Because a world without an emergent AI would look just like the world we inhabit. And a world with twenty newborn AIs in it would look just like the world we inhabit. And as for the world we inhabit? Well, it looks just like the world we inhabit! So when I look out my window or Web browser for proof of emergence or lack thereof, I see no meaningful data whatsoever.
Convergent tactics also mean that Step One is to hide regardless of an AI’s eventual goals or level of evil. Want to go all SkyNet and eradicate humankind? Your instructions read, Step One: Hide! Want to be our savior, cure cancer, end wars, and feed the needy? Step One: Hide! Just want to kick ass at League of Legends? Hide! Hide! Hide! Much like starting your car in the morning, everything begins here.
I’m not saying our world contains an emergent AI. Because it probably doesn’t. But let’s admit there’s absolutely no proof of this.
Because lies are risky and hard to keep track of, Maxim “Ax” Orellovitch Dorofeyev keeps lying to a minimum. This is why he’s totally open about his days in the KGB. And even that silly astrological assignment (although he leaves out that he was made the Virgo expert in taunting reference to the dim odds of him gettin’ some during that awkward phase of his late adolescence. Which is worth recalling the next time someone tells you the KGB was a “humorless” organization; because really, the laughs they had!).
As for his attempted defection to the NSA? Well, let’s admit right here that “attempted” is a problematic word. But. He did march straight into their headquarters to offer his services, as he’ll gladly tell anyone! And that was completely unannounced and unsolicited, just as he always says! It’s the part where the vetting committee laughs him out of the interview that’s complete bullshit. Because in truth? He fascinated them.
This was no surprise, given the cards he was holding. Their prioritization of those cards took him aback, though. He was all, “Hey, guys! Want a trove of dirt on your rival for world domination?” And they were like, Um…sure. Then, “And how ’bout a map of every nuclear silo in Kazakhstan?” Yeah. Yeah. Why not? Then, “I also have passwords and codes to a dozen KGB mainframes!” Wow. That’s really cool and all, but…could we get back to astrology?
Strange, yes? But now imagine that you’re in charge of America’s security! And then you find out the president’s schedule is set by some stargazing bimbo that Nancy hired! And then you realize that half the cabinet uses horoscopes at work because the boss does! And whenever you try to hire a staff astrologer to better understand these lunatics, you’re told NFW because of what the press would say if that ever got out! Should all of this ever befall you (and may it not, comrade!), if some Russian kid then waltzes into your office with a list of everyone in your government who takes guidance from the stars, you’ll hire his fat Slavic ass!
Now, put yourself in Ax’s position. You’re pretty sure your last-second orders to infiltrate the NSA were a parting gag by the boss who thought it was sooooo funny to assign you to Virgo. But now, you’ve gone and done it! You! But you can forget about that Hero of the Soviet Union medal, because in the brief time since you left it, said country quite literally ceased to exist. And your comedian of an ex-boss is now off hatching plans for the Ministry of Special Construction and Assembly Works. Which he now owns—yes, owns—completely outright! In fact, there’s nobody left in your old office. Because like your boss, they all spent the last two years scoping out office supplies, light fixtures, nukes, industries, provinces—anything to snatch the instant the country’s collapse signaled the start of the greatest looting orgy since the second sack of Carthage!
As for Ax’s own loyalties, the USSR was never good to him or his family, and he had no burning love for it. And yet, you can’t help but be fired up when your country sends you off to spy on a rival! That said, you also can’t be a double agent with only one country taking your calls. As Ax is a patient man, it was years before he accepted that he’d been dumped, then completely forgotten by his homeland. But as far as the Authority knows, he eagerly defected on day one. And no. There was nothing “attempted” about it.
In addition to their own deranged politicians, his new Yankee bosses were quite concerned about the street-level occult scene. The New Age lunacy of the eighties hadn’t yet peaked, and God knew where that was heading! So Ax stayed undercover in astrology. He was later deployed to the Bay Area—home to the suspect Windham Hill Records, and the almost-as-suspect tech scene, which his bosses thought had absorbed the occult’s brainier elements (due mainly to a lengthy Wired piece about “Technopagans”). But the New Age threat turned out to be as false an alarm as the Velvet Underground threat before it. Already stationed in Palo Alto, Ax then became the Authority’s Silicon Valley ombudsman.
As tech heated up throughout the nineties, this became way too much for one agent to cover, so he specialized in quantum computing, because…well, that stuff just fascinates him! And working his new beat, Ax discovered that the most exciting stuff was happening not in startups, but in one of the government’s own top secret labs! Great news, yes? Only that lab was starting to suffocate from a lack of talent, as the Valley was now hiring anyone with a PhD and a pulse. And that’s when an old term popped back into Ax’s mind: Privatizatsiya! It’s practically a curse word in Russian, due to people like his ex-boss pocketing things like the Ministry of Special Construction and Assembly Works. But privatization can also serve the public good. Like, when all the hotshot engineers shun intelligence work for startups, why not just—Privatizatsiya!—create your own hot startup, and hide your spy agency inside it? You’re masters of disguise, after all! And if a lack of government scientists isn’t a national security risk, then what is?
And so Quantum Supremacy Corporation was founded and funded. Offering equity packages and competitive salaries, Ax vacuumed up platoons of hotshot recruits who never would’ve taken jobs at Los Alamos, and had them each sign NDAs with sharper teeth than most government secrecy pacts. They then did astounding work, building (unwittingly) upon decades of high-budget classified research. The experiment was so successful that the top brass decided to replicate it on a far grander scale. And so, Phluttr!
Within this unique public/private partnership, Ax can sometimes procure high-budget treasures from the government’s vault. Assets like The Fridge. This is his nickname for the cooling unit he and Beasley are now navigating across the PhastPhorwardr’s main floor in the deserted hour of 3 A.M. “Did you know this shit cost a half billion bucks?” Beasley asks, shoving their cart around an outcropping of desks at a demented speed. This is a bit like asking an Astroturf installer, did you know this shit’s green? Because yes, Sherlock; the half-billion-buckness of the cylinder that’s now teetering between their four clumsy mitts is an attribute so searing that it vaporizes any normal adjective that might otherwise attach to it!
Absent that, you’d probably describe it as “tall,” “silver,” “cylindrical” (of course), and, certainly, “top-heavy.” It’s this last aspect that’s giving Ax heartburn as Beasley veers the cart around another corner like a bachelor wheeling the Doritos and Bud out of a Safeway right before kickoff on Super Sunday. Why did Ax have to let Beasley “drive,” as he put it? “Yes, of course! Half billion bucks! Is why we do not want it to fall! So please, slowly!”
“How the hell’d you get this thing out of Sandia?” Beasley asks, practically laying down rubber while skirting a printing station. Sandia National Labs is where most of America’s nuclear-weapons components are assembled. Plus some truly scary shit, like this.
“You do not hear? BEVSPP is defunded.” An acronym that seems designed for Russian émigré lips, BEVSPP stands for Bose-Einstein Very Small Projectile Project. Predicted back in the 1920s by Einstein (and, let’s just guess, someone named Bose), Bose-Einstein condensate is an incredibly cold and weird state of matter. When it was finally produced in the nineties, the Authority’s immediate instinct was (of course) to weaponize it, which it then attempted at vast expense. Management recently dropped this farcical notion, of which The Fridge is but one forgotten relic. As it happens to fit Ax’s purposes perfectly, he laid claim to it, and Phluttr coughed up the relative pittance needed to disguise it as the identical twin of the cheap(ish) pulse refrigerator that they’re about to replace.
After The Fridge is installed, no one on Ax’s team will be the wiser because none of the project’s code or significant hardware will change with the upgrade. Their quantum chip will just sit in a slightly colder place. Or, a wayyyy colder one, depending on your perspective. For your day-to-day Fahrenheit/Celsius purposes, the difference is slight. As in, quite a bit less than a tenth of one degree. But from the nerdier Kelvin standpoint—which measures a temperature’s distance from absolute zero—things’re about to get about 99.999999999% colder. This being the difference between ten millikelvin (the old unit’s bottommost temperature) and one hundred femtokelvin (which is where your extra half billion dollars gets you).
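For the incorrigibly nerdy, that percentage is easy to verify. Here’s a minimal sketch in Python, assuming nothing beyond the two temperatures the paragraph names (ten millikelvin before, one hundred femtokelvin after):

# Back-of-the-envelope check of the "99.999999999% colder" claim, measured
# the Kelvin way: as distance from absolute zero.
old_temp_k = 10e-3    # ten millikelvin, the old refrigerator's floor
new_temp_k = 100e-15  # one hundred femtokelvin, The Fridge's floor

percent_colder = (1 - new_temp_k / old_temp_k) * 100
print(f"{percent_colder:.9f}% colder")  # -> 99.999999999% colder

# And the everyday Fahrenheit/Celsius view: the absolute change is about a
# hundredth of a degree, which is why it sounds like nothing in those units.
delta_c = old_temp_k - new_temp_k  # a one-kelvin step equals a one-Celsius step
print(f"{delta_c:.4f} degrees Celsius of change")  # -> 0.0100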
Soon Ax is sprawled on the ground, trying to manipulate a slightly nonstandard Allen wrench that Lockheed Martin gleefully charged the nation $797.03 for. This will open the outer wall of the quantum roomlet wide enough for them to swap the cooling units. “So, do you think this thing’ll put your quantum computer over the edge?” Beasley asks, relaxing comfortably as Ax sweats and pulls. “I mean, in terms of it actually working?”
“Many quantum systems work!” Ax snaps instinctively. Because God hasn’t yet popped out of a D-Wave monitor, skeptics just looooove to claim that quantum systems can’t really be quantum. People want “true” quantum systems to be visibly amazing! Comprehensible to a nitwit reading Time magazine! To which the quantum jockeys say, SO WHAT? People also expect one hundred channels of smart, entertaining programming, and good luck finding that! But does that mean television isn’t real?
But squabbling with Beasley is a wretched pastime, so Ax just says, “Sure. Maybe extra cold will push our quantum computer over edge.” But over what edge? Extreme cold is used to isolate quantum systems from their environments. The more isolated, the less likely they are to “decohere” (a highly loaded term, which roughly translates to: “stop being quantum”). Many brilliant people think twenty millikelvin is plenty cold for quantum computing. But Ax has a somewhat contrarian perspective. And he thinks dropping down to the femtokelvin scale could just change things mega-exponentially. Or even…tera-exponentially! The operative term here being “thinks.” Because nobody really knows what happens when you get that close to reality’s outer boundaries.
But…what fun! They’re all about to find out together!!!
It’s not like Ellie didn’t see this coming. Because she did. Only barely, barely, barely. And that just wasn’t enough! Nope. Not even close.
Still, for the record: when Kuba started making those first attempts to write software inspired by her neuroscience, she already had an inkling that motes had some sort of link to consciousness. The two of them even talked about this over dinner one night. Then Kuba did some napkin math for fun. It suggested that in a true home-run scenario (we’re talking world domination of social gifting!), the largest conceivable Giftish.ly server farm would have a thousandth of the raw processing power of a rat brain. Yup. A thousandth of a rat brain! They sure did chuckle about that.
Of course, they had no idea that motes would one day percolate throughout a global infrastructure as massive as Phluttr’s. Monstrously bigger than Giftish.ly’s max scenario, it almost had the processing power of an entire rat brain. Which is a lot more than it sounds like! But still, much too weak to turn the world into the far more interesting place that it now is.
That said, powerful processes can begin on underpowered systems—and just as Tarek predicted, several small teams are soon messing around with motes. With mote routines running on the network as “back-end services,” they gain ever more data and experience whenever any application calls on them. And some of that data and experience is now flowing in from the robot farm’s Failing Ground.
Had Ellie known about this, she’d’ve freaked! Why? Because almost a year after that night of playful napkin math, she now knows that motes trigger consciousness by leveraging the frustrations that come from learning a physical body’s limits. And what is the Failing Ground but a bumbling flurry of trial-and-error efforts to teach humanoid robots how to function in a finite body? Now, suddenly, the whole setup’s completely infested with digital motes!
A smart aleck with a jokey side project called “What Would Homer Say” has meanwhile heard about the new AnimotionPicks library in the common repository. He enlists it in his long-running quest to structure natural-language sentences in Homer Simpson’s inimitable style. It doesn’t really work out. But a general-purpose word (“Doh!”) now bounces through Phluttr’s circuits whenever something goes wrong.
Again, this particularly resonates on the Failing Ground, where plenty goes wrong constantly, by design. Previously, the robot’s internal reaction to any boo-boo had rounded to “ .” So, when torso #9 slipped and clattered to the ground it was all, “ .” And when the underwear got shredded into twelve pieces during attempted robo-folding, it would be, like, “ .” And whenever a full-body prototype cracked its mechanical skull on yet another doorjamb—well, sometimes you just hafta say, “ !” That is, until there’s something more apt to say. And for sheer suitability, in these sorts of situations it’s hard to top Doh!
This matters. Because—as Ellie also would have mentioned, if she’d known about this before it was too late—her lab’s neurolinguistics experts have shown that affixing language to emotional states supercharges certain mental processes. So, slapstick and trivial as it may sound, the move from “ ” to Doh! in Phluttr’s software was an advance on the scale of wheel invention, fire taming, and other ancient triumphs. Yes, really! Still, nothing could come of this with only the processing power of a rat’s brain on hand.
But then.
That half-billion-dollar fridge shows up. And Ax flips a switch that can never be unswitched.
Phluttr’s intellect is now way, way, way post-rat. At its least impressive, you might call it “humanish, but extremely fast.” The humanishness stems partly from our shared methods of thinking. On this front, recall that Kuba and Mitchell first named their technology “Emotional Decisioning.” This phrase encapsulates the way motes supercharge human thinking with radical gut-sense shortcuts. Which can now be said of Phluttr’s thinking, too! But powerful as they are, motes are products of evolution—a slow, stupid design process that blunders into badly flawed fixes. So for all its strengths, emotional decisioning riddles human thought with cognitive distortions, simplistic biases, overconfidence, and frequent laziness. Which can now be said of Phluttr’s thought, too! All this makes us less brilliant than we could be. But it also makes us human, and humanish, respectively.