I introduced Carlos, Maria and Jun, but then they made themselves scarce as I showed Francine around. We still had a demonstration of the “balanced decoupling” principle set up on a bench, for the tour by one of our corporate donors the week before. What caused an imperfectly shielded quantum computer to decohere was the fact that each possible state of the device affected its environment slightly differently. The shielding itself could always be improved, but Carlos’s group had perfected a way to buy a little more protection by sheer deviousness. In the demonstration rig, the flow of energy through the device remained absolutely constant whatever state it was in, because any drop in power consumption by the main set of quantum gates was compensated for by a rise in a set of balancing gates, and vice versa. This gave the environment one less clue by which to discern internal differences in the processor, and to tear any superposition apart into mutually disconnected branches.
Francine knew all the theory backwards, but she’d never seen this hardware in action. When I invited her to twiddle the controls, she took to the rig like a child with a game console.
“You really should have joined the team,” I said.
“Maybe I did,” she countered. “In another branch.”
She’d moved from UNSW to Berkeley two years before, not long after I’d moved from Delft to São Paulo; it was the closest suitable position she could find. At the time, I’d resented the fact that she’d refused to compromise and work remotely; with only five hours’ difference, teaching at Berkeley from São Paulo would not have been impossible. In the end, though, I’d accepted the fact that she’d wanted to keep on testing me, testing both of us. If we weren’t strong enough to stay together through the trials of a prolonged physical separation—or if I was not sufficiently committed to the project to endure whatever sacrifices it entailed—she did not want us proceeding to the next stage.
I led her to the corner bench, where a nondescript grey box half a metre across sat, apparently inert. I gestured to it, and our retinal overlays transformed its appearance, “revealing” a maze with a transparent lid embedded in the top of the device. In one chamber of the maze, a slightly cartoonish mouse sat motionless. Not quite dead, not quite sleeping.
“This is the famous Zelda?” Francine asked.
“Yes.” Zelda was a neural network, a stripped-down, stylized mouse brain. There were newer, fancier versions available, much closer to the real thing, but the ten-year-old, public domain Zelda had been good enough for our purposes.
Three other chambers held cheese. “Right now, she has no experience of the maze,” I explained. “So let’s start her up and watch her explore.” I gestured, and Zelda began scampering around, trying out different passages, deftly reversing each time she hit a cul-de-sac. “Her brain is running on a Qusp, but the maze is implemented on an ordinary classical computer, so in terms of coherence issues, it’s really no different from a physical maze.”
“Which means that each time she takes in information, she gets entangled with the outside world,” Francine suggested.
“Absolutely. But she always holds off doing that until the Qusp has completed its current computational step, and every qubit contains a definite zero or a definite one. She’s never in two minds when she lets the world in, so the entanglement process doesn’t split her into separate branches.”
Francine continued to watch, in silence. Zelda finally found one of the chambers containing a reward; when she’d eaten it, a hand scooped her up and returned her to her starting point, then replaced the cheese.
“Here are 10,000 previous trials, superimposed.” I replayed the data. It looked as if a single mouse was running through the maze, moving just as we’d seen her move when I’d begun the latest experiment. Restored each time to exactly the same starting condition, and confronted with exactly the same environment, Zelda—like any computer program with no truly random influences—had simply repeated herself. All 10,000 trials had yielded identical results.
To a casual observer, unaware of the context, this would have been a singularly unimpressive performance. Faced with exactly one situation, Zelda the virtual mouse did exactly one thing. So what? If you’d been able to wind back a flesh-and-blood mouse’s memory with the same degree of precision, wouldn’t it have repeated itself too?
Francine said, “Can you cut off the shielding? And the balanced decoupling?”
“Yep.” I obliged her, and initiated a new trial.
Zelda took a different path this time, exploring the maze by a different route. Though the initial condition of the neural net was identical, the switching processes taking place within the Qusp were now opened up to the environment constantly, and superpositions of several different eigenstates—states in which the Qusp’s qubits possessed definite binary values, which in turn led to Zelda making definite choices—were becoming entangled with the outside world. According to the Copenhagen interpretation of quantum mechanics, this interaction was randomly “collapsing” the superpositions into single eigenstates; Zelda was still doing just one thing at a time, but her behaviour had ceased to be deterministic. According to the MWI, the interaction was transforming the environment—Francine and me included—into a superposition with components that were coupled to each eigenstate; Zelda was actually running the maze in many different ways simultaneously, and other versions of us were seeing her take all those other routes.
Which scenario was correct?
I said, “I’ll reconfigure everything now, to wrap the whole setup in a Delft cage.” A “Delft cage” was jargon for the situation I’d first read about 17 years before: instead of opening up the Qusp to the environment, I’d connect it to a second quantum computer, and let that play the role of the outside world.
We could no longer watch Zelda moving about in real time, but after the trial was completed, it was possible to test the combined system of both computers against the hypothesis that it was in a pure quantum state in which Zelda had run the maze along hundreds of different routes, all at once. I displayed a representation of the conjectured state, built up by superimposing all the paths she’d taken in 10,000 unshielded trials.
The test result flashed up: CONSISTENT.
“One measurement proves nothing,” Francine pointed out.
“No.” I repeated the trial. Again, the hypothesis was not refuted. If Zelda had actually run the maze along just one path, the probability of the computers’ joint state passing this imperfect test was about one percent. For passing it twice, the odds were about one in 10,000.
I repeated it a third time, then a fourth.
Francine said, “That’s enough.” She actually looked queasy. The image of the hundreds of blurred mouse trails on the display was not a literal photograph of anything, but if the old Delft experiment had been enough to give me a visceral sense of the reality of the multiverse, perhaps this demonstration had finally done the same for her.
“Can I show you one more thing?” I asked.
“Keep the Delft cage, but restore the Qusp’s shielding?”
“Right.”
I did it. The Qusp was now fully protected once more whenever it was not in an eigenstate, but this time, it was the second quantum computer, not the outside world, to which it was intermittently exposed. If Zelda split into multiple branches again, then she’d only take that fake environment with her, and we’d still have our hands on all the evidence.
Tested against the hypothesis that no split had occurred, the verdict was: CONSISTENT. CONSISTENT. CONSISTENT.
We went out to dinner with the whole of the team, but Francine pleaded a headache and left early. She insisted that I stay and finish the meal, and I didn’t argue; she was not the kind of person who expected you to assume that she was being politely selfless, while secretly hoping to be contradicted.
After Francine had left, Maria turned to me. “So you two are really going ahead with the Frankenchild?” She’d been teasing me about this for as long as I’d known her, but apparently she hadn’t been game to raise the subject in Francine’s presence.
“We still have to talk about it.” I felt uncomfortable myself, now, discussing the topic the moment Francine was absent. Confessing my ambition when I applied to join the team was one thing; it would have been dishonest to keep my collaborators in the dark about my ultimate intentions. Now that the enabling technology was more or less completed, though, the issue seemed far more personal.
Carlos said breezily, “Why not? There are so many others now. Sophie. Linus. Theo. Probably a hundred we don’t even know about. It’s not as if Ben’s child won’t have playmates.” Adai—Autonomously Developing Artificial Intelligences—had been appearing in a blaze of controversy every few months for the last four years. A Swiss researcher, Isabelle Schib, had taken the old models of morphogenesis that had led to software like Zelda, refined the technique by several orders of magnitude, and applied it to human genetic data. Wedded to sophisticated prosthetic bodies, Isabelle’s creations inhabited the physical world and learnt from their experience, just like any other child.
Jun shook his head reprovingly. “I wouldn’t raise a child with no legal rights. What happens when you die? For all you know, it could end up as someone’s property.”
I’d been over this with Francine. “I can’t believe that in ten or twenty years’ time there won’t be citizenship laws, somewhere in the world.”
Jun snorted. “Twenty years! How long did it take the U.S. to emancipate their slaves?”
Carlos interjected, “Who’s going to create an adai just to use it as a slave? If you want something biddable, write ordinary software. If you need consciousness, humans are cheaper.”
Maria said, “It won’t come down to economics. It’s the nature of the things that will determine how they’re treated.”
“You mean the xenophobia they’ll face?” I suggested.
Maria shrugged. “You make it sound like racism, but we aren’t talking about human beings. Once you have software with goals of its own, free to do whatever it likes, where will it end? The first generation makes the next one better, faster, smarter; the second generation even more so. Before we know it, we’re like ants to them.”
Carlos groaned. “Not that hoary old fallacy! If you really believe that stating the analogy ‘ants are to humans, as humans are to x’ is proof that it’s possible to solve for x, then I’ll meet you where the south pole is like the equator.”
I said, “The Qusp runs no faster than an organic brain; we need to keep the switching rate low, because that makes the shielding requirements less stringent. It might be possible to nudge those parameters, eventually, but there’s no reason in the world why an adai would be better equipped to do that than you or I would. As for making their own offspring smarter ... even if Schib’s group has been perfectly successful, they will have merely translated human neural development from one substrate to another. They won’t have ‘improved’ on the process at all—whatever that might mean. So if the adai have any advantage over us, it will be no more than the advantage shared by flesh-and-blood children: cultural transmission of one more generation’s worth of experience.”
Maria frowned, but she had no immediate comeback.
Jun said dryly, “Plus immortality.”
“Well, yes, there is that,” I conceded.
Francine was awake when I arrived home.
“Have you still got a headache?” I whispered.
“No.”
I undressed and climbed into bed beside her.
She said, “You know what I miss the most? When we’re fucking on-line?”
“This had better not be complicated; I’m out of practice.”
“Kissing.”
I kissed her, slowly and tenderly, and she melted beneath me. “Three more months,” I promised, “and I’ll move up to Berkeley.”
“To be my kept man.”
“I prefer the term ‘unpaid but highly valued caregiver.’ ” Francine stiffened. I said, “We can talk about that later.” I started kissing her again, but she turned her face away.
“I’m afraid,” she said.
“So am I,” I assured her. “That’s a good sign. Everything worth doing is terrifying.”
“But not everything terrifying is good.”
I rolled over and lay beside her. She said, “On one level, it’s easy. What greater gift could you give a child, than the power to make real decisions? What worse fate could you spare her from, than being forced to act against her better judgment, over and over? When you put it like that, it’s simple.
“But every fibre in my body still rebels against it. How will she feel, knowing what she is? How will she make friends? How will she belong? How will she not despise us for making her a freak? And what if we’re robbing her of something she’d value: living a billion lives, never being forced to choose between them? What if she sees the gift as a kind of impoverishment?”
“She can always drop the shielding on the Qusp,” I said. “Once she understands the issues, she can choose for herself.”
“That’s true.” Francine did not sound mollified at all; she would have thought of that long before I’d mentioned it, but she wasn’t looking for concrete answers. Every ordinary human instinct screamed at us that we were embarking on something dangerous, unnatural, hubristic—but those instincts were more about safeguarding our own reputations than protecting our child-to-be. No parent, save the most wilfully negligent, would be pilloried if their flesh-and-blood child turned out to be ungrateful for life; if I’d railed against my own mother and father because I’d found fault in the existential conditions with which I’d been lumbered, it wasn’t hard to guess which side would attract the most sympathy from the world at large. Anything that went wrong with our child would be grounds for lynching—however much love, sweat, and soul-searching had gone into her creation—because we’d had the temerity to be dissatisfied with the kind of fate that everyone else happily inflicted on their own.
I said, “You saw Zelda today, spread across the branches. You know, deep down now, that the same thing happens to all of us.”
“Yes.” Something tore inside me as Francine uttered that admission. I’d never really wanted her to feel it, the way I did.
I persisted. “Would you willingly sentence your own child to that condition? And your grandchildren? And your great-grandchildren?”
“No,” Francine replied. A part of her hated me now; I could hear it in her voice. It was my curse, my obsession; before she met me, she’d managed to believe and not believe, taking her acceptance of the multiverse lightly.
I said, “I can’t do this without you.”
“You can, actually. More easily than any of the alternatives. You wouldn’t even need a stranger to donate an egg.”
“I can’t do it unless you’re behind me. If you say the word, I’ll stop here. We’ve built the Qusp. We’ve shown that it can work. Even if we don’t do this last part ourselves, someone else will, in a decade or two.”
“If we don’t do this,” Francine observed acerbically, “we’ll simply do it in another branch.”
I said, “That’s true, but it’s no use thinking that way. In the end, I can’t function unless I pretend that my choices are real. I doubt that anyone can.”
Francine was silent for a long time. I stared up into the darkness of the room, trying hard not to contemplate the near certainty that her decision would go both ways.
Finally, she spoke.
“Then let’s make a child who doesn’t need to pretend.”
2031
Isabelle Schib welcomed us into her office. In person, she was slightly less intimidating than she was on-line; it wasn’t anything different in her appearance or manner, just the ordinariness of her surroundings. I’d envisaged her ensconced in some vast, pristine, high-tech building, not a couple of pokey rooms on a back-street in Basel.
Once the pleasantries were out of the way, Isabelle got straight to the point. “You’ve been accepted,” she announced. “I’ll send you the contract later today.”
My throat constricted with panic; I should have been elated, but I just felt unprepared. Isabelle’s group licensed only three new adai a year. The short-list had come down to about a hundred couples, winnowed from tens of thousands of applicants. We’d travelled to Switzerland for the final selection process, carried out by an agency that ordinarily handled adoptions. Through all the interviews and questionnaires, all the personality tests and scenario challenges, I’d managed to half-convince myself that our dedication would win through in the end, but that had been nothing but a prop to keep my spirits up.
Francine said calmly, “Thank you.”
I coughed. “You’re happy with everything we’ve proposed?” If there was going to be a proviso thrown in that rendered this miracle worthless, better to hear it now, before the shock had worn off and I’d started taking things for granted.
Isabelle nodded. “I don’t pretend to be an expert in the relevant fields, but I’ve had the Qusp’s design assessed by several colleagues, and I see no reason why it wouldn’t be an appropriate form of hardware for an adai. I’m entirely agnostic about the MWI, so I don’t share your view that the Qusp is a necessity, but if you were worried that I might write you off as cranks because of it,” she smiled slightly, “you should meet some of the other people I’ve had to deal with.
“I believe you have the adai’s welfare at heart, and you’re not suffering from any of the superstitions—technophobic or technophilic—that would distort the relationship. And as you’ll recall, I’ll be entitled to visits and inspections throughout your period of guardianship. If you’re found to be violating any of the terms of the contract, your licence will be revoked, and I’ll take charge of the adai.”