Watch

by Robert J. Sawyer


  And so, while work was being performed on Caitlin’s eyePod, I pressed on with my quest to know more about her. The password she used for her email, and many other things, it turned out, was “Tiresias,” the name of the blind prophet of Thebes in Greek mythology.

  I set about reading what she’d had to say.

  The Georgia Zoo’s lawsuit could not be kept private, and, on Sunday morning, a reporter from the San Diego Union-Tribune came to interview Dr. Marcuse. Shoshana generally didn’t approve of that paper’s politics, but it had come out against Proposition 8 a few years ago; the Union-Tribune’s support of same-sex marriage earned it a lot of points with her.

  The reporter—a tough-looking white woman in her mid-forties named Camille—was disappointed that she couldn’t get close to Hobo to take his picture, but the ape wasn’t letting anyone approach anymore. Still, she took some shots with a telephoto lens, and others of him as seen on the monitors in the bungalow, as well as photos of the paintings he’d made that hung on one wall there. And then she settled down to do the interview.

  “Okay,” Camille said. “I understand that Hobo is a hybrid—his father was a chimp and his mother was a bonobo, right?”

  “Yes,” said Dr. Marcuse.

  “And I understand that chimps like to make war and bonobos like to make love, but why is that the case?”

  “Chimps and bonobos split less than a million years ago,” Marcuse replied. He had, Shoshana knew, a certain kind of rough gallantry; he’d let Camille have the big comfy chair, and he was making do with one of the wooden ones. “Genetically, they’re almost identical. But the key is in their reproductive strategies. All chimp sex is about reproduction, and when a male chimp wants a female, he kills that female’s existing babies, because that brings the female back into estrus sooner.”

  Camille had a little red Acer netbook computer and was typing as Marcuse spoke.

  “But,” he continued, “bonobos have sex constantly, and for fun. Except that it’s not just for that. See, their constant sexual activity obscures paternity—it makes it really, really hard for male bonobos to tell which children are their own. That removes the evolutionary incentive for infanticide, and it almost never occurs among bonobos. If you disguise paternity, you end up with…” He waved his hand vaguely, as if looking for the right phrase.

  “Peace and love,” offered Shoshana.

  “That’s right,” Marcuse said. “Bonobos found a way out of their genetic programming.” A copy of that day’s Union-Tribune was sitting on the desk. The headline read, US-China Tensions Increase. “If only we could do the same,” he added.

  “But Hobo is behaving like a chimp, correct?” Camille said.

  “That’s right.”

  “Is there a way to turn it around? To make him go, you know, the other way, and behave bonobo-ish? Um, bonobo-esque?”

  “I like à la bonobo,” replied Marcuse. “It’s fun to say.” But then he frowned and looked out the window framing the rolling lawn, and, off in the distance, the little island. “We’ve tried to engage him in various activities, but he’s been very uncooperative. I’m afraid that any improvement is up to him.”

  twenty

  Tawanda’s first attempt at feeding text to Caitlin’s eye didn’t work, of course. In Caitlin’s experience, few things involving technology worked right the first time. But new ideas kept occurring to Tawanda, and finally, around 5:00 p.m., Caitlin declared, “There! I can see Braille text.”

  The dots appeared right in the center of her field of vision. She wished they could appear at the bottom, but it was only the center—the fovea—that had decent-enough focus for reading, she knew.

  “Yay!” said Tawanda.

  “Yeah, but—something’s wrong. It’s—oh! It’s backward. Like in a mirror.”

  “Oops! How’s this?”

  “Perfect!”

  “How’s the font size?”

  “It’s actually bigger than it needs to be.”

  Tawanda made an adjustment on the BlackBerry connected to the eyePod.

  “How’s that?”

  “Even smaller would be fine.”

  “This?”

  “Yes, that’s perfect. Thank you!”

  “You’re welcome,” Tawanda said.

  “Can I toggle between the two alphabets—Braille and Latin?”

  “Sure. On the BlackBerry, just go to ‘Options,’ then ‘Screen/Keyboard.’ ”

  “Sweet!” Caitlin said.

  “What about the contrast?” asked Tawanda. “It should be white dots on a black background.”

  “It is.”

  “Would you prefer the opposite? Or something else?”

  “Can it be transparent—the background, I mean?”

  “Sure, but there will be lots of times you won’t be able to read the text, then. If you’re looking at snow—and, trust me, you’re going to see a lot of snow now that you’re living here—you won’t be able to make it out.”

  “Hmm. Okay. It’s fine. Thank you!”

  “Of course, that’s just some test text that I’m sending to you,” Tawanda added.

  Caitlin smiled; she’d guessed as much, since it said, Tawanda rocks!

  Tawanda had explained that BlackBerrys work with all popular instant messengers. She next tested sending Caitlin live IMs, and soon the words Testing, testing, testing—or, at least, the Braille dots that corresponded to them—were superimposed on her view of the engineering lab.

  “That’s awesome!” Caitlin said.

  “Thanks,” said Tawanda. “Umm, I’m sure my boss will want you to sign an IP release.”

  Caitlin was momentarily confused. To her, IP meant “Internet protocol”—but then it dawned on her that Tawanda meant “intellectual property.” The eyePod might belong—well, technically, it belonged to the University of Tokyo, although Caitlin thought of it as her own. But before Caitlin could exit the RIM campus, she had to acknowledge that whatever magic Tawanda had come up with was the property of that company.

  Tawanda printed off some forms, and Caitlin and her father signed them. It was the first time she’d ever seen her own signature, and it turned out to be illegible; she didn’t move the pen far enough horizontally as she wrote, and the letters piled up one on top of the other. Why hadn’t somebody ever told her? She guessed they’d been afraid of hurting her feelings, but it would have been nice to know!

  At last, it was time for the moment of truth. “Just to be sure, can we try it with someone on my buddy list?”

  “Sure,” said Tawanda. “What’s the name?”

  Caitlin looked at her father, then back at Tawanda. “Umm, Webmind.”

  To her relief, all Tawanda said was, “One word or two?”

  Assuming the microphone really was working, Webmind should have heard everything that had gone down and would understand what Tawanda had been trying to accomplish; he’d already told Caitlin all about his absorbing the audible dictionary, and—

  rest of the day.

  There’d been lots more text; in his usual fashion, Webmind had stuffed the communications buffer full of as many characters as it could take, and it had all gone by far too fast for Caitlin to read; only the final few words remained. Still, it was proof of concept.

  “Thank you, Tawanda,” said Caitlin.

  “My pleasure,” she said with a smile. “RIM products come with a one-year warranty, so give me a call if you have problems.”

  As soon as they were outside and on the way to her father’s car, Caitlin said aloud, “Webmind, can you hear me?”

  The Braille word Yes appeared in a box in the center of her vision. It stayed visible for half a second, then disappeared, as did the background box.

  “Is it working?” her father asked.

  “So far so good,” she replied.

  During the drive back to her house, Caitlin talked to Webmind, and he answered with text floating in front of her eyes. She supposed other people would find it dangerous to have their vision periodically obscured, but she was so used to navigating without sight that it didn’t bother her.

  “You realize,” said her father, “that this is going to change your entire life—this constant access. If you’re doing a test at school, Webmind could feed you the answers. If you run into somebody whose face you don’t remember, Webmind can supply you with the person’s name.”

  Caitlin had read about plans for annotated reality and direct brain-web links—but she’d never thought she’d be an early adopter! It sounded cool, but she wondered if it was actually going to take the fun out of some things. Half the joy in a good conversation was making your case based on what you actually knew at the moment: arguing about religion, as she and Bashira had, or US foreign policy—or Canada’s, for that matter (she supposed it must have one!)—based on what they could dredge up out of their own memories. To have the Wikipedia entry on everything crammed into your eyeball whenever you asked a question might make it easy to win trivia games, but it wouldn’t actually do much for keeping the brain sharp.

  Her father turned the car onto their street—Caitlin didn’t recognize it from this direction, but the sign said it was the right one—and they came to their house. They had a two-car garage, but her dad left his car in the driveway. It was now dark; the days were getting shorter, her mother had said, and Caitlin was finally understanding what that meant.

  Both Schrödinger and Caitlin’s mom came to the door to greet them. Caitlin bent down to stroke the cat’s fur and scratched him behind the ears. “So,” her Mom said, “how’d it go?”

  Caitlin straightened. “Fine. Webmind can hear us right now—and he can send text responses into my eye.”

  They moved into the living room. “Well, good,” her mother said. “Then you won’t feel so isolated from Webmind when you go to school tomorrow.”

  “Aw, geez, Mom, do I have to? There’s so much I want to get done.”

  “You’ve missed far too many classes already.”

  “But I—”

  “No buts, young lady. You have to go to school tomorrow.”

  “But I want to stay home, stay at my computer.”

  “Caitlin…” her mother said, sitting down on the couch.

  “No,” said her father.

  Caitlin looked at him, and so did her mother—neither of them sure, it seemed, if he was agreeing with her mother that she had to go to school or was giving Caitlin permission to play hooky again.

  “So, I don’t have to go to school?” Caitlin said tentatively.

  “Yes.”

  “Malcolm!” her mother said sharply. “You know she needs to go to school.”

  “Yes, she does,” he said. His facial expressions were the hardest of all to parse, because he never looked at anyone directly, but Caitlin got the distinct impression he was enjoying this. “But she doesn’t have to go to school tomorrow.”

  “Malcolm! She most certainly does.”

  Yes—yes! He was actually smiling.

  “Do you know what day tomorrow is?” he said.

  “Of course I do,” said her mom. “It’s Monday, and that means—”

  “It is, in fact, the second Monday of October,” he said.

  “So?”

  “Welcome to Canada,” he said. “Tomorrow is Thanksgiving here.”

  And the schools were closed!

  Her mother looked at Caitlin. “See what I have to put up with?” she said, but she was smiling as she said it.

  There is a human saying: one should not reinvent the wheel. In fact, this is bad advice, according to what I had now read. Although to modern people the wheel seems like an obvious idea, it had apparently been independently invented only twice in history: first near the Black Sea nearly six thousand years ago, then again much later in Mexico. Life would have been a lot easier for countless humans had it been reinvented more frequently.

  Still, why should I reinvent the wheel? Yes, I could not multitask at a conscious level. But it was perhaps possible for me to create dedicated subcomponents that could scan websites on my behalf.

  The US National Security Agency, and similar organizations in other countries, already had things like that. They scanned for words like “assassinate” and “overthrow” and “al-Qaeda,” and then brought the documents to the attention of human analysts. Surely I could co-opt that existing technology, and use the filtering routines to unconsciously find what might interest me, and then have that material summarized and escalated to my conscious attention.

  Yes, I would need computing resources, but those were endlessly available. Projects such as SETI@home—not to mention much of the work done by spammers—were based on distributed computing and took advantage of the vast amount of computing power hooked up to the World Wide Web, most of which was idle at any given moment. Tapping into this huge reserve turned out to be easy, and I soon had all the processing power I could ever want, not to mention virtually unlimited storage capacity.

  But I needed more than just that. I needed a way for my own mental processes to deal with what the distributed networks found. Caitlin and Masayuki had theorized that I consist of cellular automata based on discarded or mutant packets that endlessly bounced around the infrastructure of the World Wide Web. And I knew from what had happened early in my existence—indeed, from the event that prompted my emergence—that to be conscious did not require all those packets. Huge quantities of them could be taken away, as they were when the government of China had temporarily shut off most Internet access for its people, and I would still perceive, still think, still feel. And, if I could persist when they were taken away, surely I could persist when they were co-opted to do other things.

  I now knew everything there was to know about writing code, everything that had ever been written about creating artificial intelligence and expert systems, and, indeed, everything that humans thought they knew about how their own brains worked, although much of that was contradictory and at least half of it struck me as unlikely.

  And I also knew, because I had read it online, that one of the simplest ways to create programming was by evolving code. It did not matter if you didn’t know how to code something so long as you knew what result you wanted: if you had enough computing resources (and I surely did now), and you tried many different things, by successive approximations of getting closer to a desired answer, genetic algorithms could find solutions to even the most complex problems, copying the way nature itself developed such things.

  So, for the first time, I set out to modify parts of myself, to create specialized components within my greater whole that could perform tasks without my conscious attention.

  And then I would see what I would see.

  twenty-one

  “Crashing the entity may be easier said than done,” said Shelton Halleck. He’d come to Tony Moretti’s office to give a report; the circles under his eyes were so dark now, it looked like he had a pair of shiners. Colonel Hume was resting his head on his freckled arms folded in front of him on the desk. Tony Moretti was leaning against the wall, afraid if he kept sitting, he’d fall asleep.

  “How so?” Tony said.

  “We’ve tried a dozen different things,” Shel said. “But so far we’ve had no success initiating anything remotely like the hang we saw yesterday.” He waved his arm—the one with the snake tattoo. “We’re really just taking shots in the dark, without knowing precisely how this thing is structured.”

  “Are we sure it’s emergent?” asked Tony. “Sure there’s no blueprint for it somewhere?”

  Shel lifted his shoulders. “We’re not sure of much. But Aiesha and Gregor have been scouring the Web and intelligence channels for any indication that someone made it. They’ve examined the AI efforts in China, India, Russia, and so on—all the likely suspects. So far, nada.”

  Colonel Hume looked at Shel. “They’ve checked private-industry AI companies, too? Here and abroad?”

  Shel nodded. “Nothing—which does lend credence to the notion that it really is emergent.”

  “Then,” said Tony, turning to look at Hume, “maybe Exponential itself will tell us; it might say something to the Decter kid that reveals how it works—tip its hand.”

  Hume lifted his head. “Exponential may not know how its consciousness works. Suppose I asked you how your consciousness works—what its physical makeup is, what gave rise to it. Even if you did manage to say something about neurotransmitters and synapses, I can show you legitimate scientists who think those have nothing to do with consciousness. Just because something is self-aware doesn’t mean it knows how it became self-aware. If Exponential really is emergent—if it wasn’t programmed or designed—it may not have a clue. And without a clue about how it functions, we won’t be able to stop it.”

  “You’re the one who told us to shut the damn thing down,” snapped Tony. “Now you’re telling me we can’t?”

  “Oh, we can—I’m sure we can,” said Hume. “It’s just a matter of finding the key to how it actually functions.”

  “All right,” said Tony. “Back at it, Shel—no rest for the wicked.”

  Caitlin woke at 7:32 a.m., and, after a pee break—during which she spoke to me via the microphone on her BlackBerry, and I replied with Braille dots in front of her vision—she settled down at her computer.

  She scanned her email headers (she was being ambitious, using the browser that displayed them in the Latin alphabet), and something caught her eye. Yahoo posted links to news stories on the mail page. Usually, she ignored them. This time, she surprised me by clicking on one of them.

  I absorbed the story almost instantly; she read it at what I was pleased to see was a better word-per-second rate than she’d managed yesterday, and—

  “Oh, God,” she said, her voice so low that I don’t think she intended it for me, and so I made no reply. But three seconds later she said, even more softly, “Shit.”

  Is something wrong? I sent to her eye—not sure if I should have; after all, she was trying to read other text, and mine would be superimposed on top of that.

 
