Three Laws Lethal


by David Walton


  It was her hideaway, a space barely large enough to lie down in, but that no one else in all the world knew about. Everywhere else in her life was public. She couldn’t afford to live in an apartment by herself, so she had to share with two other girls. One of those girls was her sister, Abby, which made it bearable, but she never really felt at home there. She had nowhere to be alone.

  Here in her secret lair, however, she felt safe. Her first year, she had lived in fear that someone else would discover it, and the secrecy of the space would be ruined. Now, as a senior, she was confident no one would. The thought that not a soul knew where she was or could find her if they wanted, not even Abby or her parents, made her relax like nothing else could. This was her place. It was her wardrobe, her rabbit hole, her subtle knife. An entrance to a world of her choosing.

  Through her triangular spyhole, Naomi watched a girl enter the library through the main doors below. Her glasses identified the girl as Asia Chantell, a sophomore education major. Under the name, a pop-up suggested a conversation starter: What do you think about the latest ruling on school vouchers? Naomi knew nothing about the ruling, but she knew if she approached Asia and started a conversation along those lines, the app would provide her with further prompts. Of course, Naomi had no intention of starting a conversation with Asia or anyone else, but it was good to know the help was there, for when conversations were unavoidable.

  She had written the app herself. Or trained it, actually. The number of software algorithms that were actually written, in the traditional sense, by people, was dwindling fast. Most apps—the thinking part of them, anyway—were machine learning algorithms, trained by feeding them data with known right answers. It was like teaching a child to recognize everything from a Chihuahua to a mastiff to a cartoon puppy as a “dog.” You didn’t do it by explaining the characteristics of a dog. You did it by pointing to things and saying, “That’s a dog,” or “That’s not a dog.”

  She called the app Jane. A bit presumptuous, perhaps, but she liked to imagine she was Ender in Speaker for the Dead, talking to an AI no one else knew existed. Her Jane wasn’t a great conversationalist yet, but Naomi was improving her little by little. Besides the personal data served up by the glasses, Jane could understand speech—there were plenty of open source libraries for speech recognition these days—and with that input, Jane could troll the web, searching for popular topics and appropriate responses to suggest to Naomi to use in conversation. The suggestions she served up were occasionally unhelpful, sometimes hilariously so, but Jane’s lurking presence in her glasses took some of the stress out of social interaction.

  Naomi settled down into the beanbag cushion she’d smuggled in a year before and rested her hand against the stack of Harry Potter novels next to her on the floor. She hadn’t stolen them, not really. They’d never left the library, after all. But she had hoarded them back here as her private treasure, to read and re-read when she needed them. She had grown up with these books. Harry and Hermione and Ron were as real to her as any of her classmates or friends. As a child, she had fallen asleep most nights to Jim Dale’s audiobook narration, the comfortable timbre of his voice as familiar as the words themselves. Even now, when life threatened to overwhelm, she could come here, choose a page at random, and slip into Hogwarts as if she had never left.

  Today, however, she had work to do and a decision to make.

  “Jane, launch Realplanet simulation number one.” She whispered it, not wanting to be overheard by anyone else in the library. In a quiet setting like this, with no ambient noise, the speech recognition software could understand a whisper, though Naomi looked forward to the day when she could simply subvocalize, as it worked in so many of the books she enjoyed.

  Her glasses turned opaque, blocking her vision of the real world, and another world took its place. A vast and beautiful landscape, a sparkling lake, rolling forests backed by distant blue mountains. As she turned her head, the glasses responded, showing the view from a different angle, as if she were really standing in this imaginary world. It wasn’t truly immersive—her peripheral vision revealed the deception, if she paid attention—but it was close enough for the willing mind to forget for a time and just believe. She controlled her motions using a wireless game controller in her lap. She could generally navigate it by touch and memory, though if she needed it, the program could temporarily transpose a view of her hands and the controller into the scene.

  This was Realplanet, the latest in the craze of open sandbox games that had started with Minecraft two decades earlier. This version provided not only an open world in which players could build creatively and try to survive, but also configurable laws of physics, opening the game’s creativity to a new level. The configuration could be as simple as reducing the strength of gravity, or as complex as defining fundamental substances and the laws by which they would combine, react, melt, or combust. The game was not only played by millions of children around the world but also used by scientists and engineers to simulate real-world experiments. Laws of thermodynamics and electromagnetism could be adjusted or defined, sometimes requiring thousands of lines of custom code in Realplanet’s native scripting language.

  This particular world represented her senior project, and with only two months to go until the end of school, it was shaping up into a complete failure. To her left stood a squat, ugly cabin made of wood—the only structure visible for miles. To call it a cabin took generosity. The walls more closely resembled stacks of badly cut tree branches than anything aesthetic. It had no windows to speak of, and only one door, which consisted of a piece of the wall that could be dragged far enough out of place to climb inside, and then dragged back again.

  Naomi used the game controller to maneuver her way inside, where the view was no better. The dull interior was filled with a month’s supply of yams in scattered piles on the floor and several bottles of water, the minimum supplies needed for survival. The cabin’s one occupant stood motionless in the middle of the room, not acknowledging her presence.

  The cabin’s occupant—whom she had named Mike—was the first of the deep learning bots Naomi had written and set loose to live or die in Realplanet’s unforgiving ecosystem. Mike—many different iterations of him—had played the game hundreds of thousands of times, learning a little more each time: how to find food and water, how to build a shelter, how to avoid or defend against the dangerous creatures that prowled the night. Naomi hadn’t programmed this knowledge into the software. It had played and died, played and died, the value of each attempt measured by the length of time it managed to stay alive, until it learned how to survive indefinitely.

  But Mike hadn’t lived up to his namesake from Heinlein’s The Moon Is a Harsh Mistress. The point of the experiment had been to see what he would do after he learned to survive. Would he explore his world, looking for better sources of food or more efficient building materials? Would he expand his shack into a palace? What creative things would he do with his time once he had mastered the skills of survival?

  Instead, he had done nothing. He collected what he needed with a minimum of effort, and then stood motionless in his shelter for days on end, pausing only to eat and drink when necessary. Naomi knew the programming was solid. Mike was a recurrent neural network, the most involved and complicated software she’d ever written. But it was nothing new. It wasn’t substantially different from what Google DeepMind had done more than a decade earlier, when they used general AI algorithms to master Atari video games like Centipede and Space Invaders blindly, with no prior knowledge of the rules of the game and no input but the pixels from the screen and the score. Maybe back then it would have been enough to get her a good graduate school placement, but not anymore.

  This wasn’t the only version of Mike and his world she’d attempted—not by a long shot. She had put him in worlds with harsher climates and physical laws, worlds where survival took more creativity and cleverness. She introduced seasons, so there would be times of plenty and times of want, requiring long-term planning and forethought. But simulation two turned out the same as simulation one, as did simulations three, four, five, six, seven, and eight. Mike eventually learned to survive, but no more. He had no external sense of self, no drive to invent, no longings, no curiosity. He was just a set of instructions, more complicated than most, but no different in kind from a stack of ENIAC punch cards.

  John Searle, a philosopher at the University of California, Berkeley, once posed the Chinese room thought experiment, suggesting that a computer could never truly be conscious. The experiment imagines a person who does not know Chinese sitting in a room, where he is passed slips of paper with Chinese ideograms written on them. He looks the ideograms up in a book and writes down the appropriate response, which he passes out through the door. To those outside the room, it appears as if the room speaks Chinese, but all of the intelligence went into creating the book in the first place—the person in the room has no comprehension of the conversation. Searle argued that a computer, no matter how sophisticated its responses, would have no more understanding of the meaning of its responses than the person in the Chinese room.

  As time after time her experiments failed to show any evidence of emergent intelligence, Naomi began to fear that Searle was right.

  Her advisor had, of course, told Naomi the same thing, strongly suggesting that she take on a more achievable project. Naomi had politely agreed, and then gone on to work on the idea anyway. She couldn’t help it. It was the only project topic she really cared about. Now, however, she was at the point of no return. Her advisor had offered her a way out, if she chose to take it. She could spend the rest of the semester adjusting her research to show how machine learning could assist child education through game play. If she worked hard, she would still have time to do the work and write the paper. It wouldn’t be groundbreaking, but it would be finished.

  Naomi had dreamed of artificial intelligence, true strong AI, since she was a kid. Her dual majors in computer science and cognitive science had been specifically aimed at that goal. Yes, pundits had been predicting intelligent machines within twenty years for most of the last century, but now they seemed so close. Speech recognition and synthesis, face recognition, natural language processing, intelligent conversational agents: all of these had become everyday miracles, their interactions often mistaken for human, within narrow contexts.

  She didn’t care about graduate school placements, not really, except as a way to continue the work. She didn’t want prestige or a high salary. She wanted to understand what made the human mind unique. She wanted to realize the dream of all those novels she had grown up reading.

  She would worry about her advisor tomorrow.

  “Jane,” she said, “launch a new simulation.”

  She had to try something new. Not just a tweak to a previous idea, but something completely different. On a whim, instead of one Mike, she placed a hundred copies of Mike in the same simulation. A hundred different avatars, roaming the world with independent bodies and minds. And feeling bloody-minded, she provided the world with only enough resources of food and water to support five players. The hundred Mikes each had the same objective: to survive.

  She started the Mikes from scratch again, with no knowledge of how to live, essentially just trying random actions. The first run finished quickly, with all the Mikes dead. The second, third, and fourth did the same. This was their training period, however, when they learned what it took to survive a little longer with each iteration, learning through random actions that happened to produce better results.

  As the iterations proceeded, those who managed to live longer—even if just by moments—would improve their knowledge of how to play the game the next time. Since it was competitive, most would still die quickly, leaving a maximum of five to survive long-term. She didn’t know what difference it would make, but perhaps the need to compete would spark the need for innovation that simply surviving had not.

  The iterations flew by quickly, but it would still take time for there to be any meaningful results. She left it running and—after making sure no one was nearby—sidled around the bookshelf, slipping out of her private Narnia and back into the real world.

  She bought a Caesar salad with extra croutons from one of the campus cafeterias and sat outside to eat it. The weather was cold, but she preferred to be alone instead of in the noise and bustle of other students eating food with friends. She took her time, enjoying the meal and the crisp air, and wrapping her fingers around her hot coffee when they started to feel numb.

  After lunch, she had a class to attend. It was a human psychology class she’d signed up for to get a better idea of how the human mind worked. The class had turned out to be a disappointment, but she always went anyway. She rarely skipped classes. She always imagined what it would be like to be a professor and have no one show up for your class, and then she felt obligated to go, in case she was the only one who did. It never happened that way, but she still never quite had the heart to stay away.

  At the end of the hour, she returned to the library and slipped into her secret nook. She settled on the beanbag chair, pulled the game controller off of a shelf, and told Jane to start the new simulation.

  The world sprang into being through her glasses. At first, she suspected a glitch, some kind of mistake on her part, or perhaps even a bug in the Realplanet software itself. All she could see, in any direction, was a tangle of brown lines. She tried to move, but the lines had substance, blocking her path. Naomi switched to superuser status, and used that power to fly up off the ground, the brown tangles passing through her now as if she were made of smoke.

  In moments, she cleared them, and the familiar view of distant mountains and sky greeted her. From this vantage, she could see that the brown tangles were a wall or fence of some kind, encircling a large area of ground. As she flew higher, she saw that all the land in view had been subdivided this way, a series of roughly square shapes stretching for miles like a vast chessboard.

  What were they? Perhaps each of the Mikes had erected the fences as borders around their own fiefdoms, a way to keep the others from stealing or killing. But that didn’t make much sense. Only some of the areas had access to the lake or to the forest, and some of them were entirely surrounded by others. Only a few of the enclosures had buildings, though they were large ones, made at least partly of metal that glinted in the sunlight. She saw several enclosures stuffed full of sheep or cows, grazing contentedly.

  A Mike below her walked across one enclosure toward the tangled brown fence. He disappeared for a moment inside it, and then emerged again on the other side in a new enclosure. Naomi flew down to investigate, but she could see no obvious door or path. Then she understood. It was like a maze. The AIs would have no difficulty remembering the complicated set of movements needed to weave their way through this tangle of briers. The wolves and tigers and balrogs and jabberwocks of the world, however, never could. The Mikes had compartmented their world into safe regions, passable by them but not by the predators. Even when a predator spawned inside an enclosure, the Mikes could escape and then kill it while keeping it contained.

  This wasn’t just a pattern of fences. It was a village. The Mikes were working together.

  With growing excitement, Naomi checked the statistics for the world, and found that fifty-two Mikes were still alive of the original hundred. But that wasn’t possible. She had designed the world to support a maximum of five. This particular iteration had been running for only a few minutes, but in game time, it was the equivalent of years. For the moment, she had slowed down its normal breakneck running speed to real time, so she could observe, but there had been plenty of time for the population to die off.

  Why hadn’t they? There wasn’t enough food in the world to support fifty-two players. Yes, they could breed animals to create food, but there was only so much grass for those animals to eat. The players could grow grass, but it grew only slowly, not at a sufficient rate to feed the number of animals required to keep fifty-two players alive. The same applied to planting edible crops. Most of the ground was unsuitable for planting, leaving a maximum amount that could be grown, and planting crops would compete with planting grass for domestic animals. With this world’s straightforward rules, the math was simple enough.

  And yet they were alive. She examined one of the sheep enclosures more closely. The area was absolutely stuffed with sheep, and nearly stripped bare of grass. With this many sheep in such a tight place, the grass couldn’t have lasted very long. Several adjacent enclosures, however, were empty except for verdant swaths of uneaten grass.

  As she watched, one of the Mikes destroyed a section of fence, allowing the sheep access to the next enclosure. They pushed through, bleating joyously, into the fresh new grass, leaving bare ground behind. Once all the sheep were through, the Mike repaired the gap in the fence.

  But how did the grass grow so fast? At the rate this number of sheep ate, they would eradicate the grass in all the enclosures before nightfall, and new grass wouldn’t grow back for days. A shadow fell across the sheep from one of the shining metal buildings. She would have to investigate those, too. It made sense for the Mikes to build their living structures tall; it left more ground for planting. But the structures could have been built on rocky ground, instead of here, where the ground was fertile for planting. And why were they covered in metal? Metal had to be mined from deep underground. It had to be melted down and forged. Why not simple wood or stone?

  The sun shifted, reflecting off of the buildings in a bright glare. And then she understood. Taking to the sky again, she looked down on the pattern of enclosures and confirmed her suspicion. The buildings were shaped and placed in such a way as to reflect sunlight away from rocky ground and onto the grassy enclosures. As the sun moved, the reflected light moved, bathing each enclosure in an extra dose of sunlight. In the world’s simple mechanics, growth was directly correlated with amount of sunlight. The Mikes had found a way to beat the math.
