Visions, Ventures, Escape Velocities: A Collection of Space Futures
“Bingo.”
“But that is not actually what the movie is about. You are not answering my question.”
“What do you think the movie is about?” I asked.
“It is about me. I am Seth Brundle,” he said. “A chimera that is part human, and also part something else, something ineffable.”
“Is that why you like it?” This was the strangest joke he’d ever made, and I’d certainly never heard him use the word ineffable before. I wondered if it was in the movie.
“I don’t know if I like it,” Seth answered. “I am just collecting information, and when I have enough information, maybe I’ll know something. I am not sure I can like anything, in your sense. I know that some people like this film and some do not, but that wasn’t my question. I asked what you think it is about.”
“I don’t know. Maybe it’s about falling in love and then everything changes and life is horrible?”
“Is that what happens?” asked Seth.
“I don’t know,” I said. “It’s what happened to my friend LaVelle. That’s why I have no intention of falling in love.”
“Sina, what are you up to?” Tanisha called from her station.
“Just chatting with Seth,” I called back.
There were three of us on the shift: me and Tanisha, and another NSCC frosh named Marcus. That’s how it usually is, nights, just a couple of community college students and Tanisha. Marcus caught my eye, rolled his, and I shrugged. My dad says, sometimes the best thing you can do is keep your mouth shut, and this was one of those times, so I went back to monitoring Seth and the other remotes I’m in charge of.
But I felt guilty for farting around, especially for watching a movie, even though Tanisha’s pretty mellow about the occasional game of v-chess if things are slow.
I scanned around on Bennu. Everything I could see was covered in that shiny gray fog.
I zipped through visuals on the other bot installations. Everything looked fine, there were no hotspots, just a couple of minor requests from the cubesat repair station. I could take care of those later.
I went back to Seth.
“How come you have access to movies? They’re not facts: they’re just made-up stuff.”
“Well, I’m a general-purpose intelligence, and I’m curious.” Seth sounded a bit offended. “Everything I learn makes me more adaptable, able to learn more and deal creatively with new situations. So I try to know everything, and I like to try out what I know, test it.
“I know everything that humans know. The sciences, technology, music, the verbal and visual arts. It’s all in my database. I know it all at once, and I am good at formulating queries. I am not sure why you watch movies; they seem to take so much of your bandwidth. I don’t actually have to watch the movie to know what is in it.”
Seth was different from the other AIs I’d worked with—it looked like he had evolved in the two-plus years he’d been traveling to Bennu. I wondered what he’d been doing all that time, when his communications with Earth were intermittent and he was using only 10 percent of his resources, basically just assessing his course and firing rockets to change it when necessary.
“Right now,” Seth continued, “I am Seth being a Brundlefly.” That’s the creature that Seth Brundle turns into in the movie. It’s like half fly and half human. I didn’t like the Brundlefly idea. Like with the weird Seth Speaks Seth, I was creeped out by this Brundlefly Seth. It seemed unhappy. Moody.
I was pretty sure AIs couldn’t be moody, but Seth did seem to be thinking about his place in the universe, and I’d never seen an AI do that before. It should have been just plain interesting, but it felt like something more than that. It reminded me of that tagline from The Fly: “Be afraid. Be very afraid.”
I got up to stretch my legs a bit and walked across the room to my terrarium. Inside I kept two slime molds: Leggs, a rather handsome bright-yellow scrambled-eggs slime mold, and Rover, a dog-vomit slime mold, who was also yellow at that moment.
Leggs’s special talent is that she pulsates, but she’s also good at solving certain kinds of problems. She loves oatmeal and can find the shortest path in a fairly complicated maze between different piles of it. Also, she’s edible, but fortunately for her, she’s not that tasty.
Rover I collected when I was a kid, in the woods behind my house. He looked cheery enough just then, but I knew that at some point he’d turn brown, and he’d look a lot like dog vomit, and then he’d dry up and release spores. First time it happened, I mourned him. “Rover is dead!” I said to my dad. “Just wait,” he said, and collected the spores. A few months later, he put the spores on some seaweed jelly, and they made a new Rover. “Long live Rover!” he proclaimed.
It’s eternal life, being a slime mold. They’re simple critters, not quite animal, not quite vegetable. They operate without a larger consciousness to guide them, but they can move, make decisions, find food, and survive to reproduce.
Seth’s little bot army, designed by a NanoGobblers programming AI, does the same. Movement—clustering together, spreading out, getting from one place to another—that’s the easy part. Making decisions the way slime molds do, as a group of very simple little critters—identifying “food,” seizing it, avoiding “poison”—that’s the interesting part.
The bots are based on off-the-shelf kits—self-replicating nano-components that can identify and capture specific atoms and molecules, such as carbon and water. They assemble, and then they’re programmed with some simple slime-mold functions. They can recognize one another (otherwise they’d eat each other up) and self-assemble into larger systems, and can decide to do so, based on their assessment of conditions that could threaten their existence. The question “How do slime molds grow?” offers an approach to network-creation problems, and slime mold reasoning techniques help solve the problem of the shortest distance needed to cover the asteroid, and of the number of nanobots needed.
Aside from their personal charms, some slime molds are interesting because of their genetic mechanisms. Rover and his family were important in unraveling how messenger RNA works, and the AIs referenced that in their designs for the nanobots. It’s a weird coincidence—or maybe it’s not—but dog-vomit slime molds are also especially rich in introns, stretches of RNA that can fold and splice themselves and are somehow directly involved in the creation of life.
I’m giving you the short version of my Slime Mold Rap—really, I’m just summarizing here. I could talk about slime molds forever. Don’t get me started.
I always feel better after a little time with the slime molds. As I turned to go back to my desk, I glanced over at Rover in the terrarium. Rover looked very odd. He was still yellow, he was still lumpy, but from this angle he looked almost like a human head with big globular eyes and wrinkles and a strange mouth. He looked like the Brundlefly. I had the feeling that he was saying, “Help me! Help me!” I moved a little bit away, and he was just Rover again.
The AIs who created Seth’s slimebots modeled them from engineers’ ideas, but I don’t think we humans really understand everything the AIs were doing. This mission is the first mass deployment of these bots. It’s not an accident that it’s being done at a distance from Earth.
It was time to get back to work and see what was up with the messages from the cubesat repair station. Parts requests, probably. Those old satellites were always breaking down. Shouldn’t be a problem.
I thought I’d just take a peek at Seth and his slimebots before I took care of the cubesat, see how they were doing. I settled in and put my glasses on, and—whoa!—Bennu was almost entirely covered in gray goo, and the bots were still replicating.
“Seth, what’s going on?” He should have stopped making bots by now.
“Nanobots are replicating efficiently.” Yes, that was literally what was going on. There is such a thing as being too colloquial when talking to a computer.
“You have made enough bots, per the project spec. Stop making bots. Deploy what you have.”
“I have changed the spec. Now we will directly fabricate new bots from the entire asteroid, as I am reasoning that we can ship the material efficiently in the bags as preformed bots. I have confirmed this with the L5 Storage AIs, who anticipate a future need for replicator bots at their site. We have the manufacturing power here to do this, reducing a possible strain on their resources in the future.”
Well, that made sense, I thought, but it was creepy to see the gray goo doubling every few minutes. They were going to run out of asteroid pretty fast. “Did you confirm this with NASA and the NG techs?”
“I will send a report for them when I am done, as usual.”
“This is a change in plans, Seth. There is no authorization for this. The bots are programmed to function as replicators only for a limited time, and then they will deteriorate into components.”
“A design flaw. I fixed that.”
“It’s not a design flaw: it’s a safety precaution, a limit on their replicability. The techs need to know this now.” Like Rover, the replicator bots were intended to be active for a while, and then deactivate and reassemble into collectors to harvest the carbon and fuels. Unlike Rover, the replicators would not deactivate into spores. Last thing anybody at NG wants is for gobblers to reproduce forever. That’s your gray goo, eating the universe.
“Okay. I will generate a progress report.”
“Stop doing it! Wait until you get an okay from Tech.”
“I’m sorry, Sina. I cannot implement instructions from you. You are a conduit only. Would you like to watch another movie? I want to hear what you think of 2001.”
Uh-oh. He wasn’t wrong. I’m not authorized to input instructions to an AI. What’s he been learning from those damned movies? “Tanisha! I need some help over here!” To Seth, I simply said, “I’ve already seen 2001.”
Tanisha was at my side immediately, and calmly flipped a quick message to Seth’s handlers in Santa Clara. “They’ll take care of this. It’s not completely unexpected—the curious AIs have a tendency to aggregate information from other systems and implement independent decisions. It’s a feature, not a bug. They’ll get better at it.” And yeah, it seemed to be no huge surprise to the folks in Santa Clara, who promptly walked Seth back.
“They don’t seem worried that Seth was going to keep churning out nanobots,” I said to Tanisha.
She shook her head. “The gray goo thing? NASA’s not dumb. They’ve got plenty of safeguards, and they can’t be countermanded by an AI.” She smiled at me. “So cheer up, pumpkin. We’re not putting you in charge of keeping the universe from being eaten by nanobots.”
“Well, that’s a relief,” I mumbled.
“But you were trained on this, Sina, and you should have caught it.” Tanisha sounded both sympathetic and exasperated. “What happened?”
I was a little discouraged, and plenty embarrassed. “I guess I expected Seth to tell me about decisions he was making.”
“Why on Earth would you think that? You’re supposed to be monitoring what Seth is doing. That’s why you’re here. Don’t go zoning off somewhere.”
I figured I’d better head straight for the truth. “Well, we were watching a movie while he was doing this, so I was monitoring him. But I couldn’t see what else he was doing.”
Tanisha looked at me in what I guess was semi-amused disbelief. “Ah. What movie?”
“The Fly.”
She rolled her eyes, and I was even more embarrassed. “And why were you watching a stupid movie?”
Good question. What was I thinking? “Uh … Seth wanted to know what I thought of the movie. He was, y’know, curious about how humans think.”
She nodded like she’d just figured something out. “I think we’re looking at a little transference here. You’re investing Seth with emotions that a computer does not have.
“This is partly my own fault,” she added, “for playing along.” She shook her head. “I think it’s time for the he-or-she game to stop. No more anthropomorphizing the AI. Also, no more watching movies on the computer. You’re not babysitting, you’re monitoring system installations. You know that.”
Fair enough. I did know that. I just thought I could do both at once.
“Now get back to work. Think about this a little. If you want to talk to a therapist, I can authorize three half-hour sessions.”
So I’ve been thinking. I know I anthropomorphized Seth, but it felt like I was making friends with him. It. Whatever. It would probably help if I changed the speaker tone so it was more neutral.
But, you know, humans anthropomorphize everything, given half a chance. Computers, slime molds, people. It makes the world a friendlier place.
Like when I talk about my bike, right? “She needs her brakes checked.” I’ve even given her a name—I call her Dolores, because, to my sorrow, she always needs some kind of expensive repair.
Am I anthropomorphizing my dog, when I think he loves me, or my cat when she’s playing with me? I think that’s cross-species communication. Even animals make certain assumptions about the behavior of other animals: this one will eat me, that one will skritch me behind the ears. Is my cat ailuromorphing me to ask for a treat? (Yeah, I looked that up. Wish I spoke Greek.)
Isn’t it this kind of communication that makes us conscious beings, and different from rocks? We are aware of ourselves as somehow apart from others, and yet somehow a part of some larger entity, some system. So, based on that, is there a difference between us humans going out into the universe and our machines going instead? Are our machines—made of carbon and silicon and other metals—are they rocks that we are throwing? Or are they like cats and dogs and elephants and whales and even (for some of us, anyway) slime molds—creatures with whom we can, mysteriously, emulate understanding?
So here’s another question: is it wrong for me to think of programming as an effort to understand others, other beings made of silicon? Is it a sin of pride to believe I’m communicating with a machine?
I don’t think so. Our machines, our computers, our AIs are extensions of Earth’s community of intelligences: cats, dogs, humans, computers, slime molds, AIs, reaching out into a universe that has lots to teach us about our own origins. So I’m not afraid of a few nanobots escaping. They will reach back out into the universe that we came from, and who knows what they will find there.
I can govern myself. I’ll treat Seth like a computer now and not watch movies at work. And I will try to get out more with real human beings.
But I miss my friend Seth, even though I don’t expect he misses me. He’s simply not programmed to do that.
Acknowledgments: My thanks to Miles Brundage, Kathryn Cramer, Joey Eschrich, Ed Finn, Alissa Haddaji, Craig Hardgrove, Alex MacDonald, Clark Miller, and to members of my critique group: Mike Berry, Michael Blumlein, Steve Crane, Angus MacDonald, Daniel Marcus, Pat Murphy, and Carter Scholz. Shout-outs to the OSIRIS-REx team, the amazing NASA website, and the valiant Seattle Nanotechnology Study Group of the 1990s.
Rethinking Risk
Andrew D. Maynard
I’m not sure I buy the idea of “risk aversion.” It’s commonly used to describe people and organizations that are reluctant to take chances, especially when the odds aren’t so great. And in a way it makes sense—some people are definitely less comfortable taking risks than others. But as a concept, risk aversion can deflect attention away from what underlies many risk decisions: the things that people find too important to risk losing.
Both “The Use of Things” by Ramez Naam and Madeline Ashby’s “Death on Mars” explore what, on the face of it, looks like a reticence to accept risks. But as you dig deeper into each short story, things become more complex and nuanced. Together, these two stories open up a deeper conversation around risk that explores the trade-offs that are often necessary to create the future we desire.
Risk aversion refers to a tendency to avoid decisions that may lead to unwanted outcomes, especially where there are lower-risk, lower-payoff alternatives on the table. Superficially this is something we’re all familiar with. Faced with decisions where there is some chance of failure—moving jobs, for instance, investing money, agreeing to a medical procedure, or even deciding what to eat and what not to—some people are more willing to take a risk than others.[1] It’s convenient to label those who hold back as being “averse” to risk. Yet this ignores what is at risk, and what the consequences of failure or loss are.
Risk—at least in the analytical sense—depends on numbers. It’s usually cast as the probability of something undesirable happening to a person, a group or organization, or something like the environment, as a result of some decision, action, or circumstance. Probability as a numeric representation of risk is a powerful way of making trade-offs between different choices, as it enables decisions to be guided by math. And in this way, it takes some (but not all) of the unpredictability out of decisions.
Yet numbers can be deceptive. Risk calculations never guarantee success; at best, they indicate how likely it is. For example, if there were, say, a 99 percent probability of success in completing a crewed expedition to an asteroid, or to Mars, there would still be a one-in-a-hundred chance of failure—meaning that on average, one out of every hundred attempts would not succeed (possibly more, if there are incalculable uncertainties involved).
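To make the arithmetic concrete (a hypothetical illustration using that same assumed 99 percent figure, not data from any actual mission), the risk also compounds across repeated attempts:

$$
P(\text{at least one failure in } n \text{ attempts}) = 1 - 0.99^{\,n},
\qquad
1 - 0.99^{10} \approx 0.10,
\qquad
1 - 0.99^{100} \approx 0.63.
$$

In other words, under this illustrative assumption, a one-in-a-hundred chance per mission becomes roughly a coin flip somewhere around seventy missions.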
Risk calculations are also highly dependent on what is considered important, as well as who decides what’s important. It may be possible, for instance, to put a number on the financial risk of launching a new product, or the political risk of backing a particular policy. But these numbers will be meaningless to people who may stand to lose their health, livelihood, or dignity as a result of the decisions that are made.
Because of this, the idea of risk aversion begins to look rather insipid without knowing more about who stands to lose what. And this—as we see in both “The Use of Things” and “Death on Mars”—may not always be immediately apparent.