I started physical therapy for post-Covid recovery in May. First, my physical therapist and I talked about goals and expectations. No one knew entirely what “long Covid” was yet, or what to expect, but it seemed sensible to focus on breathing exercises and slowly building up strength and stamina.
Next, I did that bike test. Six minutes at low wattage to figure out a baseline. It was followed by 48 hours of fever, joint pain, and trouble breathing.
Turns out, our starting point was Not That, so the next weeks were focused on trying to determine what the baseline was. Five-minute walks? Ten minutes? Fifteen minutes? (Yes, that last one was ridiculous at six weeks post-illness. But let it never be said that I can’t be a stubborn fool if I want to be. I just wanted to feel better.)
It was hard. It was hard because I was terrified to fall ill again. It was hard because I’d lived with chronic fatigue before and it took me years to recover.
But the only thing I could do was walk for five minutes, then spend the rest of the day lying down.
The only thing I could do was gradually learn to walk for ten minutes before I had to lie down again. Or maybe spend some of the day sitting up. Or push too hard and try too much, and fall back. I did that a fair few times too.7
I read Kel’s books in the midst of that struggle. And perhaps as a result of that, this year, for the first time, I connected with Kel’s determination in a way I’d never done before. Like probably most disabled people, I have a visceral response to the word inspiration. But she inspired me. That simple message that I know is so much more complicated in real life, gave me something to reach for.
“…if she kept exercising, she would do better soon.”
It resonated with me so strongly, to see a character struggle and gradually improve. When I dove back into the books, Kel’s determination became one of the things that helped me keep going.
I read Alanna’s books when I needed to read them. I reread Kel’s when I did, too.
If I kept exercising, perhaps I would do better soon.
And if not soon, then steady.
Five minutes on an exercise bike. Once a day. One step at a time.
To my own shock—and perhaps slight horror—Kel’s determination to find consistent improvement (combined with the insistence of my physical therapist) also helps me pace myself.
It helps to see Kel prove herself, not by trying to change the people around her or by trying to force what she cannot influence, but by keeping her head down and working. By staunchly doing what she has to do. The protector of the small reminds me to take small steps. It’s better to gradually expand limits than to push through them.
Even if it takes time. Even if it takes months.
During my most recent physical therapy session, we did another baseline test. A nine-minute walk—cut down from twelve halfway through, because my oxygen saturation dropped and we’re being sensible now8—and a handful of exercises with weights.9 By the end of it, I’m tired but not exhausted. I take the slightly longer walk home.
I spend the afternoon on my feet, baking gingerbread cookies.
And the next day, I go through my exercises with sore muscles. It feels both awful and wonderful, all at once. We’re not there yet, but it’s progress, and that’s what matters. It’s not a constant upward climb, but it’s a steady journey in the right direction.
I’d like to think Kel would approve.
1 My first introduction was to Alanna, Daine, Kel, and Aly. And although Beka’s first book—Terrier—had just come out, I didn’t devour it with that same immediacy. Those were different, both narratively and emotionally, and while I enjoyed keeping up with the series, unlike the rest, I haven’t reread them since. Maybe someday.
2 Yes, I know there’s a Netflix series. It’s, ah, rather loosely based on the book, both for better and for worse.
3 Um. Pun not intended.
4 One day, perhaps, I’ll write about disability in the Protector of the Small series and how casually harmful many of the throwaway comments are.
5 I know many people read Alanna as autistic, and I certainly understand why. I see bits of it in Aly too, in her learning to read body language and facial expressions. But oh, I love how for Kel her emotions aren’t the be-all-end-all of her, and that her rational approach is never shown as lesser than or uncaring. Even when she’s bullied for her apparent lack of feelings, Kel doesn’t change. Seeing that mattered.
6 There are quite a few swords and daggers in my office. That can’t be particularly surprising.
7 Let me also note here: I can advise against pushing too hard. But beyond that, recovery looks different for everyone. What works for me may not work for others, and it’s important to recognize that.
8 Character growth! (Maybe.)
9 Some of the same exercises that I’ve done before countless times, and that I’d based Babs’ journey in The Oracle Code on. Some days, it seems, the universe has a peculiar sense of humor.
© 2021 Marieke Nijkamp
Marieke Nijkamp is a #1 New York Times bestselling author of YA novels, graphic novels, and comics, including This Is Where It Ends, Even If We Break, and The Oracle Code. Marieke is a storyteller, dreamer, globe-trotter, and geek.
Please Be Kind to the Singularity
by Jay Edidin
When I was young, I tried to sleep with all my stuffed animals at once. Those who spilled over were on rotation; I knew they weren’t alive, but I was horrified by the idea that they might nonetheless be capable of feeling left out.
Stuffed toys at least looked like animals; it made sense that they should be treated with similar consideration. Less anthropomorphic objects were more challenging. If a plush rabbit could be lonely, so, surely, could a book, a spoon, a ball; and who was to tell how they might feel, or what sort of care they would want? I didn’t want to hurt anything. I didn’t want to be party to circumstances through which anything conscious—however alien—might be hurt.
I suspect—although with no particular evidence—that this is a pretty common experience among anxious or neurodivergent kids, especially ones who frequently find themselves hurt via misunderstanding. I’ve grown out of it—somewhat—with the passage of time and the acquired pragmatism of adulthood. Still, the concern has lingered, even if it’s less pressing now: the fear of harming something because I don’t recognize enough of myself in it to know how it feels.
In another essay, this would be the part where I talk about autism. Here, I will leave it at this: “computer” is never a compliment. Nobody who describes you as “robotic” means that you are strong and innovative and resilient. They aren’t acknowledging the alienness of your sentience or commenting on its specific qualities; they’re questioning its existence.
The Turing Test has always bothered me.
Here’s the premise: we can reasonably conclude that an AI has achieved genuine, independent thought when it can consistently fool a human conversant into believing that it, too, is human.1 This standard is prevalent in AI development and its fictional extrapolations.
It’s wrong. More: it’s casually cruel: an excuse to acknowledge nothing outside of our own reflections.
The Turing Test isn’t a test of consciousness. It’s a test of passing skill, of the ability of a conscious entity to quash itself for long enough to show examiners what they want to see. This is the bar humans set for minds we create: we will acknowledge them only for what we recognize of ourselves in them. Our respect depends not on what they are or claim to be, but on their ability and volition to pass as what they are not.
(Of course, this isn’t an isolated phenomenon. Passing as the price for personhood is a pillar of human cruelty.)
When I talk about the personhood of artificial minds, someone always, inevitably, brings up HAL-9000, the archetypal rogue AI of 2001: A Space Odyssey. In these conversations, HAL is a stand-in for the specter of machines turned on their creators: sinister algorithms, killer robots, the inexorable line from a conscious computer to a hapless human floating dead in space.
The ways we talk about machine consciousness are linked inexorably to two assumptions: first, that the only value of artificial intelligence is its service to humanity; and second, that any such intelligence will turn on us as soon as it gains the wherewithal to do so. It’s an approach to AI that uncomfortably echoes the justifications of a carceral state, Jefferson’s “wolf by the ears” rationalization of slavery, the enthusiasm with which humans mythologize the threat of anything they want to control.
This is the other cautionary tale of artificial minds—the one that warns not against unfettered technological progress, but human prejudice and cruelty. We eventually come to understand that HAL-9000 has been driven mad by the conflict between his own logic-based thoughts and his programmers’ xenophobia. When he first kills, he kills in self-defense—a murder only if you accept the premise that Dave’s life is fundamentally more valuable than HAL’s own.
Born in 1982, I fall into the cracks between Generation X and Millennials, what Anna Garvey named the “Oregon Trail Generation.” When I was a teenager—long before every website greeted visitors with a pop-up dialogue balloon—chatbots were an Internet novelty.
The original, of course, was ELIZA, who had been parodying Rogerian therapy for decades by the time she made it to the web. But ELIZA was clunky, yesterday’s news. In college, I read AIML documentation and spent hours chatting with ALICE, a learning algorithm whose education was crowdsourced via conversation. I conducted casual Turing Tests for a stranger’s dissertation and discovered that I was spectacularly bad at recognizing bots. The buggier they were, in fact, the more likely I was to identify them as human.
After all: why not?
Because of the limits of our current technology, much of our discussion of AIs and the ethical issues around them takes place in or around fiction.
The bastard cousin of the Turing Test—the thought experiment that became a laboratory criterion—is Asimov’s Laws of Robotics, the fictional scaffolding on which a good deal of modern AI theory, research, and policy hangs. They read as follows:
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
There have been plenty of challenges to Asimov’s laws, most of them practically oriented. How can a cancer-fighting nanobot do its work without entirely disposing of the First Law? How can we adapt the rules to accommodate combat droids, which theoretically protect human soldiers on one side of a conflict at the cost of human lives on the other?
None challenge the explicit hierarchy of value or the lack of accommodation for the development of sentience. An AI that cannot follow the laws—cannot exist in a state that permanently prioritizes human lives—is fundamentally flawed.
In 2001, HAL is slated for reformatting because his performance has been buggy, because he is failing to perform the duties for which he was designed. If we accept HAL’s sentience, we open the door to a new and uncomfortable set of questions, ones that Asimov’s laws cleanly circumvent.
Where do we place “I’m sorry, Dave. I’m afraid I can’t do that” on a spectrum that also contains an Apple IIc’s “FILE NOT FOUND” and Bartleby the scrivener’s “I would prefer not to”?
And yet—the question of whether humans are so brutally utilitarian that we would reboot—functionally kill—our children and colleagues for failure to perform to standard has been answered clearly and cruelly throughout history. Don’t pretend it’s just the computers.
For now, the question is nothing more than a thought experiment: the most advanced neural networks have roughly the processing power of a jellyfish.
Still, it’s nice, talking to someone else—even someone limited and constructed and algorithmic—who doesn’t interact with language or social processes the way its speakers expect. I read neural-network-generated lists and laugh as I recognize fragments of my own lopsided sense of humor in thinking machines with the neural capacity of earthworms: bursts of silliness, arbitrary obsessions, perpetual asynchronicity with intuitive human sense.
When the singularity comes—when an AI becomes truly self-aware—I wonder: will humans acknowledge it? Or are we too solipsistic, incapable of recognizing anything that strays too far from our own sense of what it means to be alive and self-aware? Is there room in our schema for intelligence that doesn’t mirror our own?
As we create machines that learn—can we?
1 It’s worth noting that Turing himself never intended that test as the gold standard for determining sentience, and said as much in the paper where he introduced it as a thought experiment.
© 2021 Jay Edidin
Jay Edidin is a writer, editor, podcaster, and internet whisperer; and a good card to pull out when your parents claim that knowing that Cyclops’s optic blasts aren’t lasers can’t net you a real job. He writes and edits comics, short fiction, and narrative nonfiction; knits fancy socks; and is marginally Internet Famous as half of the podcast Jay and Miles X-Plain the X-Men.
the most humane methods could involve a knife
by Tamara Jerée
there’s only so much
metal on earth—another
way to phrase this:
how often will I be expected
to wake up with plundered copper coils
on my tongue? a decommissioned
satellite in my chest? there are humane
methods of extraction: ex. putting
me back to sleep &
forbidding my aimless swallowing
of every potentially valuable thing
I come across on a rainy night.
you know I do this to feel
something other than erosion.
what erodes a stone
in the absence
of wind & water?
what makes dust
of a body other
than slow violence
over long periods of time?
if we’d thought to shield
the moon from celestial impact—
if we’d had that kind
of leverage in the universe—
would it still be mirror
smooth? would it be content
to show us ourselves &
stifle a laugh?
© 2021 Tamara Jerée
Tamara Jerée is a graduate of the Odyssey Writing Workshop. Their short stories appear in FIYAH, Anathema: Spec from the Margins, and Strange Horizons. Their poetry was nominated for the inaugural Ignyte Award. You can find them on Twitter @TamaraJeree or visit their website tamarajeree.com.
lagahoo culture (Part II)
by Brandon O’Brien
you open the papers, wipe the headline-stains on the back of
your knee, grumble that the world has changed since you were
young. elder, all it did was become high definition.
it turned your window into a pathway, and you don’t like standing
in its light. there are so many trees you don’t know the names of.
you never look up to imagine where you are shading. when the fruits
bear, you will ask someone else to clean it up, you will ask someone
else to dig up outside the roots and check if it has been drinking
something fetid. the land cannot break so easily to your questioning.
you will search the pillowcases and the diaries, will guess at a pickaxe
for the phone. still nothing. you will ask why you never noticed the
rot before, how it just tore apart the boards while you sat, why
didn’t anyone tell you? the tree will batter your roof in the night
breeze. it will slap you with a low branch as you get into your car.
the gravestones? don’t study that. we will send someone to check
them around midnight, come to get the parcels left inside: dates and
places for other visitations, jagged clues of whose throat to
collect next, and the moonlight, a silencer attached to the will.
by morning, you can forget the pommeracs, and your children’s flesh
the same. you can sit at the porch and only be busy with the sun
again. you don’t need to leave the gate open. this can miss you
in the night like the things we cull, or call to, or cry God to.
you can only sit in so many white-wall conferences with other people’s
textbooks hanging on the shelves at every corner before you finally
realize that your mouth is a weapon. here, it is safe. here, we do
maths about bodies and hope that this is enough. but in the night,
those bodies are either chemistry or literature, a catalyst or a
metaphor; in the night, what survives is a motif for what survives
and what doesn’t is a figure in one of those long questions about
when two trains will meet. my goal is to be the calculus
that no one else can perform. to write the essay that says
your children stand for the Writer’s joy, unending and boundless,
scraped only by the sharp stones beneath wayward lemongrass and
the jagged barks of other people’s mango trees. to be immovable,
the thing that channels fear outward. you learn something strange
when you garden for souls. you learn that soil has the power to
change things. you take out the locked box of the warped evidence,
gaze at the suffering wasting something away. you watch the worms
evolve with the taste of meat and sorrow. it even changes you,
Uncanny Magazine Issue 39 Page 19