
Peter Watts Is an Angry Sentient Tumor: Revenge Fantasies and Essays


by Peter Watts


  2 Not that Mother Teresa was either convenient or an outlier, if you dig into her biography a little. The woman was a monster.

  3 Koenigs et al, 2007. Damage to the prefrontal cortex increases utilitarian moral judgements. Nature. doi:10.1038/nature05631

  4 http://www.shaktitechnology.com/

  5 http://psy.ucsd.edu/chip/ramabio.html

  Dumb Adult

  Blog Mar 15 2016

  We didn’t have “Young Adult” when I was your age, much less this newfangled “New Adult” thing they coddle you with. We had to jump right from Peter the Sea Trout and Freddy and the Ignormus straight into Stand on Zanzibar and Solaris, no water wings or training wheels or anything.

  Amazingly, I managed to read anyway. I discovered Asimov and Bradbury and Bester at eleven, read Zanzibar at twelve, Solaris at thirteen. I may have been smarter than most of my age class (I hope I was—if not, I sure got picked on a lot for no good reason), but I was by no means unique; I only discovered The Sheep Look Up when a classmate recommended it to me in the eleventh grade. And judging by the wear and tear on the paperbacks in the school library, everyone was into Asimov and Bradbury back then. Delany too, judging by the way the covers kept falling off The Einstein Intersection. Back in those days we didn’t need no steenking Young Adult.

  Now get off my lawn.

  I’ll admit my attitude could be a bit more nuanced. After all, my wife has recently been marketed as a YA author, and her writing is gorgeous (although I would argue it’s also not YA). Friends and peers swim in young-adult waters. Well-intentioned advisers, ever mindful of the nichiness of my own market share, have suggested that I try writing YA because that’s where the money is, because that’s the one part of the fiction market that didn’t implode with the rest of the economy a few years back.

  But I can’t help myself. It’s not that I don’t think we should encourage young adults to read (in fact, if we can’t get them to read more than the last generation, we’re pretty much fucked). It’s that I’m starting to think YA doesn’t do that.

  I’m starting to think it may do the opposite.

  Hanging out at last fall’s SFContario, I sat in on a panel on the subject. It was populated by a bunch of very smart authors who most assuredly do not suck, who know far more about YA than I do, and whom I hope will not take offense when I shit all over their chosen pseudogenre—because even this panel of experts had a hard time coming up with a working definition of what a Young Adult novel even was.

  The rules keep changing, you see. It wasn’t so long ago that you couldn’t say “fuck” in a YA novel; these days you can. Back around the turn of the century, YA novels were 100% sex-free, beyond the chaste fifties-era hand-holding and nookie that never seemed to involve the unzipping of anyone’s fly; today, YA can encompass not just sex, but pregnancy and venereal disease and rape. Stories that once took place in some parallel, intercourse-free universe now juggle gay sex and gender fluidity as if they were just another iteration of Archie and Betty down at the malt shop (which is, don’t get me wrong, an awesome and overdue thing; but it doesn’t give you much of a leg up when you’re trying to define “Young Adult” in more satisfying terms than “Books that can be found in the YA section at Indigo”).

  Every now and then one of the panelists would cite an actual rule that seemed to hold up over time, but which was arcane unto inanity. In one case, apparently, a story with an adolescent protagonist—a story that met pretty much any YA convention you might want to name—was excluded from the club simply because it was told as an extended flashback, from the POV of the protagonist as a grown adult looking back. Apparently it’s not enough that a story revolve around adolescents; the perspective, the mindset of the novel as artefact must also be rooted in adolescence. If adults are even present in the tale, they must remain facades; we can never see the world through their eyes.

  Remember those old Peanuts TV specials where the grownups were never seen, and whose only bits of dialog consisted entirely of muted trombones going mwa-mwa-mwa? Young Adult, apparently.

  Finally the panel came up with a checklist they could all agree upon. To qualify as YA, a story would have to incorporate the following elements:

  • Youthful protagonist(s)

  • Youthful mindset

  • Corrupt/dystopian society (this criterion may have been intended to apply to modern 21st-century YA rather than the older stuff, although I suppose a cadre of Evil Cheerleaders Who Run The School might qualify)

  • Inconvenient/ineffectual/absent parents: more a logistic constraint than a philosophical one. Your protagonists have to be free to be proactive, which is hard to pull off with parents always looking over their shoulders and telling them it’s time to come in now.

  • Uplifting, or at least hopeful ending: your protags may only be a bunch of meddlesome kids, but the Evil Empire can’t defeat them.

  Accepting these criteria as authoritative—they were, after all, hashed out by a panel of authorities—it came to me in a blinding flash. The archetypal YA novel just had to be—wait for it—

  A Clockwork Orange.

  Think about it: a story told from the exclusive first-person perspective of an adolescent, check. Corrupt dystopian society, check. Irrelevant parents, check. And in the end, Alex wins: the government sets him free once again, to rape and pillage to his heart’s content. Admittedly the evil government isn’t outright defeated at the end of the novel; it simply has to let Alex walk, let him get back to his life (a more recent YA novel with the same payoff is Cory Doctorow’s Little Brother). Still: it failed to defeat the meddlesome kid.

  So according to a panel of YA authors—or at least, according to the criteria they laid out—one of the most violent, subversive, and inaccessible novels of the Twentieth Century is a work of YA fiction. Which pretty much brings us back to eleven-year-old me and John Brunner. If A Clockwork Orange is Young Adult, aren’t that category’s boundaries so wide as to be pretty much meaningless?

  But there’s one rule nobody mentioned, a rule I suspect may be more relevant than all the others combined. A Clockwork Orange is not an easy read by any stretch. Not only are the words big and difficult, half of them are in goddamn Russian. The whole book is written in a polyglot dialect that doesn’t even exist in the real world. And I suspect that toughness, that inaccessibility, would cause most to exclude it from YAhood.

  In order to be YA, the writing has to be simple. It may have once been a good thing to throw the occasional unfamiliar word at an adolescent; hell, it might force them to look the damn thing up, increase their vocabulary a bit. No longer. I haven’t read a whole lot of YA—Gaiman, Doctorow, Miéville are three that come most readily to mind—but I’ve noticed a common thread in their YA works that extends beyond merely dialing back the sex and profanity. The prose is less challenging than the stuff you find in adult works by the same authors.

  Well, duh, you might think: of course it’s simpler. It’s written for a younger audience. But increasingly, that isn’t the case anymore, at least not since they started printing Harry Potter with understated “adult” covers, so all those not-so-young-adult fans could get their Hogwarts fix on the subway without being embarrassed by lurid and childish artwork. The Hunger Games was first recommended to me by a woman who was (back then) on the cusp of thirty, and no dummy.

  All these actual adults, reading progressively simpler writing. All us authors, chasing them down the stairs. Hell, Neil Gaiman took a classic that nine-year-old Peter Watts devoured without any trouble at all—Rudyard Kipling’s The Jungle Book—and dumbed it down to an (admittedly award-winning) story about ghosts and vampires, aimed at an audience who might find a story about sapient wolves and tigers too challenging. It may only be a matter of time before Nineteen Eighty-Four is reissued using only words from the Eleventh Edition of the Newspeak Dictionary. We may already be past the point when anyone looking to read Twenty Thousand Leagues Under the Sea looks any further than the Classics Illustrated comic.

  I know how this sounds. I led with that whole crotchety get-off-my-lawn shtick because the Old are famously compelled to rail against the failings of the Young, because rants about the Good Old Days are as tiresome when they’re about literacy as they are when they’re about music or haircuts. It was a self-aware (and probably ineffective) attempt at critic-proofing.

  So let me emphasize: I’ve got nothing against clear, concise prose (despite the florid nature of my own, sometimes). Hemingway wrote simple prose. Orwell extolled its virtues. If that was all that made up Young Adult, even I would be a YA writer (at least, I don’t think your average sixteen-year-old would have any trouble getting through Starfish).

  But there’s a difference between novels that happen to be accessible to teens, and novels that put teens in their heat-sensitive, wallet-lightening crosshairs. I know of one author who had to go back and tear up an adult novel, already written, by the roots: rewrite and duct-tape it onto YA scaffolding because that’s the only way it would sell. I know a very smart, highly-respected editor who once raved about the incredible, well-thought-out plotting of the Harry Potter books, apparently blind to the fact that Rowling—her claims to the contrary notwithstanding—seemed to be just making shit up as she went along1.

  A long time ago, a childhood friend gave me the collected tales of Edgar Allan Poe for my tenth birthday. I loved that stuff. It taught me things—made me teach myself things, in the same way a Jethro Tull song a few decades later forced me to look up the meaning of “overpressure wave.” I have to wonder if YA does that, if it improves one’s reading skills or merely panders to them. I doubt that your vocabulary is any bigger when you finish Harry Potter and the Well-Deserved Bitch-Slap than when you started. You may have been entertained, but you were not upgraded.

  Of course, if entertainment’s all you’re after, no biggie. The problem, though, is that it acts like a ratchet. If we only allow ourselves to write down, never up—and if the age of the YA market edges up, never down—it’s hard to see how the overall sophistication of our writing can do anything but decline monotonically over time.

  Who among you will tell me this is a good thing?

  1 I mean, think about it: we have a protagonist whose central defining feature is the murder of his parents when he was an infant. And when he discovers that time travel is so trivially accessible that his classmate uses it for no better purpose than to double up her course load, it never once occurs to him to wonder: Hey— maybe I can go back and save my parents! This is careful plotting?

  In Praise of War Crimes

  Nowa Fantastyka Jan 2014

  The Terminator has made it into the pages of Science. The December 20 issue of that prestigious journal contains an article entitled “Scientists Campaign Against Killer Robots,” which summarizes the growing grass-roots movement against autonomous killing machines on the battlefield. Lest you think we’re talking about the burgeoning fleet of drones deployed with such enthusiasm by the US—you know, those weapons the Obama administration praises as so much “more precise” than conventional airstrikes, at least during those press conferences when they’re not expressing regrets over another Yemeni wedding party accidentally massacred in the latest Predator attack—let me bring you up to speed. Predators are puppets, not robots. Their pilots may be sipping coffee in some air-conditioned office in Arizona, running their vehicles by remote control, but at least the decision to turn kids into collateral is made by people.

  Of course, the problem (okay: one of the problems) with running a puppet from 8,000 km away is that its strings can be jammed, or hacked. (You may have heard about those Iraqi insurgents who tapped into Predator video feeds using $26 worth of off-the-shelf parts from Radio Shack.) Wouldn’t it be nice if we didn’t need that umbilicus. Wouldn’t it be nice if our robots could think for themselves.

  I’ve got the usual SF-writer’s hard-on for these sorts of issues (I even wrote a story on the subject—“Malak”—a couple of years back). I keep an eye open for these sorts of developments. We’re told that we still have a long way to go before we have truly autonomous killing machines: robots that can tell friend from foe, assess relative threat potentials, decide to kill this target and leave that one alone. They’re coming, though. True, the Pentagon claimed in 2013 that it had “no completely autonomous systems in the pipeline or being considered”—but when was the last time anyone believed anything the Pentagon said, especially in light of a 2012 US Department of Defense Directive spelling out criteria for “development and use of autonomous and semi-autonomous functions in weapon systems, including manned and unmanned platforms”1?

  Root through that directive and you’ll find the usual mealy-mouthed assurances about keeping Humans In Ultimate Control. It’s considered paramount, for example, that “in the event of degraded or lost communications, the system does not autonomously select and engage individual targets or specific target groups that have not been previously selected by an authorized human operator”. But you don’t have to be Isaac Asimov to see how easy it would be to subvert that particular Rule of Robotics. Suppose a human operator does approve a target, just before contact with a drone is lost. The drone is now authorized to hunt that particular target on its own. How does it know that the target who just emerged from behind that rock is the same one who ducked behind it ten seconds earlier? Does it key on facial features? What happens if the target is wearing clothing that covers the face? Does it key on clothing? What happens if the target swaps hats with a friend?

  According to Science, the fight against developing these machines—waged by bodies with names like the Convention on Certain Conventional Weapons and the International Committee for Robot Arms Control—centers on the argument that robots lack the ability to discriminate reliably between combatants and civilians in the heat of battle. I find this argument both troubling and unconvincing. The most obvious objection involves Moore’s Law: even if robots can’t do something today, there’s a damn good chance they can do it tomorrow. Another problem—one that can bite you in the ass right now, while you’re waiting for tomorrow to happen—is that even people can’t reliably distinguish between friend and foe all the time. North American cops, at least, routinely get a pass when they gun down some innocent civilian under the mistaken impression that their victim was going for a gun instead of a cell phone.

  Does anyone truly believe that we’re going to hold machines to a higher standard than we hold ourselves? Or as Lin et al put it back in 2008 in “Autonomous Military Robotics: Risk, Ethics, and Design”:

  “An ethically-infallible machine ought not to be the goal. Our goal should be to design a machine that performs better than humans do on the battlefield, particularly with respect to reducing unlawful behaviour or war crimes.”

  Ah, war crimes. My final point. Because it’s actually really hard to pin a war crime on a machine. If your garden-variety remote-controlled drone blows up a party of civilians, you can always charge the operator on the other side of the world, or the CO who ordered him to open fire (not that this ever happens, of course). But if a machine decided to massacre all those innocents, who do you blame? Those who authorized its deployment? Those who designed it? Some computer scientist who didn’t realize that her doctoral research on computer vision was going to get co-opted by a supervisor with a fat military contract?

  Or does it stop being a war crime entirely, and turn into something less—objectionable? At what point does collateral damage become nothing more than a tragic industrial accident?

  To me, the real threat is not the fallibility of robots, but the deliberate exploitation of that fallibility by the generals. The military now has an incentive: not to limit the technology, not to improve its ability to discriminate foe from friend, but to deploy these fallible weapons as widely as possible. Back in 2008 a man named Stephen White wrote a paper called “Brave New World: Neurowarfare and the Limits of International Humanitarian Law.” It was about the legal and ethical implications of neuroweaponry, but its warning rings true for any technology that takes life-and-death decisions out of human hands:

  “. . . international humanitarian law would create perverse incentives that would encourage the development of entire classes of weapons that the state could use to evade criminal penalties for even the most serious types of war crimes.”

  Granted, the end result might not be so bad. Eventually the technology could improve to the point where robotic decisions aren’t just equal to, but are better than those arising from the corruptible meat of human brains. Under those conditions it would be a war crime to not hand the kill switch over to machines. Under their perfected algorithms, combat losses would dwindle to a mere fraction of the toll inflicted under human command. We could ultimately end up in a better place.

  Still. I’m betting we’ll spill rivers of blood in getting there.

  1 https://www.hsdl.org/?abstract&did=726163

  The Last of Us, The Weakest Link.

  Nowa Fantastyka May 2014

  Blog Jan 8 2019

  I’ve been writing video games for almost as long as I’ve been publishing novels. You can be forgiven for not knowing that; nothing written in my gaming capacity has ever made it to production. The usual course of events goes something like this: I work with a talented development team to serve up a kick-ass proposal. Over the following few months, the rest of the team disappears, one by one, under mysterious circumstances. Finally I get an email from some new Executive Producer I’ve never heard of, who praises my “terrific” work and tells me he’ll be in touch if they ever need my services again.

  They never do. Nothing I’ve worked on has ever made it to market unmutilated; characters flattened to cardboard, innovative aliens reduced to evil yoghurt, all subtlety and nuance and interpersonal conflict flensed away before, ultimately, being jettisoned altogether.

 
