The Most Human Human


by Brian Christian


  Universal Machines

  There is some support for the existentialist position in the nature of the human brain. As neurologist V. S. Ramachandran explains (emphasis mine), “Most organisms evolve to become more and more specialized as they take up new environmental niches, be it a longer neck for the giraffe or sonar for the bat. Humans, on the other hand, have evolved an organ, a brain, that gives us the capacity to evade specialization.”

  What’s truly intriguing is that computers work the same way. What sets computers apart from all of the other tools previously invented is something called their universality. The computer was initially built and understood as an “arithmetic organ,” yet it turns out—as nearly everything can be translated into numbers of some sort—to be able to process just about everything: images, sound, text, you name it. Furthermore, as Alan Turing established in a shocking 1936 paper, certain computing machines exist called “universal machines,” which can, by adjusting their configuration, be made to do absolutely anything that any other computing machines can do. All modern computers are such universal machines.
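Turing's point can be sketched in a few lines of Python. The "machine" below is a single fixed interpreter; what it does is determined entirely by the program it is handed. (The toy instruction set here is invented purely for illustration and has nothing to do with Turing's actual 1936 formalism.)

```python
# One fixed "machine" (an interpreter) that does completely different
# jobs depending on the program it is given -- a toy illustration of
# universality, not Turing's formal construction.

def run(program, data):
    """Interpret a list of (operation, argument) instructions against the input."""
    for op, arg in program:
        if op == "add":        # arithmetic: treat the data as a number
            data = data + arg
        elif op == "repeat":   # text: repeat the input
            data = data * arg
        elif op == "reverse":  # text: reverse the input
            data = data[::-1]
    return data

# The same machine, reconfigured by software alone:
arithmetic_program = [("add", 2), ("add", 40)]
text_program = [("reverse", None), ("repeat", 2)]

print(run(arithmetic_program, 0))       # -> 42
print(run(text_program, "olleh"))       # -> "hellohello"
```

The hardware never changes between the two calls; only the instructions do. That, in miniature, is what makes the machine "universal."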

  As a result of Turing’s paper, computers became, in effect, the first tools to precede their tasks: this is their fundamental difference from staplers and hole punchers and pocket watches. You build the computer first, and then figure out what you want it to do. Apple’s “There’s an app for that!” marketing rhetoric proves the point, trying to refresh, with the iPhone, our sense of wonder at what we take completely for granted about desktops and laptops. It’s fascinating, actually, what they’re doing: reinscribing our sense of wonder at the universality of computers. If the iPhone is amazing, it is only because it is a tiny computer, and computers are amazing. You don’t decide what you need and then go buy a machine to do it; you just buy the machine and figure out later, on the fly, what you need it to do. I want to play chess: I download a chess program and voilà. I want to write: I get a word-processing program. I want to do my taxes: I get a spreadsheet. The computer wasn’t built to do any of that, per se. It was just built.

  The raison-d’être-less-ness of computers, in this sense, seems to chip away at the existentialist idea of humans’ unique purchase on the idea of existence before essence. In other words, another rewriting of The Sentence may be in order: our machines, it would seem, are just as “universal” as we are.

  Pretensions to Originate

  Although computer science tends to be thought of as a traditionally male-dominated field, the world’s first programmer was a woman. The 1843 writings of Ada Lovelace (1815–52; incidentally, the daughter of the poet Lord Byron) on the computer, or “Analytical Engine,” as it was then called, are the wellspring of almost all modern arguments about computers and creativity.

  Turing devotes an entire section of his Turing test proposal to what he calls “Lady Lovelace’s Objection.” Specifically, the following passage from her 1843 writings: “The Analytical Engine has no pretensions whatever to originate anything. It can do whatever we know how to order it to perform.”

  Such an argument seems in many ways to summarize what most people think about computers, and a number of things could be said in response, but Turing goes straight for the jugular. “A variant of Lady Lovelace’s objection states that a machine can ‘never do anything really new.’ This may be parried for a moment with the saw ‘There is nothing new under the sun.’ Who can be certain that ‘original work’ that he has done was not simply the growth of the seed planted in him by teaching, or the effect of following well-known general principles.”

  Instead of ceding the Lovelace objection as a computational limitation, or arguing that computers can, in fact, “be original,” he takes the most severe and shocking tack possible: arguing that originality, in the sense in which we pride ourselves on having it, doesn’t exist.

  Radical Choice

  The notion of originality and, relatedly, authenticity is central to the question of what it means to “just be yourself”—it’s what Turing is getting at when he questions his (and our) own “original work,” and it was a major concern for the existentialists, too.

  Taking their cue from Aristotle, the existentialists tended to consider the good life as a kind of alignment of one’s actual life and one’s potential. But they weren’t swayed by Aristotle’s arguments that, to put it simply, hammers were made to hammer and humans were made to contemplate. (Though just how opposed they were to this argument is hard to gauge, given that they did, let’s not forget, become professional philosophers themselves.) Nor were the existentialists liable to take a kind of Christian view that God had a purpose in mind for us that we would or could somehow “discover.” So if there’s nothing at all that a human being is, then how do we fulfill an essence, purpose, or destiny that isn’t there?

  Their answer, more or less, is that we must choose a standard to hold ourselves to. Perhaps we’re influenced to pick some particular standard; perhaps we pick it at random. Neither seems particularly “authentic,” but we swerve around paradox here because it’s not clear that this matters. It’s the commitment to the choice that makes behavior authentic.

  As our notion of the seat of “humanity” retreats, so does our notion of the seat of artistry. Perhaps it pulls back, then, to this notion of choice—perhaps the art is not, we might speculate, in the product itself, nor necessarily in the process, but in the impulse.

  Defining Games

  The word “game” is a notoriously hard one to define.6

  But allow me to venture a definition: a game is a situation in which an explicit and agreed-upon definition of success exists.

  For a private company, there may be any number of goals, any number of definitions of success. For a publicly traded company there is only one. (At least, for its shareholders there is only one: namely, returns.) Therefore not all business is a game—although much of big business is.

  In real life, and this cuts straight back to the existence/essence notion of Sartre’s, there is no notion of success. If success is having the most Facebook friends, then your social life becomes a game. If success is gaining admittance to heaven upon death, then your moral life becomes a game. Life is no game. There is no checkered flag, no goal line. Spanish poet Antonio Machado puts it well: “Searcher, there is no road. We make the road by walking.”

  Allegedly, game publisher Brøderbund was uncomfortable with the fact that SimCity was a game with no “objectives,” no clear way to “win” or “lose.” Says creator Will Wright, “Most games are made on a movie model with cinematics and the requirement of a climactic blockbuster ending. My games are more like a hobby—a train set or a doll house. Basically they’re a mellow and creative playground experience.” But the industry wouldn’t have it. Brøderbund “just kept asking me how I was going to make it into a game.” To me, Brøderbund’s unease with SimCity is an existential unease, maybe the existential unease.

  Games have a goal; life doesn’t. Life has no objective. This is what the existentialists call the “anxiety of freedom.” Thus we have an alternate definition of what a game is—anything that provides temporary relief from existential anxiety. This is why games are such a popular form of procrastination. And this is why, on reaching one’s goals, the risk is that the reentry of existential anxiety hits you even before the thrill of victory—that you’re thrown immediately back on the uncomfortable question of what to do with your life.7

  Master Disciplines

  The computer science department at my college employed undergraduates as TAs, something I never encountered, at least not on that scale, in any other department. You had to apply, of course, but the only strict requirement was that you’d taken the class. You could TA it the very next semester.

  If we were talking about x, all the TA really had to know about was x. You could try to exceed those bounds out of curiosity, but doing so was rarely important or relevant to the matter at hand.

  My philosophy seminars, though, were a completely different story. When you are trying to evaluate whether argument y is a good one or not, any line of attack is in play, and so is any line of defense. You’d almost never hear a seminar leader say something like “Well, that’s a good point, but that’s outside the scope of today’s discussion.”

  “There is no shallow end,” a philosophy professor once told me. Because any objection whatsoever, from any angle, can fell a theory, you can’t carve out a space of philosophical territory, master it in isolation, and move on to the next.

  My first day of class in the philosophy major, the professor opens the semester by saying that anyone who says that “philosophy is useless” is already philosophizing, building up an intellectual argument to make a point that is important to them, and therefore defeating their own statement in the very breath of uttering it. Poet Richard Kenney calls philosophy one of the “master disciplines” for this reason. You question the assumptions of physics and you end up in metaphysics—a branch of philosophy. You question the assumptions of history and you end up in epistemology—a branch of philosophy. You try to take any other discipline out at the foundations and you end up in philosophy; you try to take philosophy out at the foundations and you only end up in meta-philosophy: even deeper in than when you started.

  For this reason the philosophy TAs tended to be Ph.D. students, and even then, you’d frequently angle to get into the discussion section led by the professor him- or herself. Unlike the computer science professors and TAs, they had their whole training, their whole life experience, and the whole of the discipline in play at all times.

  The other master discipline—concerning itself with linguistic Beauty rather than linguistic Truth—is poetry. As with philosophy, every attempt at escape lands you deeper than where you started.

  “When I wrote ‘Howl’ I wasn’t intending to publish it. I didn’t write it as a poem,” Allen Ginsberg says, “just as a piece of writing for my own pleasure. I wanted to write something where I could say what I really was thinking, rather than poetry” (emphasis mine).

  With poetry, as with philosophy, there is no exterior, only certain well-behaved interiors: in philosophy we call them sciences (physics originally began as the largely speculative field of “natural philosophy”), and in poetry we call them genres. If a play wanders too far from the traditions and conventions of playwriting, the script starts to be regarded as poetry. If a short story starts to wander out of safe short-story territory, it becomes a prose poem. But poetry that wanders far from the conventions of poetry is often simply—e.g., “Howl”—better poetry.

  Human as Anti-Expert System

  All this leads me to the thing I keep noticing about the relationship between human-made and human-mimicking bots and humans themselves.

  The first few years that the Loebner Prize competition was run, the organizers decided they wanted to implement some kind of “handicap,” in order to give the computers more of a fighting chance, and to make the contest more interesting. What they chose to do, as we discussed, was to place topic restrictions on the conversations: at one terminal, you could only talk about ice hockey, at another terminal you could only talk about the interpretation of dreams, and so on.

  The idea was that the programmers would be able to bite off some kind of subset of conversation and attempt to simulate just that subdomain. This makes sense, in that most artificial intelligence research has been the construction of so-called “expert systems,” which hone just one particular task or skill (chess being a clear example).

  Part of the problem with this, though, is that conversation is just so leaky: If we’re talking hockey, can I compare hockey to other sports? Or is that going outside the domain? Can I argue over whether top athletes are overpaid? Can I gossip about a hockey player who’s dating a movie actress? Can I remark on the Cold War context of the famous 1980 U.S.A.–U.S.S.R. Olympic gold medal hockey match? Or is that talking about “politics”? Conversational boundaries are just too porous and ill defined. This caused huge headaches for the prize committee.

  This question of domain, of what’s in and what’s out, turns out to be central to the whole notion of the man-machine struggle in the Turing test—it may well embody the entire principle of the test.

  I was talking to Dave Ackley about this kind of domain restriction. “If you make the discourse small enough, then the difference between faking it and making it starts to disappear,” he says. “And that’s what we’ve been seeing. So we’ve got, you know, voice recognition on corporate phone menus: you exploit the fact that you’re in a limited context and people either say digits or ‘operator.’ Or ‘fuck you,’ ” and we both chuckle. Somehow that “fuck you” touched off a kind of insight; it seems to perfectly embody the human desire to bust out of any cage, the human frustration of living life multiple-choice-style and not write-in-style.8

  If you ask the Army’s SGT STAR chatbot something outside the bounds of what he knows how to respond to, he’ll say something like “I have been trained to ask for help when I’m not sure about an answer. If you would like a recruiter to answer your question, please send the Army an e-mail by clicking ‘Send Email’ and a live recruiter will get back to you shortly.” And most any telephone menu—infuriatingly, not all—will give you that “none of the above” option. And that option takes you to a real person.

  Sadly, the person you’re talking to is frequently a kind of “expert system” in their own right, with extremely limited and delineated abilities. (“Customer service is often the epitome of empowerment failure,” writes Timothy Ferriss.) Often, in fact, the human you’re talking to is speaking from a script prepared by the company and not, in this sense, much more than a kind of human chatbot—this is part of what can make talking to them feel eerie. If what you want to communicate or do goes outside of this “menu” of things the employee is trained/allowed to do, then you must “exit the system” again: “Can I talk with a manager?”

  In some sense, intimacy—and personhood—are functions of this kind of “getting out of the system,” “domain generality,” the move from “expertise” to “anti-expertise,” from strictly delimited roles and parameters to the unboundedness that human language makes possible. People frequently get to know their colleagues by way of interactions that are at best irrelevant to, and at worst temporarily impede the progress toward, the work-related goal that has brought them together: e.g., “Oh, is that a picture of your kids?” This is true even of the simple “How are you?” that opens virtually every phone call, no matter how agenda-driven. How two people’s lives are going is outside the agenda—but this initial “non sequitur,” however perfunctory, serves a profound purpose. These not-to-the-purpose comments remind us that we’re not just expert systems, not just goal-driven and role-defined. That we are, unlike most machines, broader than the context we’re operating in, capable of all kinds of things. Griping about the weather with the barista, instead of simply stating your order and waiting patiently, reinforces the fact that he or she is not simply a flesh-and-blood extension of the espresso machine, but in fact a whole person, with moods and attitudes and opinions about most everything under the sun, and a life outside of work.

  Domain General

  One of the leading academics interested in the Turing test (and, as it turns out, an outspoken critic of the Loebner Prize) is Harvard’s Stuart Shieber, who actually served in the very first Loebner Prize contest as one of the “referees.” It’s a role that didn’t exist as I prepared for the 2009 test: the referees were there to keep the conversations “in bounds”—but what did that mean, exactly? The organizers and referees at the first Loebner Prize competition held an emergency meeting the night before the competition9 to address it.

  I called Shieber. “The night before the first competition there was a meeting with the referees,” he says. “How are we going to make sure that the confederates stay on topic and the judges don’t ask things outside of the—They’re not supposed to ask anything tricky—And what is a trick question? And it boiled down to, is it the kind of thing that would come up naturally in a conversation with a stranger on an airplane? You’re not going to ask someone out of the blue about sonnets or chess or something.” He pauses a split second. “If I were [in charge], that’s the first thing I’d get rid of.”

  The Loebner Prize both has and hasn’t followed Shieber’s advice. After 1995, amid controversy over what a conversational domain was, let alone how to enforce it, the Loebner Prize committee decided to dissolve the referee position and move to an unrestricted test. Yet the “strangers on a plane” paradigm persists—enforced not so much by statute as by custom: it just comes off as kind of “square” to grill your interlocutors with weird, all-over-the-place questions. It just isn’t done. The results, I think, suffer for it.

  The advantage of specific prescribed topics was, at least, that conversations tended to hit the ground running. Looking back at those years’ transcripts, you see some hilariously specific opening volleys, like:

  JUDGE: Hi. My name is Tom. I hear I’m supposed to talk about dreams. I recently had a nightmare, my first in many years. The funny thing was that I had recently put Christmas lights up. Does light mess with the subconscious? Is that why I had a nightmare, or is it something less obvious?

  In comparison, with the topic-less conversations, you often see the judge and the interlocutor groping around for something to talk about—the commute? the weather?—

 
