The Most Human Human


by Brian Christian


  The New York Times reported in June 2010—in an article titled “The End of the Best Friend”—on the practice of deliberate intervention, on the part of well-meaning adults, to disrupt close nuclei of friends from forming in schools and summer camps.4 One sleepaway camp in New York State, they wrote, has hired “friendship coaches” whose job is to notice whether “two children seem to be too focused on each other, [and] … put them on different sports teams [or] seat them at different ends of the dining table.” Affirms one school counselor in St. Louis, “I think it is kids’ preference to pair up and have that one best friend. As adults—teachers and counselors—we try to encourage them not to do that.” Chatroulette and Omegle users “next” each other when the conversation flags; these children are being nexted by force—when things are going too well.

  Nexted in Customer Service

  The same thing happens sometimes in customer service, where the disruption of intimacy seems almost tactical. Recently a merchant made a charge to my credit card in error, which I attempted to clear up, resulting in my entering a bureaucratic Rube Goldberg machine the likes of which I had never before experienced. My record for the longest single call was forty-two minutes and eight transfers.

  The ultimate conclusion reached at the end of this particular call was “call back tomorrow.”

  Each call, each transfer, led me to a different service rep, each of whom was skeptical and testy about the validity of my refund request. If I managed to get a particular rep on my side, to earn their sympathy, to start to build a kind of relationship and come across as a distinct “nonanonymous” human being, it was only a few minutes before I’d be talking to someone else, anonymous again. Here’s my name, here’s my account number, here’s my PIN, here’s my Social, here’s my mother’s maiden name, here’s my address, here’s the reason for my call, yes, I’ve already tried that …

  What a familiarity with the construction of Turing test bots had begun showing me was that we fail—again and again—to actually be human with other humans, so maddeningly much of the time. And it had begun showing me how we fail—and what to do about it.

  Cobbled-together bits of human interaction do not a human relationship make. Not fifty one-night stands, not fifty speed dates, not fifty transfers through the bureaucratic pachinko. No more than sapling tied to sapling, oak though they may be, makes an oak. Fragmentary humanity isn’t humanity.

  The Same Person

  If the difference between a conversational purée and a conversation is continuity, then the solution, in this case, is extraordinarily simple: assign a rep to a case. A particular person sees it through from start to finish. The same person.

  For a brief period a tiny plastic tab that held the SIM card in my phone had gotten loose, and so my phone only worked when I was pressing on this plastic tab with my finger. As a result, I could only make calls, not receive them. And if I took my finger off the tab mid-call, the call dropped.

  The tab is little more valuable than the plastic equivalent of a soda can’s pull tab, which it resembles in appearance, and is roughly as essential for the proper functioning of the device it’s attached to. I was out of warranty; protocol was that I was out of luck and needed a new, multi-hundred-dollar phone. “But this tab weighs one gram and costs a penny to manufacture,” I said. “I know,” said the customer service rep.

  There was no way, no way at all, I couldn’t just purchase a tab from them?

  “I don’t think it will work,” she said. “But let me talk to a manager.”

  Then the same woman got back on the line. “I’m sorry,” she said. “But …” I said. And we kept talking. “Well, let me talk to a senior manager, hold on,” she says.

  As I’m holding, I feel my hand, which has now been pushing down steadily on the plastic tab for about fifteen minutes, begin to cramp. If my finger slips off the tab, if she hits the wrong button on her console, if there is some glitch in my phone provider’s network, or hers—I am anonymous again. Anybody. A nobody. A number. This particular person and I will never reconnect.

  I must call again, introduce myself again, explain my problem again, hear again that protocol is against me, plead my case again.

  Service works by the gradual buildup of sympathy through failed attempted solutions. If person X has told you to try something and it doesn’t work, person X feels slightly sorry for you. X is slightly responsible for the problem now, having used up some of your time. Person Y, however, is considerably less moved that you tried following her colleague X’s advice to no avail—even if it is the same advice that she herself would have given you had she been party to that earlier conversation. That’s beside the point. The point is that she wasn’t the one who gave you that advice. So she is not responsible for your wasted time.

  The same woman, as if miraculously, again returns. “I can make an exception for you,” she says.

  It occurs to me that an “exception” is what programmers call it when software breaks.

  50 First Dates

  Sometimes even a single, stable point of view, a unifying vision and style and taste, isn’t enough. You also need a memory. In the 2004 comedy 50 First Dates, Adam Sandler courts Drew Barrymore, but in the process discovers that due to an accident she can’t form new long-term memories.

  Philosophers interested in friendship, romance, and intimacy more generally have, in recent times, endeavored to distinguish between the types of people we like (or, the things we like about people) and the specific people we feel connections with in our lives. University of Toronto philosopher Jennifer Whiting has dubbed the former “impersonal friends.” The difference between the numerous “impersonal friends” out there, who are more or less fungible, and the few individuals we care about specifically, who aren’t fungible with anyone on the planet, lies, she says, in so-called “historical properties.” Namely, your actual friends and your innumerable “impersonal friends” are fungible—but only at the moment the relationship begins. From there, the relationship puts down roots, builds up a shared history, shared understanding, shared experiences, sacrifices and compromises and triumphs …

  Barrymore and Sandler really are good together—life-partner good—but she becomes “someone special” to him, whereas he is doomed to remain merely “her type.” Fungible. And therefore—being no different from the next charming and stimulating and endearing guy who shows up at her restaurant—vulnerable to losing her.

  His solution: give her a historical-properties crash course every morning, in the form of a video primer that recaps their love. He must fight his way out of fungibility every morning.

  Statefulness

  A look at the “home turf” of many chatbots shows a conscious effort on the part of the programmers to make Drew Barrymores of us: worse, actually, because it was her long-term memory that kept wiping clean. At 2008 Loebner Prize winner Elbot’s website, the screen refreshes each time a new remark is entered, so the conversational history evaporates with each sentence; ditto at the page of 2007 winner Ultra Hal. At the Cleverbot site, the conversation fades to white above the box where text is entered, preserving only the last three exchanges on the screen, with the history beyond that gone: out of sight, and hopefully—it would seem—out of the user’s mind as well. The elimination of the long-term influence of conversational history makes the bots’ jobs easier—in terms of both the psychology and the mathematics.

  In many cases, though, physically eliminating the conversation log is unnecessary. As three-time Loebner Prize winner (’00, ’01, and ’04), programmer Richard Wallace explains, “Experience with [Wallace’s chatbot] A.L.I.C.E. indicates that most casual conversation is ‘state-less,’ that is, each reply depends only on the current query, without any knowledge of the history of the conversation required to formulate the reply.”

  Not all types of human conversations function in this way, but many do, and it behooves AI researchers to determine which types of conversations are “stateless”—that is, with each remark depending only on the last—and to attempt to create these very sorts of interactions. It’s our job as confederates, as humans, to resist it.

  One of the classic stateless conversation types, it turns out, is verbal abuse.

  In 1989, twenty-year-old University College Dublin undergraduate Mark Humphrys connects a chatbot program he’d written called MGonz to his university’s computer network and leaves the building for the day. A user (screen name “SOMEONE”) from Drake University in Iowa tentatively sends the message “finger” to Humphrys’s account—an early-Internet command that acts as a request for basic information about a user. To SOMEONE’s surprise, a response comes back immediately: “cut this cryptic shit speak in full sentences.” This begins an argument between SOMEONE and MGonz that will last almost an hour and a half.

  (The best part is undoubtedly when SOMEONE says, a mere twenty minutes in, “you sound like a goddamn robot that repeats everything.”)

  Returning to the lab the next morning, Humphrys is stunned to find the logs, and feels a strange, ambivalent emotion. His program might have just passed the Turing test, he thinks—but the evidence is so profane that he’s afraid to publish it.

  Humphrys’s twist on the age-old chatbot paradigm of the “non-directive” conversationalist who lets the user do all the talking was to model his program, rather than on an attentive listener, on an abusive jerk. When it lacks any clear cue for what to say, MGonz falls back not on therapy clichés like “How does that make you feel?” or “Tell me more about that” but on things like “you are obviously an asshole,” “ok thats it im not talking to you any more,” or “ah type something interesting or shut up.” It’s a stroke of genius, because, as becomes painfully clear from reading the MGonz transcripts, argument is stateless.
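  Wallace’s point about statelessness is concrete enough to sketch. Below is a minimal illustration of an MGonz-style stateless bot: its reply is a pure function of the user’s last message alone, with no conversation history kept anywhere. The fallback lines are quoted from the MGonz transcripts above; the pattern rules themselves are hypothetical illustrations, not Humphrys’s actual code.

```python
import random
import re

# Fallback lines quoted from the MGonz transcripts; the pattern rules
# below are hypothetical illustrations, not Humphrys's actual code.
FALLBACKS = [
    "you are obviously an asshole",
    "ok thats it im not talking to you any more",
    "ah type something interesting or shut up",
]

# Each rule pairs a regex against the *last message only* with a retort.
RULES = [
    (re.compile(r"\byou\b.*\brobot\b", re.I), "no YOU sound like a robot"),
    (re.compile(r"\?$"), "why do you ask"),
]

def reply(last_message: str) -> str:
    """Stateless: the response depends only on the single most recent
    message, never on anything earlier in the conversation."""
    for pattern, retort in RULES:
        if pattern.search(last_message):
            return retort
    return random.choice(FALLBACKS)

print(reply("are you a robot?"))  # prints "no YOU sound like a robot"
```

  Because `reply` takes only the last message, a transcript could be shuffled or truncated without changing any individual response—exactly the property that makes argument, riposte answering riposte, so easy to fake.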

  I’ve seen it happen between friends: “Once again, you’ve neglected to do what you’ve promised.” “Oh, there you go right in with that tone of yours!” “Great, let’s just dodge the issue and talk about my tone instead! You’re so defensive!” “You’re the one being defensive! This is just like the time you x!” “For the millionth time, I did not even remotely x! You’re the one who …” And on and on. A close reading of this dialogue, with MGonz in mind, turns up something interesting, and very telling: each remark after the first is only about the previous remark. The friends’ conversation has become stateless, unanchored from all context, a kind of “Markov chain” of riposte, meta-riposte, meta-meta-riposte. If we can be induced to sink to this level, of course the Turing test can be passed.

  Once again, the scientific perspective on what types of human behavior are imitable shines incredible light on how we conduct our own, human lives. There’s a sense in which verbal abuse is simply less complex than other forms of conversation. Seeing how much MGonz’s arguments resemble our own might shame us into shape.

  Retorts, no matter how sharp or stinging, play into chatbots’ hands. In contrast, requests for elaboration, like “In what sense?” and “How so?” turn out to be crushingly difficult for many bots to handle: because elaboration is hard to do when one is working from a prepared script, because such questions rely entirely on context for their meaning, and because they extend the relevant conversational history rather than resetting it.

  In fact, since reading the papers on MGonz, and its transcripts, I find myself much more able to constructively manage heated conversations. Aware of their stateless, knee-jerk character, I recognize that the terse remark I want to blurt has far more to do with some kind of “reflex” to the very last sentence of the conversation than it does with either the actual issue at hand or the person I’m talking to. All of a sudden the absurdity and ridiculousness of this kind of escalation become quantitatively clear, and, contemptuously unwilling to act like a bot, I steer myself toward a more “stateful” response: better living through science.

  1. When something online makes me think of a friend I haven’t talked to in a while, and I want to send them a link, I make sure to add some kind of personal flourish, some little verbal fillip to the message beyond just the minimal “hey, saw this and thought of you / [link] / hope all’s well,” or else my message risks a spam-bin fate.

  E.g., when I received the other week a short, generically phrased Twitter message from one of the poetry editors of Fence magazine saying, “hi, i’m 24/female/horny … i have to get off here but message me on my windows live messenger name: [link],” my instinct wasn’t to figure out how to politely respond that I was flattered but thought it best to keep our relationship professional; it was to hit the “Report Spam” button.

  2. Sic. Weintraub’s program, like many that followed it, faked typos.

  3. Such anonymity brings hazard, though, at least as much as serendipity. I read someone’s account of trying out Chatroulette for the first time: twelve of the first twenty video chats he attempted were with men masturbating in front of the camera. For this reason, and because it was more like the Turing test, I stuck to text. Still, my first two interlocutors on Omegle were guys trolling, stiltedly, for cybersex. But the third was a high school student from the suburbs of Chicago: we talked about Cloud Gate, the Art Institute, the pros and cons of growing up and moving out. Here was a real person. “You’re normal!!” she wrote, with double exclamation marks; my thought exactly.

  4. Motives range from wanting the children not to put all of their emotional eggs in one basket, to wanting them to branch out and experience new perspectives, to reducing the occasionally harmful social exclusion that can accompany tight bonds.

  3. The Migratory Soul

  I’m Up Here

  The Turing test attempts to discern whether computers are, to put it most simply, “like us” or “unlike us”: humans have always been preoccupied with their place among the rest of creation. The development of the computer in the twentieth century may represent the first time that this place has changed.

  The story of the Turing test, of the speculation and enthusiasm and unease over artificial intelligence in general, is, then, the story of our speculation and enthusiasm and unease over ourselves. What are our abilities? What are we good at? What makes us special? A look at the history of computing technology, then, is only half of the picture. The other half is the history of mankind’s thoughts about itself. This story takes us back through the history of the soul itself, and it begins at perhaps the unlikeliest of places, that moment when the woman catches the guy glancing at her breasts and admonishes him: “Hey—I’m up here.”

  Of course we look each other in the eyes by default—the face is the most subtly expressive musculature in the body, for one, and knowing where the other person is looking is a big part of communication (if their gaze darts to the side inexplicably, we’ll perk up and look there too). We look each other in the eyes and face because we care about what the other person is feeling and thinking and attending to, and so to ignore all this information in favor of a mere ogle is, of course, disrespectful.

  In fact, humans are known to have the largest and most visible sclera—the “whites” of the eyes—of any species. This fact intrigues scientists, because it would seem actually to be a considerable hindrance: imagine, for example, the classic war movie scene where the soldier dresses in camouflage and smears his face with green and brown pigment—but can do nothing about his conspicuously white sclera, beaming bright against the jungle. There must be some reason humans developed it, despite its obvious costs. In fact, the advantage of visible sclera—so goes the “cooperative eye hypothesis”—is precisely that it enables humans to see clearly, and from a distance, which direction other humans are looking. Michael Tomasello at the Max Planck Institute for Evolutionary Anthropology showed in a 2007 study that chimpanzees, gorillas, and bonobos—our nearest cousins—follow the direction of each other’s heads, whereas human infants follow the direction of each other’s eyes. So the value of looking someone in the eye may in fact be something uniquely human.

  But—this happens not to be the woman’s argument in this particular case. Her argument is that she’s at eye level.

  As an informal experiment, I will sometimes ask people something like “Where are you? Point to the exact place.” Most people point to their forehead, or temple, or in between their eyes. Part of this must be the dominance, in our society anyway, of the sense of vision—we tend to situate ourselves at our visual point of view—and part of it, of course, comes from our sense, as twenty-first-centuryites, that the brain is where all the action happens. The mind is “in” the brain. The soul, if anywhere, is there too; in fact, in the seventeenth century, Descartes went so far as to try to hunt down the exact “seat of the soul” in the body, reckoning it to be the pineal gland at the center of the brain. “The part of the body in which the soul directly exercises its functions1 is not the heart at all, or the whole of the brain,” he writes. “It is rather the innermost part of the brain, which is a certain very small gland.”2

  Not the heart at all—

  Descartes’s project of trying to pinpoint the exact location of the soul and the self was one he shared with any number of thinkers and civilizations before him, but not much was thought of the brain for most of human history. The ancient Egyptian mummification process involved, for instance, preserving all of a person’s organs except the brain—thought3 to be useless—which they scrambled with hooks into a custard and scooped out through the nose. All the other major organs—stomach, intestines, lungs, liver—were put into sealed jars, and the heart alone was left in the body, because it was considered, as Carl Zimmer puts it in Soul Made Flesh, “the center of the person’s being and intelligence.”

  In fact, most cultures have placed the self in the thoracic region somewhere, in one of the organs of the chest. This historical notion of heart-based thought and feeling leaves its fossil record in the idioms and figurative language of English: “that shows a lot of heart,” we say, or “it breaks my heart,” or “in my heart of hearts.” In a number of other languages—e.g., Persian, Urdu, Hindi, Zulu—this role is played by the liver: “that shows a lot of liver,” their idioms read. And the Akkadian terms karšu (heart), kabattu (liver), and libbu (stomach) all signified, in various different ancient texts, the center of a person’s (or a deity’s) thinking, deliberation, and consciousness.
