The Most Human Human


by Brian Christian


  3. Glenn Murcutt, whom we’ll hear more from later in this chapter: “We are taught that creativity is the most important thing in architecture. Well, I don’t believe that. I think that the creative process leads to discovery, and discovery is the most important thing.”

  4. In fact, the etymology of “sabotage,” which comes from the French word sabot, meaning a type of wooden clog, is said (perhaps apocryphally) to be rooted in a story of workers destroying mechanical looms by throwing their shoes at, and into, them.

  5. See also, for instance, the Army chatbot SGT STAR, who gives very Army-like yet strangely endearing answers like the following: “Brian: What is Delta Force? SGT STAR: Delta Force is rumored to be the Army’s counter-terrorist operations unit. In line with operational security we cannot discuss, nor ponder the unit’s abilities.” He even admonishes you in an Army-like (yet strangely endearing) way: “Brian: Wow, that sounds pretty badass. SGT STAR: OK Private, watch your language or I’ll have to shut you down.” The pathetic fallacy be damned—er, darned—I couldn’t close the window without apologizing. “Sorry about that.” “That’s OK,” he said, and I felt better.

  6. There’s a very real downside to this particular style of luxury. Princeton psychologist Daniel Kahneman notes that arguments between couples are worse in luxury cars than in crappy cars, precisely because of the features they’ve paid top dollar for. It’s soundproof, so the noises of the world don’t get in. It’s comfortable, it runs smoothly and quietly, the suspension treats you gingerly. And so the argument goes on and on. Most disagreements are not 100 percent resolvable so much as they can be converted into more or less satisfactory compromises that then assume a lower priority than the other issues of life. They are terminated more from without than from within. Carl Jung puts it nicely: “Some higher or wider interest appeared on the person’s horizon, and through this broadening of his or her outlook the unsolvable problem lost its urgency.” Interruptions can be helpful.

  7. I do tend on the whole to think that words mean most when they’re composed freshly. E.g., that the sinner’s improvised confession in the booth does more, means more, than the umpteen un-site-specific Hail Marys he’s prescribed and recites by rote.

  5. Getting Out of Book

  Success in distinguishing when a person is lying and when a person is telling the truth is highest when … the lie is being told for the first time; the person has not told this type of lie before.

  –PAUL EKMAN

  For Life is a kind of Chess …

  –BENJAMIN FRANKLIN

  How to Open

  Entering the Brighton Centre, I found my way to the Loebner Prize competition. Stepping into the contest room, I saw rows of seating where a handful of audience members had already gathered, and up front what could only be the bot programmers worked hurriedly, plugging in tangles of wires and making the last flurries of keystrokes. Before I could get too good a look at them, or they at me, the test’s organizer this year, Philip Jackson, greeted me and led me behind a velvet curtain to the confederate area.

  Out of view of the audience and the judges, four of us sat around a table, each at a laptop set up specifically for the test: Doug, a Canadian linguistics researcher for Nuance Communications; Dave, an American engineer working for the Sandia National Laboratories; and Olga, a South African programmer for MathWorks. As we introduced ourselves, we could hear the judges and audience members slowly filing in, but couldn’t see them around the curtain.

  A man flittered by in a Hawaiian shirt, talking a mile a minute and devouring finger sandwiches. Though I had never met him before, I knew instantly he could only be one person: Hugh Loebner. Everything was in place, we were told, between bites, and the first round of the test would be starting momentarily.

  We four confederates grew quiet, staring at the blinking cursors on our laptops. I tried to appear relaxed and friendly with Dave, Doug, and Olga, but they had come to England for the speech technology conference, and were just here this morning because it sounded interesting. I had come all this way just for the test. My hands were poised, hummingbird-like, over the keyboard, like a nervous gunfighter’s over his holsters.

  The cursor, blinking. I, unblinking.

  Then all at once, letters and words began to materialize—

  Hi how are you doing?

  The Turing test had begun.

  And all of a sudden—it was the strangest thing. I had the distinct sensation of being trapped. Like that scene in so many movies and television shows where the one character, on the brink of death or whatever, says, breathlessly, “I have something to tell you.” And the other character always, it seems to me, says, “Oh my God, I know, me too. Do you remember that time, when we were scuba diving and we saw that starfish that was curled up and looked like the outline of South America, and then later, when I was back on the boat and peeling my sunburn, I said that it reminded me of this song, but I couldn’t remember the name of the song? It just came to me today—” And the whole time we’re thinking, Shut up, you fool!

  I learned from reading the Loebner Prize transcripts that there are two types of judges: the small-talkers and the interrogators. The latter are the ones that go straight in with word problems, spatial-reasoning questions, deliberate misspellings … They’re laying down a verbal obstacle course and you have to run it. This type of thing is extraordinarily hard for programmers to prepare against, because anything goes—and this is (a) the reason that Turing had language, and conversation, in mind as his test, because it is really, in some sense, a test of everything, and (b) the kind of conversation Turing seemed to have envisioned, judging from the hypothetical conversation snippets in his 1950 paper. The downside to the give-’em-the-third-degree approach is that there’s not much room to express yourself, personality-wise. Presumably, any attempts to respond idiosyncratically are treated as coy evasions for which you get some kind of Turing test demerits.

  The small-talk approach has the advantage that it’s easier to get a sense of who a person is—if there indeed is a person, which is, of course, the if of the conversation. And that style of conversation comes more naturally to layperson judges. For one reason or another, it’s been encouraged among Loebner Prize judges, both explicitly and implicitly, at various points in time. It’s come to be known as the “strangers on a plane” paradigm. The downside of this is that these types of conversations are, in some sense, uniform: familiar in a way that allows a programmer to anticipate a number of the questions.

  So here was a small-talk, stranger-on-a-plane judge, it seemed. I had this odd sensation of being in that classic film/TV position. “I have something to tell you.” But that something was … myself. The template conversation spread out before me: Good, you? / Pretty good. Where are you from? / Seattle. How about yourself? / London. / Oh, so not such a far trip, then, huh? / Nope, just two hours on the train. How’s Seattle this time of year? / Oh, it’s nice, but you know, of course the days are getting shorter … And more and more I realized that it, the conversational boilerplate, every bit as much as the bots, was the enemy. Because it—“cliché” coming from a French onomatopoeia for the printing process, words being reproduced without either alteration or understanding—is what bots are made of.

  I started typing.

  hey there!

  Enter.

  i’m good, excited to actually be typing

  Enter.

  how are you?

  Enter.

  Four minutes thirty seconds. My fingers tapped and fluttered anxiously.

  I could just feel the clock grinding away while we lingered over the pleasantries. I felt—and this is a lot to feel at “Hi, how are you doing?”—this desperate urge to get off the script, cut the crap, cut to the chase. Because I knew that the computers could do the small-talk thing; it’d be playing directly into their preparation. How, I was thinking as I typed back a similarly friendly and unassuming greeting, do I get that lapel-shaking, shut-up-you-fool moment to happen? Once those lapels were shaken, of course, I had no idea what to say next. But I’d cross that bridge when I got there. If I got there.

  Getting Out of Book

  The biggest AI showdown of the twentieth century happened at a chessboard: grandmaster and world champion Garry Kasparov vs. supercomputer Deep Blue. This was May 1997, the Equitable Building, thirty-fifth floor, Manhattan. The computer won.

  Some people think Deep Blue’s victory was a turning point for AI, while others claim it didn’t prove a thing. The match and its ensuing controversy form one of the biggest landmarks in the uneasy and shifting relationship between artificial intelligence and our sense of self. They also form a key chapter in the process by which computers, in recent years, have altered high-level chess forever—so much so that in 2002 one of the greatest players of the twentieth century, Bobby Fischer, declared chess “a dead game.”

  It is around this same time that a reporter named Neil Strauss writes an article on a worldwide community of pickup artists, beginning a long process in which Strauss ultimately, himself, becomes one of the community’s leaders and most outspoken members. Along the course of these experiences, detailed in his 2005 bestseller, The Game, Strauss is initially awed by his mentor Mystery’s “algorithms of how to manipulate social situations.” Over the course of the book, however, this amazement gradually turns to horror as an army of “social robots,” following Mystery’s method to a tee, descend on the nightlife of Los Angeles, rendering bar patter “dead” in the same ways—and for the same reasons—that Fischer declared computers to have “killed” chess.

  At first glance it would seem, of course, that no two subjects could possibly be further apart than an underground society of pickup artists and supercomputer chess. What on earth do these two narratives have to do with each other—and what do they have to do with asserting myself as human in the Turing test?

  The answer is surprising, and it hinges on what chess players call “getting out of book.” We’ll look at what that means in chess and in conversation, how to make it happen, and what the consequences are if you don’t.

  All the Beauty of Art

  At one point in his career, the famous twentieth-century French artist Marcel Duchamp gave up art, in favor of something he felt was even more expressive, more powerful: something that “has all the beauty of art—and much more.” It was chess. “I have come to the personal conclusion,” Duchamp wrote, “that while all artists are not chess players, all chess players are artists.”

  The scientific community, by and large, seemed to agree with that sentiment. Douglas Hofstadter’s 1980 Pulitzer Prize–winning Gödel, Escher, Bach, written at a time when computer chess was over twenty-five years old, advocates “the conclusion that profoundly insightful chess-playing draws intrinsically on central facets of the human condition.” “All of these elusive abilities … lie so close to the core of human nature itself,” Hofstadter says, that computers’ “mere brute-force … [will] not be able to circumvent or shortcut that fact.”

  Indeed, Gödel, Escher, Bach places chess alongside things like music and poetry as one of the most uniquely and expressively human activities of life. Hofstadter argues, rather emphatically, that a world-champion chess program would need so much “general intelligence” that it wouldn’t even be appropriate to call it a chess program at all. “I’m bored with chess. Let’s talk about poetry,” he imagines it responding to a request for a game. In other words, world-champion chess means passing the Turing test.

  This was the esteem in which chess, “the game of kings,” the mandatory part of a twelfth-century knight’s training after “riding, swimming, archery, boxing, hawking, and verse writing,” the game played by political and military thinkers from Napoleon, Franklin, and Jefferson to Patton and Schwarzkopf, was held, from its modern origins in fifteenth-century Europe up through the 1980s. Intimately bound to and inseparable from the human condition; expressive and subtle as art. But already by the 1990s, the tune was changing. Hofstadter: “The first time I … saw … a graph [of chess machine ratings over time] was in an article in Scientific American … and I vividly remember thinking to myself, when I looked at it, ‘Uh-oh! The handwriting is on the wall!’ And so it was.”1

  A Defense of the Whole Human Race

  Indeed, it wasn’t long before IBM was ready to propose a meeting in 1996 between their Deep Blue machine and Garry Kasparov, the reigning world champion of chess, the highest-rated player of all time, and some say the greatest who ever lived.

  Kasparov accepted: “To some extent, this match is a defense of the whole human race. Computers play such a huge role in society. They are everywhere. But there is a frontier that they must not cross. They must not cross into the area of human creativity.”

  Long story short: Kasparov stunned the nation by losing the very first game—while the IBM engineers toasted themselves over dinner, he had a kind of late-night existential crisis, walking the icy Philadelphia streets with one of his advisers and asking, “Frederic, what if this thing is invincible?” But he hit back, hard, winning three of the next five games and drawing the other two, to win the match with an entirely convincing 4–2 score. “The sanctity of human intelligence seemed to dodge a bullet,” reported the New York Times at the match’s end, although I think that might be a little overgenerous. The machine had drawn blood. It had proven itself formidable. But ultimately, to borrow an image from David Foster Wallace, it was “like watching an extremely large and powerful predator get torn to pieces by an even larger and more powerful predator.”

  IBM and Kasparov agreed to a rematch a year later in Manhattan, and in 1997 Kasparov sat down to another six-game series with a new version of the machine: faster—twice as fast, in fact—sharper, more complex. And this time, things didn’t go quite so well. In fact, by the morning of the sixth, final game of the rematch, the score is tied, and Kasparov has the black pieces: it’s the computer’s “serve.” And then, with the world watching, Kasparov plays what will be the quickest loss of his entire career. A machine defeats the world champion.

  Kasparov, of course, immediately proposes a 1998 “best out of three” tiebreaker match for all the marbles—“I personally guarantee I will tear it in pieces”—but as soon as the dust settles and the press walks away, IBM quietly cuts the team’s funding, reassigns the engineers, and begins to slowly take Deep Blue apart.

  Doc, I’m a Corpse

  When something happens that creates a cognitive dissonance, when two of our beliefs are shown to be incompatible, we’re still left with the choice of which one to reject. In academic philosophy circles there’s a famous joke about this:

  A guy comes in to the doctor’s, says, “Doc, I’m a corpse. I’m dead.”

  The doctor says, “Well, are corpses … ticklish?”

  “Course not, doc!”

  Then the doctor tickles the guy, who giggles and squirms away. “See?” says the doctor. “There you go.”

  “Oh my God, you’re right, doc!” the man exclaims. “Corpses are ticklish!”

  There’s always more than one way to revise our beliefs.

  Retreat to the Keep

  Chess is generally considered to require “thinking” for skillful play; a solution of this problem will force us either to admit the possibility of a mechanized thinking or to further restrict our concept of “thinking.”

  –CLAUDE SHANNON

  So what happened after the Deep Blue match?

  Most people were divided between two conclusions: (1) accept that the human race was done for, that intelligent machines had finally come to be and had ended our supremacy over all creation (which, as you can imagine, essentially no one was prepared to do), or (2) what most of the scientific community chose, which was essentially to throw chess, the game Goethe called “a touchstone of the intellect,” under the bus. The New York Times interviewed the nation’s most prominent thinkers on AI immediately after the match, and our familiar Douglas Hofstadter, seeming very much the tickled corpse, says, “My God, I used to think chess required thought. Now, I realize it doesn’t.”

  Other academics seemed eager to kick chess when it was down. “From a purely mathematical point of view, chess is a trivial game,” says philosopher and UC Berkeley professor John Searle. (There are ten thousand billion billion billion billion possible games of chess for every atom in the universe.) As the New York Times explained:

  In “Gödel, Escher, Bach” [Hofstadter] held chess-playing to be a creative endeavor with the unrestrained threshold of excellence that pertains to arts like musical composition or literature. Now, he says, the computer gains of the last decade have persuaded him that chess is not as lofty an intellectual endeavor as music and writing; they require a soul.

  “I think chess is cerebral and intellectual,” he said, “but it doesn’t have deep emotional qualities to it, mortality, resignation, joy, all the things that music deals with. I’d put poetry and literature up there, too. If music or literature were created at an artistic level by a computer, I would feel this is a terrible thing.”

  In Gödel, Escher, Bach, Hofstadter writes, “Once some mental function is programmed, people soon cease to consider it as an essential ingredient of ‘real thinking.’ ” It’s a great irony, then, that he was among the first to throw chess out of the boat.

  If you had to imagine one human being completely unable to accept either of these conclusions—(a) humankind is doomed, or (b) chess is trivial—and you’re imagining that this person’s name is “Garry Kasparov,” you’re right. His main rhetorical tear after the match, as you can well imagine, was, That didn’t count.

  Garry Kasparov may have lost the final game, he says. But Deep Blue didn’t win it.

 
