The Most Human Human


by Brian Christian


  Shiv practices what he preaches. His and his wife’s marriage was arranged—they decided to tie the knot after talking for twenty minutes14—and they committed to buying their house at first sight.

  Coming Back to Our Senses

  All this “hemispheric bias,” you might call it, or rationality bias, or analytical bias—for it’s in actuality more about analytical thought and linguistic articulation than about the left hemisphere per se—both compounds and is compounded by a whole host of other prevailing societal winds to produce some decidedly troubling outcomes.

  I think back, for instance, to my youthful days in CCD—Confraternity of Christian Doctrine, or Catholicism night classes for kids in secular public schools. The ideal of piousness, it seemed to me in those days, was the life of a cloistered monk, attempting a kind of afterlife on earth by living, as much as possible, apart from the “creatural” aspects of life. The Aristotelian ideal: a life spent entirely in contemplation. No rich foods, no aestheticizing the body with fashion, no reveling in the body qua body through athletics—nor dancing—nor, of course, sex. On occasion making music, yes, but music so beholden to prescribed rules of composition and to mathematical ratios of harmony that it too seemed to aspire toward pure analytics and detachment from the general filth and fuzziness of embodiment.

  And so for many of my early years I distrusted my body, and all the weird feelings that came with it. I was a mind, but merely had a body—whose main purpose, it seemed, was to move the mind around and otherwise only ever got in its way. I was consciousness—in Yeats’s unforgettable words—“sick with desire / And fastened to a dying animal.” After that animal finally did die, it was explained to me, things would get a lot better. They then made sure to emphasize that suicide is strictly against the rules. We were all in this thing together, and we all just had to wait this embodiment thing out.

  Meanwhile, on the playground, I was contemptuous of the seemingly Neanderthal boys who shot hoops and grunted their way through recess—meanwhile, my friends and I talked about MS-DOS and Stephen Hawking. I tended to view the need to eat as an annoyance—I’d put food in my mouth to hush my demanding stomach the way a parent gives a needy infant a pacifier. Eating was annoying; it got in the way of life. Peeing was annoying, showering was annoying, brushing the crud off my teeth every morning and night was annoying, sleeping a third of my life away was annoying. And sexual desire—somehow I’d developed the idea that my first boyhood forays into masturbation had stamped my one-way ticket to hell—sexual desire was so annoying that I was pretty sure it had already cost me everything.

  I want to argue that this Aristotelian/Stoic/Cartesian/Christian emphasis on reason, on thought, on the head, this distrust of the senses, of the body, has led to some profoundly strange behavior—and not just in philosophers, lawyers, economists, neurologists, educators, and the hapless would-be pious, but seemingly everywhere. In a world of manual outdoor labor, the sedentary and ever-feasting nobility made a status symbol of being overweight and pale; in a world of information work, it is a luxury to be tan and lean, if artificially or unhealthily so. Both scenarios would seem less than ideal. The very fact that we, as a rule, must deliberately “get exercise” bodes poorly: I imagine the middle-class city dweller paying money for a parking space or transit pass in lieu of walking a mile or two to the office, who then pays more money for a gym membership (and drives or buses there). I grew up three miles from the Atlantic Ocean; during the summer, tanning salons a block and a half from the beach would still be doing a brisk business. To see ourselves as distinct and apart from our fellow creatures is to see ourselves as distinct and apart from our bodies. The results of adopting this philosophy have been rather demonstrably weird.

  Turing Machines and the Corporeal IOU

  Wanting to get a handle on how these questions of soul and body intersect computer science, I called up the University of New Mexico’s and the Santa Fe Institute’s Dave Ackley, a professor in the field of artificial life.

  “To me,” he says, “and this is one of the rants that I’ve been on, that ever since von Neumann and Turing and the ENIAC guys15 built machines, the model that they’ve used is the model of the conscious mind—one thing at a time, nothing changing except by conscious thought—no interrupts, no communication from the outside world. So in particular the computation was not only unaware of the world; it didn’t realize that it had a body, so the computation was disembodied, in a very real and literal sense. There’s this IOU for a body that we wrote to computers ever since we designed them, and we haven’t really paid it off yet.”

  I end up wondering if we even set out to owe computers a body. With the Platonic/Cartesian ideal of sensory mistrust, it seems almost as if computers were designed with the intention of our becoming more like them—in other words, computers represent an IOU of disembodiment that we wrote to ourselves. Indeed, certain schools of thought seem to imagine computing as a kind of oncoming rapture. Ray Kurzweil (in 2005’s The Singularity Is Near), among several other computer scientists, speaks of a utopian future where we shed our bodies and upload our minds into computers and live forever, virtual, immortal, disembodied. Heaven for hackers.

  To Ackley’s point, most work on computation has not traditionally been on dynamic systems, or interactive ones, or ones integrating data from the real world in real time. Indeed, theoretical models of the computer—the Turing machine, the von Neumann architecture—seem like reproductions of an idealized version of conscious, deliberate reasoning. As Ackley puts it, “The von Neumann machine is an image of one’s conscious mind where you tend to think: you’re doing long division, and you run this algorithm step-by-step. And that’s not how brains operate. And only in various circumstances is that how minds operate.”
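
  To make that picture concrete, here is a toy sketch of the step-by-step model (my own illustration, in Python, with an invented rule table, not anything from Ackley or Turing): the machine reads one symbol, applies one rule, moves one square, and nothing else in its world changes.

```python
# A toy Turing machine: one symbol read, one rule applied, one step at a time.
# The transition table is invented for illustration: it flips a tape of 1s
# to 0s and then halts. Nothing changes except by the current rule.

def run_turing_machine(tape, rules, state="start", blank="_"):
    tape = dict(enumerate(tape))   # sparse tape: position -> symbol
    head = 0
    while state != "halt":
        symbol = tape.get(head, blank)
        new_symbol, move, state = rules[(state, symbol)]
        tape[head] = new_symbol
        head += 1 if move == "R" else -1
    return "".join(tape[i] for i in sorted(tape))

rules = {
    ("start", "1"): ("0", "R", "start"),   # flip a 1, move right, keep going
    ("start", "_"): ("_", "R", "halt"),    # ran off the input: stop
}

print(run_turing_machine("111", rules))    # -> "000_"
```

  Long division feels exactly like this; recognizing your mother, as Siegelmann is about to point out, does not.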

  I spoke next with University of Massachusetts theoretical computer scientist Hava Siegelmann, who agreed. “Turing was very [mathematically] smart, and he suggested the Turing machine as a way to describe a mathematician.16 It’s [modeling] the way a person solves a problem, not the way he recognizes his mother.” (Which latter problem, as Sacks suggests, is of the “right hemisphere” variety.)

  For some time in eighteenth-century Europe, there was a sweeping fad of automatons: contraptions made to look and act as much like real people or animals as possible. The most famous and celebrated of these was the “Canard Digérateur”—the “Digesting Duck”—created by Jacques de Vaucanson in 1739. The duck provoked such a sensation that Voltaire himself wrote of it, albeit with tongue in cheek: “Sans … le canard de Vaucanson vous n’auriez rien qui fit ressouvenir de la gloire de la France,” sometimes humorously translated as “Without the shitting duck we’d have nothing to remind us of the glory of France.”

  Actually, despite Vaucanson’s claims that he had a “chemistry lab” inside the duck mimicking digestion, there was simply a pouch of bread crumbs, dyed green, stashed behind the anus, to be released shortly after eating. Stanford professor Jessica Riskin speculates that the lack of attempt to simulate digestion had to do with a feeling at the time that the “clean” processes of the body could be mimicked (muscle, bone, joint) with gears and levers but that the “messy” processes (mastication, digestion, defecation) could not. Is it possible that something similar happened in our approach to mimicking the mind?

  In fact, the field of computer science split, very early on, between researchers who wanted to pursue more “clean,” algorithmic types of structures and those who wanted to pursue more “messy” and gestalt-oriented structures. Though both have made progress, the “algorithmic” side of the field has, from Turing on, completely dominated the more “statistical” side. That is, until recently.

  Translation

  There’s been interest in neural networks and analog computation and more statistical, as opposed to algorithmic, computing since at least the early 1940s, but the dominant paradigm by far was the algorithmic, rule-based paradigm—that is, up until about the turn of the century.

  If you isolate a specific type of problem—say, the problem of machine translation—you see the narrative clear as day. Early approaches were about building huge “dictionaries” of word-to-word pairings, based on meaning, and algorithms for turning one syntax and grammar into another (e.g., if going to Spanish from English, move the adjectives that come before a noun so that they come after it).
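
  A caricature of that early approach might look like the following sketch (mine, in Python, with a made-up three-word lexicon and a single reordering rule; real systems were enormously larger, but the shape was the same):

```python
# A caricature of early rule-based machine translation: a word-for-word
# dictionary plus a syntactic reordering rule. The tiny lexicon and the
# part-of-speech tags are invented for illustration.

LEXICON = {"the": "el", "red": "rojo", "house": "casa"}
POS = {"the": "DET", "red": "ADJ", "house": "NOUN"}

def translate(sentence):
    words = sentence.lower().split()
    # Rule: English "ADJ NOUN" becomes Spanish "NOUN ADJ".
    reordered = []
    i = 0
    while i < len(words):
        if i + 1 < len(words) and POS.get(words[i]) == "ADJ" and POS.get(words[i + 1]) == "NOUN":
            reordered += [words[i + 1], words[i]]
            i += 2
        else:
            reordered.append(words[i])
            i += 1
    # Then: look each word up in the bilingual dictionary.
    return " ".join(LEXICON.get(w, w) for w in reordered)

print(translate("the red house"))   # -> "el casa rojo"
```

  Even the toy betrays the trouble: the output should be “la casa roja,” and getting the gender agreement right means another rule, and then another rule after that.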

  To get a little more of the story, I spoke on the phone with computational linguist Roger Levy of UCSD. Related to the problem of translation is the problem of paraphrase. “Frankly,” he says, “as a computational linguist, I can’t imagine trying to write a program to pass the Turing test. Something I might do as a confederate is to take a sentence, a relatively complex sentence, and say, ‘You said this. You could also express the meaning with this, this, this, and this.’ That would be extremely difficult, paraphrase, for a computer.” But, he explains, such specific “demonstrations” on my part might backfire: they come off as unnatural, and I might have to explicitly lay out a case for why what I’m saying is hard for a computer to do. “All this depends on the informedness level of the judge,” he says. “The nice thing about small talk, though, is that when you’re in the realm of heavy reliance on pragmatic inferences, that’s very hard for a computer—because you have to rely on real-world knowledge.”

  I ask him for some examples of how “pragmatic inferences” might work. “Recently we did an experiment in real-time human sentence comprehension. I’m going to give you an ambiguous sentence: ‘John babysat the child of the musician, who is arrogant and rude.’ Who’s rude?” I said that to my mind it’s the musician. “Okay, now: ‘John detested the child of the musician, who is arrogant and rude.’ ” Now it sounds like the child is rude, I said. “Right. No system in existence has this kind of representation.”

  It turns out that all kinds of everyday sentences require more than just a dictionary and a knowledge of grammar—compare “Take the pizza out of the oven and then close it” with “Take the pizza out of the oven and then put it on the counter.” To make sense of the pronoun “it” in these examples, and in ones like “I was holding the coffee cup and the milk carton, and just poured it in without checking the expiration date,” requires an understanding of how the world works, not how the language works. (Even a system programmed with basic facts like “coffee and milk are liquids,” “cups and cartons are containers,” “only liquids can be ‘poured,’ ” etc., won’t be able to tell whether pouring the coffee into the carton or the milk into the cup makes more sense.)
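
  That parenthetical point can be made mechanical. In the sketch below (mine, with invented “facts” and an invented encoding of the sentence), a program armed with exactly those basic properties still finds two perfectly legal antecedents for “it” and has no grounds for choosing between them:

```python
# Type constraints alone don't resolve "it": both candidate antecedents pass
# every check, so a system with only these facts has no basis for choosing.
# The facts are invented for illustration.

FACTS = {
    "coffee": {"liquid"},
    "milk": {"liquid"},
    "cup": {"container"},
    "carton": {"container"},
}

def candidates_for_it(nouns, required=frozenset({"liquid"})):
    """Return every mentioned noun whose known properties allow it to be 'poured'."""
    return [n for n in nouns if required <= FACTS.get(n, set())]

# "I was holding the coffee cup and the milk carton, and just poured it in..."
print(candidates_for_it(["coffee", "cup", "milk", "carton"]))
# -> ['coffee', 'milk']  (still two readings; the grammar is no help)
```

  What breaks the tie for a human reader is knowing that it is milk, not coffee, that carries an expiration date; that is knowledge about the world, not about the language.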

  A number of researchers feel that the attempt to break language down with thesauri and grammatical rules is simply not going to crack the translation problem. A new approach abandons these strategies, more or less entirely. For instance, the 2006 NIST machine translation competition was convincingly won by a team from Google, stunning a number of machine translation experts: not a single human on the Google team knew the languages (Arabic and Chinese) used in the competition. And, you might say, neither did the software itself, which didn’t give a whit about meaning or about grammar rules. It simply drew from a massive database of high-quality human translation17 (mostly from the United Nations minutes, which are proving to be the twenty-first century’s digital Rosetta stone), and patched phrases together according to what had been done in the past. Five years later, these kinds of “statistical” techniques are still imperfect, but they have left the rule-based systems pretty firmly in the dust.
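
  In spirit, “patching phrases together according to what had been done in the past” can be as blunt as the following sketch (a miniature, invented parallel corpus standing in for the billions of words of UN text; the actual Google system was far more sophisticated): count how often human translators rendered each phrase a given way, and at translation time simply pick the most common rendering.

```python
# A bare-bones statistical flavor of translation: no grammar, no dictionary
# of meanings, just counts of how human translators rendered each phrase in
# a parallel corpus. The "corpus" here is invented for illustration.

from collections import Counter, defaultdict

parallel_corpus = [
    ("the united nations", "las naciones unidas"),
    ("the united nations", "las naciones unidas"),
    ("the united nations", "la onu"),
    ("general assembly", "asamblea general"),
]

# Build a phrase table: source phrase -> counts of observed translations.
phrase_table = defaultdict(Counter)
for source, target in parallel_corpus:
    phrase_table[source][target] += 1

def translate(source_phrase):
    observed = phrase_table.get(source_phrase)
    if not observed:
        return source_phrase              # never seen it: give up gracefully
    return observed.most_common(1)[0][0]  # pick what was done most often

print(translate("the united nations"))    # -> "las naciones unidas"
```

  No meaning, no grammar rules; just precedent.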

  Among the other problems in which statistical, as opposed to rule-based, systems are triumphing? One of our right-hemisphere paragons: object recognition.

  UX

  Another place where we’re seeing the left-hemisphere, totally deliberative, analytical approach erode is with respect to a concept called UX, short for User Experience—it refers to the experience a given user has using a piece of software or technology, rather than the purely technical capacities of that device. The beginnings of computer science were dominated by concerns for the technical capacities, and the exponential growth in processing power18 during the twentieth century made the 1990s, for instance, an exciting time. Still, it wasn’t a beautiful time. My schoolmate brought us over to show us the new machine he’d bought—it kept overheating, so he’d opened the case up and let the processor and motherboard dangle off the edge of the table by the wires, where he’d set up his room fan to blow the hot air out the window. The keyboard keys stuck when you pressed them. The mouse required a cramped, T. rex–claw grip. The monitor was small and tinted its colors slightly. But computationally, the thing could scream.

  This seemed the prevailing aesthetic of the day. My first summer job, in eighth grade—rejected as a busboy at the diner, rejected as a caddy at the golf course, rejected as a summer camp counselor—was at a web design firm, where I was the youngest employee by at least a decade, and the lowest paid by a factor of 500 percent, and where my responsibilities in a given day would range from “Brian, why don’t you restock the toilet paper and paper towels in the bathrooms” to “Brian, why don’t you perform some security testing on the new e-commerce intranet platform for Canon.” I remember my mentor figure at the web design company saying, in no uncertain terms, “function over form.”

  The industry as a whole seemed to take this mantra so far that function began trumping function: for a while, an arms race between hardware and software created the odd situation that computers were getting exponentially faster but no faster at all to use, as software made ever-larger demands on system resources, at a rate that matched and sometimes outpaced hardware improvements. (For instance, Office 2007 running on Windows Vista uses twelve times as much memory and three times as much processing power as Office 2000 running on Windows 2000, with nearly twice as many execution threads as the immediately previous version.) This is sometimes called “Andy and Bill’s Law,” referring to Andy Grove of Intel and Bill Gates of Microsoft: “What Andy giveth, Bill taketh away.” Users were being subjected to the very same lags and lurches on their new machines, despite exponentially increasing computing power, all of which was getting sopped up by new “features.” Two massive companies pouring untold billions of dollars and thousands of man-years into advancing the cutting edge of hardware and software, yet the advances effectively canceled out. The user experience went nowhere.

  I think we’re just in the past few years seeing the consumer and corporate attitude changing. Apple’s first product, the Apple I, did not include a keyboard or a monitor—it didn’t even include a case to hold the circuit boards. But it wasn’t long before they began to position themselves as prioritizing user experience ahead of power—and ahead of pricing. Now they’re known, by admirers and deriders alike, for machines that manage something which seemed either impossible or irrelevant, or both, until a few years ago—elegance.

  Likewise, as computing technology moves increasingly toward mobile devices, product development becomes less about the raw computing horsepower and more about the overall design of the product and its fluidity, reactivity, and ease of use. This fascinating shift in computing emphasis may be the cause, effect, or correlative of a healthier view of human intelligence—not so much that it is complex and powerful, per se, as that it is reactive, responsive, sensitive, nimble. The computers of the twentieth century helped us to see that.

  Centering Ourselves

  We are computer tacked onto creature, as Sacks puts it. And the point isn’t to denigrate one, or the other, any more than a catamaran ought to become a canoe. The point isn’t that we’re half lifted out of beastliness by reason and can try to get even further through force of will. The tension is the point. Or, perhaps to put it better, the collaboration, the dialogue, the duet.

  The word games Scattergories and Boggle are played differently but scored the same way. Players, each with a list of words they’ve come up with, compare lists and cross off every word that appears on more than one list. The player with the most words remaining on her sheet wins. I’ve always fancied this a rather cruel way of keeping score. Imagine a player who comes up with four words, and each of her four opponents only comes up with one of them. The round is a draw, but it hardly feels like one … As the line of human uniqueness pulls back ever more, we put the eggs of our identity into fewer and fewer baskets; then the computer comes along and takes that final basket, crosses off that final word. And we realize that uniqueness, per se, never had anything to do with it. The ramparts we built to keep other species and other mechanisms out also kept us in. In breaking down that last door, computers have let us out. And back into the light.
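
  For the literal-minded, the scoring rule works out exactly as cruelly as advertised; a quick sketch (the words are invented) confirms that the lopsided round above really does come to nothing all around:

```python
# The Scattergories/Boggle scoring rule from the passage: any word that
# appears on more than one player's list is crossed off for everyone,
# and you score only what remains.

from collections import Counter

def score(lists):
    counts = Counter(word for words in lists for word in set(words))
    return [sum(1 for w in set(words) if counts[w] == 1) for words in lists]

# One player finds four words; each of four opponents finds exactly one of them.
player_a = ["axle", "brine", "crux", "delta"]
opponents = [["axle"], ["brine"], ["crux"], ["delta"]]
print(score([player_a] + opponents))   # -> [0, 0, 0, 0, 0]: officially a draw
```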

  Who would have imagined that the computer’s earliest achievements would be in the domain of logical analysis, a capacity held to be what made us most different from everything on the planet? That it could drive a car and guide a missile before it could ride a bike? That it could make plausible preludes in the style of Bach before it could make plausible small talk? That it could translate before it could paraphrase? That it could spin half-plausible postmodern theory essays19 before it could be shown a chair and say, as any toddler can, “chair”? We forget what the impressive things are. Computers are reminding us.

  One of my best friends was a barista in high school: over the course of the day she would make countless subtle adjustments to the espresso being made, to account for everything from the freshness of the beans to the temperature of the machine to the barometric pressure’s effect on the steam volume, meanwhile manipulating the machine with octopus-like dexterity and bantering with all manner of customers on whatever topics came up. Then she goes to college, and lands her first “real” job—rigidly procedural data entry. She thinks longingly back to her barista days—a job that actually made demands of her intelligence.

 
