The Most Human Human
I think the odd fetishization of analytical thinking, and the concomitant denigration of the creatural—that is, animal—and embodied aspect of life is something we’d do well to leave behind. Perhaps we are finally, in the beginnings of an age of AI, starting to be able to center ourselves again, after generations of living “slightly to one side.”
Besides, we know, in our capitalist workforce and precapitalist-workforce education system, that specialization and differentiation are important. There are countless examples, but I think, for instance, of the 2005 book Blue Ocean Strategy: How to Create Uncontested Market Space and Make the Competition Irrelevant, whose main idea is to avoid the bloody “red oceans” of strident competition and head for “blue oceans” of uncharted market territory. In a world of only humans and animals, biasing ourselves in favor of the left hemisphere might make some sense. But the arrival of computers on the scene changes that dramatically. The bluest waters aren’t where they used to be.
Add to this that humans’ contempt for “soulless” animals, their unwillingness to think of themselves as descended from their fellow “beasts,” is now being undercut on all kinds of fronts: growing secularism and empiricism, growing appreciation for the cognitive and behavioral abilities of organisms other than ourselves, and, not coincidentally, the entrance onto the scene of a being far more soulless than any common chimpanzee or bonobo—in this sense AI may even turn out to be a boon for animal rights.
Indeed, it’s entirely possible that we’ve seen the high-water mark of the left-hemisphere bias. I think the return of a more balanced view of the brain and mind—and of human identity—is a good thing, one that brings with it a changing perspective on the sophistication of various tasks.
It’s my belief that only experiencing and understanding truly disembodied cognition, only seeing the coldness and deadness and disconnectedness of something that truly does deal in pure abstraction, divorced from sensory reality, only this can snap us out of it. Only this can bring us, quite literally, back to our senses.
One of my graduate school advisers, poet Richard Kenney, describes poetry as “the mongrel art—speech on song,” an art he likens to lichen: that organism which is actually not an organism at all but a cooperation between fungi and algae so common that the cooperation itself seemed a species. When, in 1867, the Swiss botanist Simon Schwendener first proposed the idea that lichen was in fact two organisms, Europe’s leading lichenologists ridiculed him—including Finnish botanist William Nylander, who had taken to making allusions to “stultitia Schwendeneriana,” fake botanist-Latin for “Schwendener the simpleton.” Of course, Schwendener happened to be completely right. The lichen is an odd “species” to feel kinship with, but there’s something fitting about it.
What appeals to me about this notion—the mongrel art, the lichen, the monkey and robot holding hands—is that it seems to describe the human condition too. Our very essence is a kind of mongrelism. It strikes me that some of the best and most human emotions come from this lichen state of computer/creature interface, the admixture, the estuary of desire and reason in a system aware enough to apprehend its own limits, and to push at them: curiosity, intrigue, enlightenment, wonder, awe.
Ramachandran: “One patient I saw—a neurologist from New York—suddenly at the age of sixty started experiencing epileptic seizures arising from his right temporal lobe. The seizures were alarming, of course, but to his amazement and delight he found himself becoming fascinated by poetry, for the first time in his life. In fact, he began thinking in verse, producing a voluminous outflow of rhyme. He said that such a poetic view gave him a new lease on life, a fresh start just when he was starting to feel a bit jaded.”
Artificial intelligence may very well be such a seizure.
1. When I first read about this as an undergraduate, it seemed ridiculous, the notion that a nonphysical, nonspatial entity like the soul would somehow deign to physicality/locality in order to “attach” itself to the physical, spatial brain at any specific point—it just seemed ridiculous to try to locate something non-localized. But later that semester, jamming an external wireless card into my old laptop and hopping online, I realized that the idea of accessing something vague, indefinite, all surrounding, and un-locatable—my first reaction to my father explaining how he could “go to the World Wide Web” was to say, “Where’s that?”—through a specific physical component or “access point” was maybe not so prima facie laughable after all.
2. Depending on your scientific and religious perspectives, the soul/body interface might have to be a special place where normal, deterministic cause-and-effect physics breaks down. This is metaphysically awkward, and so it makes sense that Descartes wants to shrink that physics-violation zone down as much as possible.
3. !
4. The word “psyche” has, itself, entered English as a related, but not synonymous, term to “soul”—one of the many quirks of history that make philology and etymology so convoluted and frustrating and interesting.
5. Fine indeed. “A piece of your brain the size of a grain of sand would contain one hundred thousand neurons, two million axons, and one billion synapses, all ‘talking to’ each other.”
6. Philolaus’s different but related view was that the soul is a kind of “attunement” of the body.
7. The Stoics had another interesting theory, which foreshadows nicely some of the computer science developments of the 1990s. Plato’s theory of the tripartite soul could make sense of situations where you feel ambivalent or “of two minds” about something—he could describe it as a clash between two different parts of the soul. But the Stoics only had one soul with one set of functions, and they took pains to describe it as “indivisible.” How, then, to explain ambivalence? In Plutarch’s words, it is “a turning of the single reason in both directions, which we do not notice owing to the sharpness and speed of the change.” In the ’90s, I recall seeing an ad on television for Windows 95, where four different animations were playing, one after the other, as a mouse pointer clicked from one to the other. This represented old operating systems. All of a sudden all four animations began running simultaneously: this represented Windows 95, with multitasking. Until around 2007, when multiprocessor machines became increasingly standard, multitasking was simply—Stoic-style—switching back and forth between processes, just as with the old operating systems the ad disparages, except doing so automatically, and really fast.
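As a toy sketch of that kind of rapid switching (an illustration in Python generators only, not how Windows 95 or any real scheduler works), a single loop plays the role of the one processor, handing attention from task to task:

import itertools

def animation(name):
    # Each "animation" is an endless sequence of frames.
    for frame in itertools.count(1):
        yield f"{name}: frame {frame}"

tasks = [animation("A"), animation("B"), animation("C"), animation("D")]

# The "scheduler": round-robin switching, one task at a time, very fast.
for step in range(8):
    current = tasks[step % len(tasks)]
    print(next(current))

From the outside, all four animations appear to advance at once; inside, there is only ever one thing happening at a time.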
8. This is an interesting nuance, because of how crucial the subjective/objective distinction has been to modern philosophy of mind. In fact, subjective experience seems to be the linchpin, the critical defensive piece, in a number of arguments against things like machine intelligence. The Greeks didn’t seem too concerned with it.
9. In Hendrik Lorenz’s words: “When the soul makes use of the senses and attends to perceptibles, ‘it strays and is confused and dizzy, as if it were drunk.’ By contrast, when it remains ‘itself by itself’ and investigates intelligibles, its straying comes to an end, and it achieves stability and wisdom.”
10. The word “or” in English is ambiguous—“Do you want sugar or cream with your coffee?” and “Do you want fries or salad with your burger?” are actually two different types of questions. (In the first, “Yes”—meaning “both”—and “No”—meaning “neither”—are perfectly suitable answers, but in the second it’s understood that you will choose one and exactly one of the options.) We respond differently, and appropriately, to each without often consciously noticing the difference. Logicians, to be more precise, use the terms “inclusive or” and “exclusive or” for these two types of questions, respectively. In Boolean logic, “OR” refers to the inclusive or, which means “either one or the other, or both.” The exclusive or—“either one or the other, but not both”—is written “XOR.”
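A minimal sketch of the two senses in Python (the coffee-order variable names are just for illustration):

# Inclusive or ("sugar or cream?"): true if either or both hold.
# Exclusive or ("fries or salad?"): true if exactly one holds.
for sugar in (False, True):
    for cream in (False, True):
        inclusive = sugar or cream   # Boolean OR
        exclusive = sugar != cream   # XOR: one or the other, but not both
        print(f"sugar={sugar}, cream={cream} -> OR={inclusive}, XOR={exclusive}")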
11. The heart needs the brain just as much as the brain needs the heart. But the heart—with all due respect—is fungible.
12. The book’s sequel, The Upside of Irrationality, is much more sanguine about “irrationality” in its title, if somewhat less so in the text itself.
13. Neurologist Antonio Damasio showed him a series of extremely emotionally charged pictures—a severed foot, a naked woman, a burning home—to which he barely reacted. Fans of Blade Runner or Philip K. Dick will recall this as almost the spitting image of the fictitious “Voigt-Kampff test.” Good thing he didn’t live in the Blade Runner universe: Harrison Ford would have decided this man was a “replicant”—and killed him.
14. The ultimate Turing test victory, you might say.
15. John Mauchly and J. Presper Eckert, of the University of Pennsylvania. ENIAC (Electronic Numerical Integrator and Computer), built in 1946 and initially used in the calculations for the hydrogen bomb, was the first fully electronic and fully general-purpose computing machine.
16. Recall Turing: “The idea behind digital computers may be explained by saying that these machines are intended to carry out any operations which could be done by a human computer.”
17. Interestingly, this means that paraphrasing is actually harder for computers than translation, because there aren’t huge paraphrase corpora lying around ready to become statistical fodder. The only examples I can think of off the top of my head would be, ironically, competing translations: of famous works of literature and religious texts.
18. This trend is described by what’s called Moore’s Law, the 1965 prediction of Intel’s co-founder Gordon Moore that the number of transistors in a processor would double every two years.
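As back-of-the-envelope arithmetic (an illustrative sketch, not anything from Moore’s paper itself), doubling every two years compounds quickly:

# Doubling every two years means a growth factor of 2**(years / 2).
for years in (2, 10, 20, 40):
    factor = 2 ** (years / 2)
    print(f"{years} years -> roughly x{factor:,.0f}")

A decade buys a factor of about thirty-two; four decades, about a million.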
19. “If one examines capitalist discourse, one is faced with a choice: either reject nihilism or conclude that the goal of the writer is social comment, given that the premise of Foucaultist power relations is valid.” Or, “Thus, the subject is interpolated into a nihilism that includes consciousness as a paradox.” Two sentences of infinitely many at www.elsewhere.org/pomo.
4. Site-Specificity vs. Pure Technique
Introducing ELIZA
SENT: Men are all alike.
RECEIVED: In what way?
SENT: They’re always bugging us about something or other.
RECEIVED: Can you think of a specific example?
SENT: Well, my boyfriend made me come here.
RECEIVED: Your boyfriend made you come here.
SENT: He says I’m depressed much of the time.
RECEIVED: I am sorry to hear you are depressed.
SENT: It’s true. I am unhappy.
RECEIVED: Do you think coming here will help you not to be unhappy?
SENT: I need some help, that much seems certain.
RECEIVED: What would it mean to you if you got some help?
SENT: Perhaps I could learn to get along with my mother.
RECEIVED: Tell me more about your family.
When the Turing test was first proposed in 1950, it was a hypothetical: technology was nowhere near a point at which a practical test was possible. But, as we know, it got here, with the first conversational computer program to attract significant notice and attention being ELIZA,1 written in 1964 and 1965 by Joseph Weizenbaum at MIT. The history of conversational computer programs contains every bit as many colorful “characters” in the programs themselves as it does in the humans that created them, and ELIZA’s story is an interesting one. Modeled after a Rogerian therapist, ELIZA worked on a very simple principle: extract key words from the user’s own language, and pose their statements back to them. (“I am unhappy.” “Do you think coming here will help you not to be unhappy?”) If in doubt, it might fall back on some completely generic phrases like “Please go on.” This technique of fitting the user’s statements into a set of predefined patterns and responding with a prescribed phrasing of its own—called “template matching”—was ELIZA’s only capacity.
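To make “template matching” concrete, here is a minimal sketch in Python; the handful of patterns is invented for this illustration and is nothing like the scale or subtlety of Weizenbaum’s actual script:

import re

# An illustrative sketch of ELIZA-style template matching.
# These toy patterns are not Weizenbaum's actual script.
RULES = [
    (re.compile(r"\bI am ([^.?!]*)", re.IGNORECASE),
     "Do you think coming here will help you not to be {0}?"),
    (re.compile(r"\bI need ([^.?!]*)", re.IGNORECASE),
     "What would it mean to you if you got {0}?"),
    (re.compile(r"\bmy (mother|father|family)\b", re.IGNORECASE),
     "Tell me more about your family."),
]
FALLBACK = "Please go on."

def respond(statement):
    # Try each template in order; echo the matched fragment back to the user.
    for pattern, template in RULES:
        match = pattern.search(statement)
        if match:
            return template.format(*match.groups())
    return FALLBACK

print(respond("It's true. I am unhappy."))
print(respond("Perhaps I could learn to get along with my mother."))
print(respond("Well, my boyfriend made me come here."))  # none of these toy templates fit: "Please go on."

There is no understanding anywhere in this loop: the program never knows what “unhappy” or “mother” means, only that the words fit a slot in a pattern.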
The results were stunning, maybe even staggering, considering that ELIZA was essentially the first chat program ever written, with essentially no memory, no processing power, and written in just a couple hundred lines of code: many of the people who first talked with ELIZA were convinced that they were having a genuine human interaction. In some cases even Weizenbaum’s own insistence to the contrary was of no use. People would ask to be left alone to talk “in private,” sometimes for hours, and would return with reports of having had a meaningful therapeutic experience. Meanwhile, academics leaped to conclude that ELIZA represented “a general solution to the problem of computer understanding of natural language.” Appalled and horrified, Weizenbaum did something almost unheard of: an immediate about-face of his entire career. He pulled the plug on the ELIZA project, encouraged his own critics, and became one of science’s most outspoken opponents of AI research.
But in some sense the genie was already out of the bottle, and there was no going back. The basic template-matching skeleton and approach of ELIZA have been reworked and implemented in some form or other in almost every chat program since, including the contenders at the Loebner Prize. And the enthusiasm, unease, and controversy surrounding these programs have only grown.
One of the strangest twists to the ELIZA story, however, was the reaction of the medical community, which, too, decided Weizenbaum had hit upon something both brilliant and useful with ELIZA. The Journal of Nervous and Mental Disease, for example, said of ELIZA in 1966: “If the method proves beneficial, then it would provide a therapeutic tool which can be made widely available to mental hospitals and psychiatric centers suffering a shortage of therapists. Because of the time-sharing capabilities of modern and future computers, several hundred patients an hour could be handled by a computer system designed for this purpose. The human therapist, involved in the design and operation of this system, would not be replaced, but would become a much more efficient man since his efforts would no longer be limited to the one-to-one patient-therapist ratio as now exists.”
Famed scientist Carl Sagan, in 1975, concurred: “No such computer program is adequate for psychiatric use today, but the same can be remarked about some human psychotherapists. In a period when more and more people in our society seem to be in need of psychiatric counseling, and when time sharing of computers is widespread, I can imagine the development of a network of computer psychotherapeutic terminals, something like arrays of large telephone booths, in which, for a few dollars a session, we would be able to talk with an attentive, tested, and largely non-directive psychotherapist.”
Incredibly, it wouldn’t be long into the twenty-first century before this prediction—again, despite all possible protestation Weizenbaum could muster—came true. The United Kingdom’s National Institute for Health and Clinical Excellence recommended in 2006 that cognitive-behavioral therapy software (which, in this case, doesn’t pretend it’s a human) be made available in England and Wales as an early treatment option for patients with mild depression.
Scaling Therapy
With ELIZA, we get into some serious, profound, even grave questions about psychology. Therapy is always personal. But does it actually need to be personalized? The idea of having someone talk with a computerized therapist is not really all that much less intimate than having them read a book.2 Take, for instance, the 1995 bestseller Mind over Mood: it’s one-size-fits-all cognitive-behavioral therapy. Is such a thing appropriate?
(On Amazon, one reviewer lashes out against Mind over Mood: “All experiences have meaning and are rooted in a context. There is not [sic] substitute for seeking the support of a well trained, sensitive psychotherapist before using such books to ‘reprogram’ yourself. Remember, you’re a person, not a piece of computer software!” Still, for every comment like this, there are about thirty-five people saying that just following the steps outlined in the book changed their lives.)
There’s a Sting lyric in “All This Time” that’s always broken my heart: “Men go crazy in congregations / They only get better one by one.” Contemporary women, for instance, are all dunked into the same mass-media dye bath of body-image issues, and then each, individually and idiosyncratically and painfully, has to spend some years working through it. The disease scales; the cure does not.
But is that always necessarily so? There are times when our bodies are sufficiently different from others’ bodies that we have to be treated differently by doctors, though this doesn’t frequently go beyond telling them our allergies and conditions. But our minds: How similar are they? How site-specific does their care need to be?
Richard Bandler is the co-founder of the controversial “Neuro-Linguistic Programming” school of psychotherapy and is himself a therapist who specializes in hypnosis. One of the fascinating and odd things about Bandler’s approach—he’s particularly interested in phobias—is that he never finds out what his patient is afraid of. Says Bandler, “If you believe that the important aspect of change is ‘understanding the roots of the problem and the deep hidden inner meaning’ and that you really have to deal with the content as an issue, then probably it will take you years to change people.” He doesn’t want to know, he says; it makes no difference and is just distracting. He’s able to lead the patient through a particular method and, apparently, cure the phobia without ever learning anything about it.
It’s an odd thing, this: we often think of therapy as intimate, a place to be understood, profoundly understood, perhaps better than we ever have been. And Bandler avoids that understanding like—well, like ELIZA.
“I think it’s extremely useful for you to behave so that your clients come to have the illusion that you understand what they are saying verbally,” he says. “I caution you against accepting the illusion for yourself.”
Supplanted by Pure Technique
I had thought it essential, as a prerequisite to the very possibility that one person might help another learn to cope with his emotional problems, that the helper himself participate in the other’s experience of those problems and, in large part by way of his own empathic recognition of them, himself come to understand them. There are undoubtedly many techniques to facilitate the therapist’s imaginative projection into the patient’s inner life. But that it was possible for even one practicing psychiatrist to advocate that this crucial component of the therapeutic process be entirely supplanted by pure technique—that I had not imagined! What must a psychiatrist who makes such a suggestion think he is doing while treating a patient, that he can view the simplest mechanical parody of a single interviewing technique as having captured anything of the essence of a human encounter?