The hypothesis [postulates] that intentional states, such as beliefs and desires, are relations between cognizers [the person or machine that thinks and perceives] and symbolic mental representations that have syntax and semantics analogous to those of natural languages. A natural language such as English or Swahili stands in contrast to a formal language such as mathematics. It also postulates that intelligent thought (indeed cognition in general) amounts to carrying out algorithmic operations over such representations, i.e., Turing-computable operations that can be specified by formal rules in terms of the syntax of the underlying mental representations. The CTM has been the fundamental working hypothesis of most AI research to date, certainly of all research in the GOFAI tradition.234
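To make concrete what “formal rules defined over the syntax of mental representations” amounts to, here is a minimal, hypothetical sketch in Python. The “beliefs” and the single inference rule are invented for illustration and are not drawn from any particular GOFAI system; the point is only that the rule fires by matching the shape of symbol structures, never by consulting what they mean.

```python
# Hypothetical toy illustration of the CTM/GOFAI picture: "thought" as
# rule-governed manipulation of symbol structures. Everything here is
# invented for the sake of the example.

# "Mental representations" as nested tuples with an explicit syntax.
beliefs = {
    ("raining",),
    ("if", ("raining",), ("streets_wet",)),
}

def infer_step(beliefs):
    """Apply one formal rule (modus ponens) purely to the syntax of the symbols:
    if ("if", P, Q) and P are both present, add Q, whatever Q may mean."""
    derived = set(beliefs)
    for b in beliefs:
        if b[0] == "if" and b[1] in beliefs:
            derived.add(b[2])
    return derived

print(infer_step(beliefs))  # the set now also contains ('streets_wet',)
```

On this picture, getting from one toy rule to a mind is assumed to be a matter of more representations and more rules of the same general kind, which is precisely the assumption Dreyfus attacks below.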
CTM became the route to building actual robots, not Dawkins’s metaphorical ones that haul around genes.
In his introduction to What Computers Still Can’t Do (1992), the third edition of a book that was first published in 1972, Hubert Dreyfus pronounced the original aims of GOFAI dead. “Almost half a century ago,” he writes, “computer pioneer Alan Turing suggested that a high-speed digital computer, programmed with rules and facts, might exhibit intelligent behavior . . . the research program based on the assumption that human beings produce intelligence using facts and rules has reached a dead end, and there is no reason to think it could ever succeed.”235 Artificial intelligence has, in fact, run into one dead end after another, although you would never know it from watching movies or reading the newspapers. The cultural fantasy that we are on the brink of living with artificial people with brains like ours, people who not only think like us but move and feel the way we do, continues to exert a powerful hold on the collective imagination.
In 2012, twenty years after Dreyfus declared Turing’s project dead, the Oxford physicist David Deutsch echoed Dreyfus’s sentiments about the history of AI research in an essay, “Creative Blocks: The Very Laws of Physics Imply that Artificial Intelligence Must Be Possible. What’s Holding Us Up?” He writes, “The field of ‘artificial general intelligence’ or AGI . . . has made no progress whatever during the entire six decades of its existence.”236 Despite admitting to the failures of AI, Deutsch has not despaired. In fact, his essay is brimming with confidence that sensual, imaginative, artificial people will be part of our future. The divide between Dreyfus and Deutsch is foundational or paradigmatic. Their positions rest on different foundations, on different underlying assumptions that drive their work. For Dreyfus, it is obvious that AI has failed because the mind is not a computer carrying out algorithmic operations and, no matter how many rules and facts are fed into the machine, it will not wake up and become like us because that is not how the human mind works. For him, the whole body and its movements are necessarily involved in mental operations.
Deutsch believes the original idea is sound because the physics is sound. The problem lies in its execution. In his essay, he takes the reader back to Charles Babbage and Ada Lovelace in the nineteenth century and the plans they made for the Analytical Engine, which some regard as a precursor to Turing’s machine, although, according to Hodges, Turing paid little attention to it in making his own breakthrough. In the extensive explanatory notes Lovelace made when she translated an article by an Italian scientist on the Analytical Engine in 1843, she explicated a method for calculating the Bernoulli numbers with the machine. These notes earned her a reputation as the world’s first computer programmer. In his paper “Computing Machinery and Intelligence” (1950), Turing specifically addresses Lady Lovelace’s “objection” that machines can’t “originate anything.”237 Deutsch was no doubt conscious that his commentary would resonate with Turing’s. Deutsch writes:
They [Lovelace and Babbage] knew that it could be programmed to do algebra, play chess, compose music, process images and so on . . . But could the Analytical Engine feel the same boredom? Could it feel anything? Could it want to better the lot of humankind (or of Analytical Enginekind)? Could it disagree with its programmer about its programming? Here is where Babbage and Lovelace’s insight failed them. They thought that some cognitive functions of the human brain were beyond the reach of computational universality. As Lovelace wrote, “The Analytical Engine has no pretensions whatever to originate any thing. It can do whatever we know how to order it to perform. It can follow analysis; but it has no power of anticipating any analytical relations or truths.”
And yet “originating things,” “following analysis,” and “anticipating analytical relations and truths” are all behaviours of brains and, therefore, of the atoms of which brains are composed. Such behaviours obey the laws of physics. So it follows inexorably from universality that, with the right program, an Analytical Engine would undergo them too, atom by atom and step by step. True, the atoms in the brain would be emulated by metal cogs and levers rather than organic material—but in the present context, inferring anything substantive from that distinction would be rank racism.238
This passage gives remarkable insight into the thought processes behind a dematerialized theory of mind. Brains are not considered. The universality of computation means that the human mind can theoretically be emulated by “some program on a general-purpose computer, provided it is given enough time and memory” (my italics). It must proceed logically—atom by atom and step by step. The universal laws of physics require it. Deutsch is a sophisticated physicist who has done groundbreaking work on quantum computation, contributions that are indisputable and have already had an impact on scientists doing research in several fields. Deutsch believes in the widely held but impossible-to-prove idea of the multiverse, that there are many universes. Although the two are hardly identical as theories, the idea makes me think of my childhood fantasies about being inside another person’s dream inside that person’s dream inside yet another person’s dream. Deutsch also believes human beings will be able to replace their bodies with computer simulations sometime in the future. Like Wiener, who fantasized that the pattern of a man could one day be telegraphed from one place to another, Deutsch is convinced we will become immortal via computation.
The debates about whether minds function the way computers do are ongoing. Attacks and defenses are launched from inside and outside science. Indeed, it makes me wonder what intelligence actually is. Are computers smart or are they stupid? My computer’s program for spelling and grammar is stupid for the simple reason that it is rigid and writing well is not. Among many other failings, every time I use the passive voice, it corrects me. The program cannot have it both ways, but there are times when I want to emphasize what is acted upon as opposed to what acts. My computer has no sense of these nuances because it cannot judge them. On the other hand, I can call up books and papers in an instant on the most abstruse subjects. This still feels miraculous. Does my thinking, writing brain really work like a digital computer?
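The rigidity described above is easy to reproduce. What follows is a hypothetical, deliberately naive passive-voice rule of the kind a checker might apply; it is not the actual rule any word processor uses. It flags every form of “to be” followed by a participle, with no way of judging whether the writer wanted, for once, to emphasize what is acted upon rather than what acts.

```python
import re

# Hypothetical, deliberately naive passive-voice rule; not the actual rule
# any real grammar checker uses. It flags every "to be" + participle it
# finds, with no sense of when the passive is the better choice.
PASSIVE = re.compile(
    r"\b(is|are|was|were|be|been|being)\s+\w+(?:ed|en)\b",
    re.IGNORECASE,
)

def check(sentence):
    return "consider using the active voice" if PASSIVE.search(sentence) else "ok"

print(check("The manuscript was rejected by three publishers."))  # flagged
print(check("Three publishers rejected the manuscript."))         # ok
```

A rule of this kind cannot tell the lazy passive from the deliberate one; that judgment requires exactly the sense of context and emphasis the program lacks.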
John Searle has insisted that there is a fundamental difference between the way computers process information and the way brains do it. For the computer there is always an outside agent who codes information and then interprets it syntactically and semantically, a point that reverberates with the ambiguities inherent in the notion of a “genetic program.” But the brain, as he points out, is not “observer relative.” In the case of visual experience, for example, “the biological reality is not that of a bunch of words or symbols being produced by the visual system; rather it is a matter of a concrete specific conscious visual event—this very visual experience.” To rephrase this: the machine is not having any kind of experience. And what about the universal laws of physics? Searle contends, “Computational states are not discovered within the physics, they are assigned to the physics.”239 There have been hundreds of responses to Searle explaining why he is dead wrong. Deutsch, for one, would disagree vehemently. He offers severe criticism of current approaches to artificial intelligence and proposes remedies. Nevertheless, he believes that through an “inexorable” sequential motion, scientists will build, by cogs and wheels, an intelligent living being. Even if the AGI project has repeatedly failed to achieve anything in sixty years, this essential truth remains unmodified.
Dreyfus’s critique echoes Lovelace. In the 1992 introduction, he argues that what came to be called “the common sense knowledge problem” in AI, the seemingly intractable problems researchers had in trying to get machines to be more like human beings, was not a problem of representing common sense symbolically, but rather a problem of what Dreyfus calls human “know-how,” a know-how that does not lend itself to being computed because it involves an implicit bodily relation to our environments.
The problem precisely was that this know-how, along with all the interests, feelings, motivations, and bodily capacities that go to make a human being, would have had to be conveyed to the computer as knowledge—as a huge and complex belief system—and making our inarticulate, preconceptual background understanding of what it is like to be a human being explicit in a symbolic representation seemed to me a hopeless task.240
There is an essential gap between the idea of the human being as a computing machine of symbolic information (the mental processes of which can theoretically be translated or replicated in another, but nonorganic, machine) and as something quite different, an embodied person who knows a great deal about the world preconceptually and nonsymbolically through her experience of moving around in it.
We have highly developed sensory and motor skills that appear not to rely on either concepts or symbols. Thinking about a person this way means getting much closer to actual human experience than physics does. Theoretical physicists are searching for essences, the laws that must be at the bottom of the universe as a whole, but, despite the fact that physics is obviously involved, they are not worrying much anymore about how someone actually gets from the bedroom to the bathroom.
Let us take a simple example of preconceptual, prereflective, or what Michael Polanyi in his book Personal Knowledge: Towards a Post-Critical Philosophy calls “tacit” knowledge, an unarticulated form of knowing, which we share with other animals.241 When I make my way through a dark but familiar room in my house at night, avoiding chairs and tables and then finding a light switch, how do I do it? Is my agility in the dark something that can be represented in symbolic code or is much of what I am doing the product of simply having moved for a long time in that particular space so that my maneuvering must be thought of as a different kind of knowledge, a knowledge that is not born of symbols or even concepts? If the body and its movements play an important role in intelligence, then GOFAI is doomed because it assumes that mental “processes” are independent of our moving bodies. This echoes Descartes: the rational thinking mind is what matters, and the body is at best its tool.
Dreyfus is a philosopher whose insights derive not from Anglo-American analytical philosophy but from phenomenology, from a body that is not regarded as an objective thing but as a lived situation, what Husserl called Leib, the experience of the body from within. Heidegger used the same word, Leib, to refer to lived bodily experience and its horizon—that is, its borders don’t end with the top of one’s head or with one’s feet or fingertips but extend into the space of a person’s actions. Leib is part of one’s larger perceptual and active experience. The French phenomenological philosopher Maurice Merleau-Ponty wrote extensively on the role of the human body in perception and attacked the isolated Cartesian cogito in favor of what he called “lived perception” and “incarnate consciousness.” The body, he argued, is the very condition for our perception of the world and our meaningful understanding of it. “The primary truth is indeed ‘I think,’ ” Merleau-Ponty writes in the Phenomenology of Perception, “but only provided that we understand thereby ‘I belong to myself’ while belonging to the world.”242 In this conception of being in the world, our body is inseparable from and informs our thoughts. It is a dynamic reality lived as a body situation from that body’s perspective.
Following Husserl, Merleau-Ponty expressly links one person’s life to another’s; we are enmeshed in other lives and bodies in intersubjectivity, our fundamental relations with other people, some of which are symbolized and some of which are not. After all, babies do not have symbols and can’t talk, but I believe they are conscious as feeling, sensing bodies dependent on other bodies to stay alive.
Machines, Emotions, and Bodies
As a girl, I loved my dolls. I made them talk, nod, dance, and wave good-bye. They suffered, loved, fought, wept, laughed, and had long soulful and spiteful conversations with one another. In my childhood, I owned one doll that talked. When a string at the back of its neck was pulled and released, it would whine “Mommy” or “I’m hungry” or “Play with me.” These utterances were of no use in the elaborate games I liked to play, but worse, they seemed to emphasize rather than diminish the fact that the doll was just a hollow, plastic, dead thing, and I found its voice unsettling. I was the animator of my inanimate world, and when I was deep in play, that world mingled with the real world and seemed to enchant it with a breath of magic. I used to imagine that my dolls came to life at night, and sometimes I would inspect them very closely in the morning for a sign that they had moved while I was asleep and had had adventures without me. At those moments I half believed they might have moved, and the feeling it gave me was a mixture of longing and dread.
The fantasy of the living doll long predates artificial intelligence, and it is closely related to play, creativity, the imagination, and making art. Daedalus is said to have produced statues so lifelike that they moved. The cold marble flesh of Galatea warms and softens under Pygmalion’s touch. His desire brings her to life. In these stories, the two sides of the old binary, nature and nurture, change places. Learned skill is turned into nature itself; artifice is not a copy of life anymore but life itself. Turing’s dream of the machine that “competes with men” has many precursors. There are stories of automata in ancient Egypt, in the third century B.C. in China, as well as in ancient Greece. In his Book of Knowledge of Ingenious Mechanical Devices, the late twelfth-, early thirteenth-century Islamic scholar and engineer al-Jazari described in detail several machines powered by water or by candle heat, including a band of musicians that played their instruments in a boat and the figure of a girl who walked through a door to serve drinks. In the fifteenth century, the Rood of Grace, a figure of Jesus on the cross, rolled its eyes and moved its lips and body for pious pilgrims who came to the Cistercian abbey at Boxley in Kent, until one of Cromwell’s men exposed its wheels and pulleys and an angry crowd burned up the once holy doll. Leonardo da Vinci presented an automaton lion to the king of France in 1515, and he designed a moving knight that could sit, stand, and adjust his visor. In the eighteenth century, Vaucanson’s eating, digesting, defecating duck impressed large audiences with its complicated mechanics. The famous chess-playing Turk, a machine created by Hungarian inventor Wolfgang von Kempelen, toured Europe and repeatedly won against its opponents. The clever clockwork hid a human chess player. Not until Deep Blue beat Kasparov would a machine actually play masterful chess.
We have countless examples of both feeling and unfeeling machine-like beings in contemporary fictions. The computer HAL in the film 2001 is not programmed to feel anything, but over the course of the film he develops emotions and self-consciousness. Mr. Spock is a machine-like, emotion-free alien, a recent edition of the wholly rational man. R2-D2, on the other hand, is a cute little machine. Science fiction teems with examples of cold and warm robots or aliens. Among my favorites are the pod people in the 1956 movie Invasion of the Body Snatchers. One person after the other is “snatched” by extraterrestrial powers and, in a nod to Mendel, copied in a human-sized pea pod, only to emerge drained of all emotion, despite the fact that he or she looks exactly like his or her old self. In the film, being human is synonymous with having feelings for other people, especially love feelings, an idea Hollywood has pressed upon us with nauseating repetitiveness. The inverse of that sentimentality is the unfeeling double, the monster, doll, or robot: the alien other as mirror reflection. If emotion plays an important part in thought, will machines ever feel?
There are human beings for whom emotional connections to others never developed, were lost, or have been compromised in some way. Patients with Capgras syndrome suffer from the delusion that a beloved person is a double, a kind of pod person, if you will. This strange affliction may be due to brain damage that causes the person to lose the familiar feeling of intimacy we have for those we love, which is something that can be measured. Galvanic skin response is a simple way to gauge emotional arousal. Autism, an illness that I believe is too broadly defined, is in part characterized by difficulties in reading and understanding the facial expressions and intonations of other people and the nuances of social meanings in general.
Lesions to the brain’s frontal lobe can result in a strange lack of feeling not only for other people but for one’s self. I suspect this is a problem with reflective self-consciousness. What was gained in development is lost in injury. If you have difficulty seeing yourself as a potential object of sympathy, you are bound to have all kinds of problems negotiating the world of other human beings. The psychopath who cheats and lies and even murders without remorse, who seems to lack all empathy for others, but who may hide behind a friendly, perhaps even seductive exterior, holds an enduring fascination in our culture.
The psychopath, the ruthless robot, and the zombie might be said to play similar roles in our stories as hollow simulators of genuine feeling. The zombie, an animated corpse, resembles the empty doll that begins to breathe and yet remains an inhuman, indeed nonanimal, thing. Put a knife in the hand of an innocent-looking doll that can walk, and you have a horror movie. Surely these figures are related to the uncanny sense that in death the “person” has left or departed and what remains is just a thing, dead matter. Aristotle’s form has gone missing. The “he” or “she” is replaced by an “it.” The dead body is carted away, buried or burned; it is waste. The fascination with constructing machines that think and feel is related both to birth and resurrection wishes, to the nonbiological creation of a “real” being and to the reanimation of the dead body, but also, I think, to the artist’s wish to make something that will survive, that will last beyond the grave or beyond incinerated ashes. Can scientists create atom by atom and step by step an intelligent, experiencing, emotional being without an organic body? And will it be Pygmalion’s Galatea or Frankenstein’s monster?