A Woman Looking at Men Looking at Women


by Siri Hustvedt


  Second, one has to read Breazeal’s passage carefully to glean that she is not claiming that Kismet learns anything and therefore is not socially intelligent in the sense that its synthetic nervous system develops over time through its encounters with others as a human infant’s does. It is “intelligently” responsive to various visual and auditory cues, but it is reliant on engineered programming for its “development.” In her book, Breazeal explains, “In the near future, these interaction dynamics could play an important role in socially situated learning for Kismet.”261 The near future is not now. The fact that a sophisticated scholar such as Elizabeth A. Wilson misses this point in her book Affect and Artificial Intelligence suggests to me that Breazeal’s wishes for future research are easy to conflate with her accomplishments in the present. Her prose is mushy on the question. Wilson misunderstands that Breazeal’s hopes for future robots have merged with the one she has already designed when she writes, “Kismet’s designers used these expressions and interactions they provoke with a human ‘caretaker’ as scaffolding for learning. Kismet’s intelligence was not programmed in, it grew out of Kismet’s embodied, affectively oriented interactions with others.”262 Although I sympathize with Wilson’s confusion, her statement is simply not true. In an interview with the New York Times, a journalist pointedly asked Breazeal if Kismet learned from people. “From an engineering standpoint, Kismet got more sophisticated. As we continued to add more abilities to the robot, it could interact with people in richer ways . . . But I think we learned mostly about people from Kismet”263 (my italics). What, in fact, did they learn?

  People treated Kismet as an interactive animated thing, which is exactly what it is, but does this tell us anything new about people? Human beings engage emotionally with fictive beings of all kinds, not just toys or robots but characters in novels, figures on canvases, the imaginary people actors embody in films and onstage, and avatars in many forms of interactive and virtual games. Are the feelings people have for a responsive mechanical head such as Kismet, whose author is Breazeal and her team, qualitatively different from the ones they have for, say, Jane Eyre or Elizabeth Bennet or Raskolnikov? This is not a rhetorical question. Arguably, in terms of complex emotional experiences, a good novel outstrips the rewards offered by Kismet. But then, no character on the page will nod or babble back if you talk to her or him.

  There is a peculiar form of myopia displayed by many people locked inside their own fields. Their vision has become so narrow they can no longer make the most obvious distinctions or connections. Human beings are responding to fictive beings all the time. Of course people will respond to a cute mechanical head that imitates human facial expressions! Human responsiveness to Kismet does not lend it actual feelings, allow it to learn, or bring the machine any closer to that desired state. The question of what is real and what is simulated or virtual, however, is an ongoing one, treated alternately with paranoia and celebration but only rarely thought through with, well, any “intelligence.”

  Brooks and Breazeal have employed embodied models for their robots, and it would be preposterous to say that they have not enjoyed success. Their creatures are marvels. They are impressive automatons. Do their robots feel anything more than my talking doll or the eighteenth-century defecating duck? Can human sentience be simulated? The gap between science fiction and reality has not been closed. What we can be sure of is that dreams of HAL and C-3PO have infected the mobots and the baby-like Kismet. Are feeling, sentient robots with “real emotions” just around the corner? Can silicon and aluminum and electrical wiring and cameras and software when cleverly designed mimic the motor-sensory-affective states that appear to be necessary for human learning and dynamic development in organic creatures, even very simple ones?

  In an essay first published in the Frankfurter Allgemeine Zeitung, David Gelernter, a professor of computer science at Yale and chief scientist at Mirror Worlds Technologies, who has identified himself as an “anti-cognitivist,” argues, “No computer will be creative unless it can simulate all the nuances of human emotion.” In the course of a few paragraphs, Gelernter refers to Rimbaud, Coleridge, Rilke, Blake, Kafka, Dante, Büchner, Shelley (Percy), and T. S. Eliot. In opposition to the computational model, he maintains, “The thinker and his thought stream are not separate.”264 This idea is very close to William James, who was the first to use the expression “stream of consciousness” and identify it with a self. Gelernter resembles a Pragmatist, not a Cartesian. He does not believe human thought can be equated with symbol manipulation. You need the whole person and her feelings. He is interested in creativity, in the continuums of human consciousness, and he is fairly optimistic that with time artificial intelligence may solve many problems involved in the imitation of human processes, including emotion. He does not believe, however, that an “intelligent” computer will ever experience anything. It will never be conscious. “It will say,” he writes, “ ‘that makes me happy,’ but it won’t feel happy. Still: it will act as if it did.”265 The distinction, it seems to me, is vital.

  I have no doubt that these artificial systems will become increasingly animate and complex and that researchers will draw on multiple theoretical models of human and animal development to achieve their goals, but it is obvious to me that the disagreements about what can be done in artificial intelligence rest not only on different philosophical paradigms but also on the imaginative flights one scientist takes as opposed to another, on what each one hopes to achieve and believes he or she can achieve, even when the road ahead is enveloped in heavy fog. Brooks, for example, does not tell us how he will create “real emotions” as opposed to “simulated” ones, but he tells us confidently that it is part of the future plan. Gelernter does not believe software will ever produce subjectivity or consciousness, but he believes simulations will continue apace. They are in fundamental disagreement, a disagreement that is shaped by their learning, their interests, and their fantasies about the future.

  I must emphasize that my discussion here does not deny the aid offered by increasingly complex computers that organize data in ways never dreamed of before in many fields. There are machines that produce answers to calculations much faster than any human being. There are robots programmed to mimic human expressions and interact with us, machines that beat the best of us at chess, and machines that can now scramble over rocks and onto Martian terrain. Nor am I arguing that mathematical or computational models should be banned from thinking about biology. Computers are now used to create beautiful, colorful simulations of biological processes, and these simulations can provide insights into how complex systems of many kinds work. Computational forms have become strikingly diverse and their applications are myriad. These stunning technological achievements should not prohibit us from making distinctions, however. And they should not lead to a confusion of a model for reality with reality itself. The fact that planes and birds fly does not destroy the differences between them, and those differences remain even when we recognize that fluid mechanics are applicable in both cases.

  Anyone who has pressed “translate” while doing research on the Internet knows that computer translations from one language to another are egregious. The garbled sentences that appear in lieu of a “translation” deserve consideration. Here is a sample from a Google translation from French to English taken from a short biography of the philosopher Simone Weil: “The strength and clarity of his thought mania paradox grows logical rigor and meditation in then ultimate consequences.” To argue that the nuances of semantic meanings have escaped this program’s “mind” is an understatement. Language may use rules, but it also involves countless ineffable factors that scientists have been unable to fix in a computational mode. Indeed, if language were a logic-based system of signs with a universal grammar that could be understood mathematically, then we should have beautiful computer translations, shouldn’t we?

  But computers do not feel meanings the way a human translator or interpreter does. From my perspective, this failure is revelatory. Language is not a disembodied code that machines can process easily. Words are at once outside us in the world and inside us, and their meanings shift over time and place. A word means one thing in one context and another thing elsewhere. A wonderful example of a contextual error involves a story I was told about the French translation of one of Philip Roth’s novels. A baseball game is described in the book. A player “runs home.” In English, this means that the runner touches home plate, a designated corner of the diamond, with his foot. In the French, however, the player took off for his own house.

  Newspaper articles can now be generated by computers—reports on sports events, fires, and weather disasters. The ones I have read are formulaic and perfectly legible, and they surely mark an advance in basic computer composition. They are stunningly dull, however. After reading three or four of these brief reports, I felt as if I had taken a sedative. Whether they could be well translated into other languages by computers is another question. Words accrue and lose meaning through a semantic mobility dependent on the community in which they thrive, and these meanings cannot be divorced from bodily sensation and emotion. Slang emerges among a circle of speakers. Irony requires double consciousness, reading one meaning and understanding another. Elegant prose involves a feeling for the rhythms and the music of sentences, a product of the sensual pleasure a writer takes in the sounds of words and the varying metric beats of sentences. Creative translation must take all this into account. If a meaning is lost in one sentence, it might be gained or added to the next one. Such considerations are not strictly logical. They do not involve a step-by-step plan but come from the translator’s felt understanding of the two languages involved.

  Rodney Brooks is right that the distance between real machines and the fictional HAL had not been breached in 2002, and it has not been breached now. In AI the imaginative background has frequently become foreground. Fantasies and fictions infect beliefs and ideas. Human beings are remembering and imaginative creatures. Wishes and dreams belong to science, just as they belong to art and poetry, and this is decidedly not a bad thing.

  An Aside on Niels Bohr and Søren Kierkegaard

  I offer a well-known comment by the physicist Niels Bohr, who made important discoveries about the character of the atom and early contributions to quantum theory: “When it comes to atoms,” Bohr wrote, “language can be used only as in poetry. The poet, too, is not nearly so concerned with describing facts as with creating images and establishing mental connections.”266 This remark links the poet and the physicist as imaginative beings. Bohr’s education is no doubt behind the fact that he links his own work to the poet’s work. The physicist loved poetry, especially the poems of Goethe, and he strongly identified himself with the great German artist and intellectual. Bohr continually quoted literary artists he admired. He read Dickens passionately and liked to conjure vivid pictures for entities in physics—electrons as billiard balls or atoms as plum puddings with jumping raisins—a proclivity that no doubt lies behind his idea that images serve physics as well as poetry.267 I find that images are extremely helpful for understanding ideas, and for many people a plum pudding with animated raisins is more vivid than images of tapes of code or hardware and software. It is not surprising either that Bohr felt a kinship with his fellow Dane Søren Kierkegaard, a philosopher who was highly critical of every totalizing intellectual system and of science itself when it purported to explain everything. For Kierkegaard, objectivity as an end in itself was wrongheaded, because it left out the single individual and subjective experience.

  In Concluding Unscientific Postscript, Kierkegaard’s pseudonym, Climacus, strikes out at the hubris of science. In his “introduction,” which follows his “preface,” Climacus lets an ironic arrow fly: “Honor be to learning and knowledge; praised be the one who masters the material with the certainty of knowledge, with the reliability of autopsy.”268 Although Concluding Unscientific Postscript is a work of critical philosophy, it also incorporates high parody of flocks of assistant professors who churn out one deadly paragraph and parenthesis after another, written for “paragraph gobblers” who are held in sway under the “tyranny of sullenness and obtuseness and rigidity.”269 Kierkegaard’s pseudonym is right that the so-called intellectual life in every field often kills the objects it studies. Nature, human history, and ideas themselves are turned into corpses for dissection. Mastery of material, after all, implies subjugation, not interaction, and a static object, not a bouncing one. After reading Stages on Life’s Way, Bohr wrote in a letter, “He [Kierkegaard] made a powerful impression on me when I wrote my dissertation at a parsonage on Funen, and I read his works day and night . . . His honesty and willingness to think the problems through to their very limit is what is great. And his language is wonderful, often sublime.”270

  Kierkegaard did take questions to their limit, to the very precipice of comprehension, and, at the edge of that cliff, he understood a jump was needed, a jump into faith. The difference between this philosopher and many other philosophers is that he knew a leap had to be made. He did not disguise the leap with arguments that served as systematic bridges. Kierkegaard was a Christian but one of a very particular kind. Like Descartes, he was impatient with received ideas. Unlike Descartes, he did not think one could reason one’s way into ultimate truths. Bohr’s close friend, the professor of philosophy Harald Høffding, wrote a précis on Kierkegaard and argued that no theory is complete, that contradictions are inevitable. “Neither [a secure fact nor a complete theory] is given in experience,” Høffding wrote, “nor can either be adequately supplied by our reason; so that, above and below, thought fails to continue, and terminates against an ‘irrational.’ ”271 Arguably, without what J. L. Heilbron calls Bohr’s “high tolerance for ambiguity,” he might not have made the leap in thought he had to make, a creative, imaginative leap that could encompass a paradoxical truth.272 Quantum theory would, after all, turn Newton’s clockwork into a weird, viscous, unpredictable dual state of waves and particles that was dependent on an observer. I do not pretend to understand how physicists arrived at quantum theory. As one kind young physicist said to me after I had exhausted him with questions, “Siri, it’s true that the metaphysics of physics is easier than the physics.” This statement is debatable, but its wit charms me nevertheless.

  I am also well aware that there are any number of contemporary scientists who look askance at the comments made by Bohr and other physicists of the time in a metaphysical vein. In Physics and Philosophy, Bohr’s friend Werner Heisenberg wrote, “Both science and art form in the course of the centuries a human language by which we can speak about the more remote parts of reality, and the coherent sets of concepts as well as the different styles of art are different words or groups of words in this language.”273 Thoughts, whether in philosophy or in science or in art, cannot be separated from the thinker, but they cannot be separated from the community of thinkers and speakers either. What articulated thoughts could we possibly have if we weren’t among other thinkers and speakers?

  Wet or Dry?

  The artificial intelligence project, as Turing saw it, was about reproduction, about reproducing the human in the machine, making autonomous brains or whole beings in a new way. He understood that interest in food, sex, and sports couldn’t be easily reproduced in such a machine, and he understood that sensation and locomotion played a role that might make the enterprise extremely difficult. Turing pursued embryology for a time, and, in 1952, he proposed a mathematical model for the growing embryo, one that continues to be debated among people in the field.274 Turing was keenly aware of the nature of models, and he knew he had stepped outside his field of expertise by creating one for biology. “This model,” he wrote, “will be a simplification and an idealization, and consequently a falsification. It is to be hoped that the features retained for discussion are those of greatest importance in the present state of knowledge.”275 Heisenberg once said, “We have to remember that what we observe is not nature herself, but nature exposed to our method of questioning.”276 A model, whether mathematical or otherwise, is useful to frame a question about the world, but that does not mean the model is the world.

  In science there are ways of testing models. Some work and others don’t. Some models, such as string theory in physics, may be proven at a future date but at present remain purely theoretical. Mind as information-processing machine has passionate advocates and opponents. In a textbook titled From Computer to Brain: Foundations of Computational Neuroscience (2002), William Lytton explains the benefits of models. Like McCulloch and Pitts, von Neumann, and Turing, Lytton argues for simplification. Lytton’s book rests on the unquestioned assumption that the mind is a computer, a hypothesis that over the course of half a century has atrophied into a truth he knows will not be questioned by his computational peers. Again, isn’t this what Goethe worried about? Notice how different Lytton’s tone is from that of all the earlier scientists mentioned above, who were quick to say they knew actual brain processes were far more complex than their models. Notice, too, the cutting, paring metaphor Lytton uses to describe the process.

  We show how concept neurons differ from real neurons. Although a recitation of these differences makes it look like these are lousy models, they are not. The concept neurons are attempts to get to the essence of neural processing by ignoring irrelevant detail and focusing only on what is needed to do a computational task. The complexities of the neuron must be aggressively pared in order to cut through the biological subtleties and really understand what is going on277 (my italics).

 
