Love and Sex with Robots: The Evolution of Human-Robot Relationships
This experiment found that older people get more pleasure from the responses of the robot cat (its “meows”) than do younger people when they touch it. This was attributed to the fact that younger people use cell phones, computers, and household devices more intensively than their elders do and generally experience a greater enjoyment of technology. Another finding was that men get more pleasure than do women from playing with NeCoRo, generally experiencing more excitement when the cat turns its head, opens and closes its eyes, and changes its posture. This bias seems likely to be a symptom of the fact that men, more than women, enjoy interaction with computers, though further research is necessary to test this assumption. Similarly, further experiments will be needed to explain another of the Libins’ results: that the American subjects in their experiment enjoyed touching the cat more and obtained more pleasure from the way the cat cuddled them when they were stroking it than did the Japanese subjects. This could be because cats are more popular as pets in American homes than they are in Japan, an explanation given credence by yet another of the Libins’ experimental findings, that the degree to which someone likes pets influences the way that they interact with the robotic cat and the enjoyment received from picking it up and stroking it.
Experimental results such as these will help guide robopsychologists toward a greater understanding of human-computer and human-robot interactions, by providing data to assist the robot designers of the future in their goal of making robots increasingly acceptable as friends and partners for humans. As the human and artificial worlds continue to merge, it will become ever more important to study and understand the psychology of human-robot interaction. The birth of this new area of study is a natural consequence of the development of robot science. Our daily lives bring us more frequent interaction with different kinds of robots, whether they be Tamagotchis, robot lawn mowers, or soccer-playing androids. These robots are being designed to satisfy different human needs, to help in tasks such as education and therapy, tasks hitherto reserved for humans. It is therefore important to study the behavior of robots from a psychological perspective, in order to help robot scientists improve the interactions of their virtual creatures with humans.
Much of the early research in this field has been carried out with children, as this age group is more immediately attracted to robot pets than are their parents and grandparents. One of the first findings from this research was intuitively somewhat obvious but nevertheless interesting and useful in furthering good relations between robots and humans. It was discovered that children in the three-to-five age group are more motivated to learn from a robot that moves and has a smiling face than from a machine that neither moves nor smiles. As a result of recognizing these preferences, the American toy giant Hasbro launched a realistic-looking animatronic robot doll called My Real Baby that had soft, flexible skin and other humanlike features. It could exhibit fifteen humanlike emotions by changing its facial expressions—moving its lips, cheeks, and forehead—blinking, sucking its thumb, and so forth. By virtue of these features, it could frown, smile, laugh, and cry.
The appeal to children of My Real Baby lies in its compatibility with them, a compatibility that breeds companionship. And the shape and appearance of a robot can have a significant effect on the level of this compatibility. A study at the Sakamoto Laboratory at Ochanomizu University in Japan investigated people’s perceptions of different robots—the AIBO robotic dog and the humanoid robots ASIMO and PaPeRo—and explored how these perceptions compared with the way the same group of people perceive humans, animals, and inanimate objects. One conclusion of the study was that appearance and shape most definitely matter—people feel more comfortable when in the company of a friendly-shaped, humanlike robot than when they are with a robotic dog.
In chapter 3 we discussed the use of ethology, the study of animals in their natural setting, as a basis for the design and programming of robot animals. Since humans are also a species of animal, it would seem logical to base the design and programming of humanoid robots on the ethology of the human species, but unfortunately the ethological literature for humans is nowhere near as rich as it is for dogs, and what literature there is on human ethology is mainly devoted to child behavior. For this reason the developers of Sony’s SDR humanoid robot have adapted the ethological architecture used in the design of AIBO, an architecture that contains components for perception, memory, and the generation of animal-like behavior patterns, adding to it a thinking module* to govern its behavior. SDR also incorporates a face-recognition system that enables the robot to identify the face of a particular user from all the faces it has encountered, a large-vocabulary speech recognition system that allows it to recognize what words are being spoken to it, and a text-to-speech† synthesizer allowing it to converse using humanlike speech.
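To make that kind of layered design more concrete, here is a minimal sketch in Python of how perception, memory, and a deliberative "thinking" layer might be tied together. The class names, methods, and rules are illustrative assumptions of mine, not a description of Sony's actual SDR software.

```python
# Illustrative sketch only: a toy layering of the components described in the
# text (perception, memory, behavior generation, plus a deliberative
# "thinking" layer). The names and rules are assumptions made for
# illustration and do not reflect Sony's actual SDR architecture.

from dataclasses import dataclass, field
from typing import Optional


@dataclass
class Percept:
    face_id: Optional[str]    # output of a face-recognition component, if any
    utterance: Optional[str]  # output of a speech-recognition component, if any


@dataclass
class Memory:
    known_faces: set = field(default_factory=set)

    def remember(self, percept: Percept) -> None:
        if percept.face_id:
            self.known_faces.add(percept.face_id)


class ThinkingModule:
    """Deliberative layer that arbitrates over the reactive behavior patterns."""

    def choose_behavior(self, percept: Percept, memory: Memory) -> str:
        if percept.face_id and percept.face_id in memory.known_faces:
            return f"greet {percept.face_id} as a familiar face"
        if percept.utterance:
            return "reply to the utterance via text-to-speech"
        return "run an idle, animal-like behavior pattern"


def control_step(percept: Percept, memory: Memory, thinker: ThinkingModule) -> str:
    action = thinker.choose_behavior(percept, memory)  # deliberate first
    memory.remember(percept)                           # then update memory
    return action


if __name__ == "__main__":
    memory, thinker = Memory(), ThinkingModule()
    print(control_step(Percept("alice", None), memory, thinker))     # unfamiliar face
    print(control_step(Percept("alice", "hello"), memory, thinker))  # now recognized
```

The point of the sketch is simply the division of labor: reactive, animal-like behavior patterns remain available, while the thinking layer selects among them using what the perception and memory components report.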
Emotions in Humans and in Robots
Building a robot sufficiently convincing to be almost completely indistinguishable from a human being—a Stepford wife, but without her level of built-in subservience—is a formidable task that will require a combination of advanced engineering, computing, and artificial-intelligence skills. Such robots must not only look human, feel human, talk like humans, and react like humans, they must also be able to think, or at least to simulate thinking, at a human level. They should have and should be able to express their own (artificial) emotions, moods, and personalities, and they should recognize and understand the social cues that we exhibit, thereby enabling them to measure the strengths of our emotions, to detect our moods, and to appreciate our personalities. They should be able to make meaningful eye contact with us and to understand the significance of our body language. From the perspective of engendering satisfying social interaction with humans, a robot’s social skills—the use of its emotional intelligence—will probably be even more important than its being physically convincing as a replica human.
Lest I be accused of glossing over a fundamental objection that some people have to the very idea that machines can have emotions, I shall here summarize what I consider to be the most important argument supporting this notion.* Certainly there are scholars whose views on this subject create doubts in the minds of many: How can a machine have feelings? If a machine does not have feelings, what value can we place on its expressions of emotion? What is the effect on people when machines “pretend” to empathize with their emotions? All of these doubts and several others have attracted the interest of philosophers for more than half a century, helping to create something of a climate of skepticism.
To my mind all such doubts can be assuaged by applying a complementary approach like that of Alan Turing when he investigated the question, “Can machines think?”† Turing is famous in the history of computing for contributions ranging from leading the British team that cracked the German codes during World War II to coming up with the solution to a number of fundamental issues on computability. But it was his exposition of what has become known as the “Turing test” that has made such a big impact on artificial intelligence and which enables us, in my view, to answer all the skeptics who pose questions such as, “Do machines have feelings?”
The Turing test was proposed as a method of determining whether a machine should be regarded as intelligent. The test requires a human interrogator to conduct typed conversations with two entities and then decide which of the two is human and which is a computer program. If the interrogator is unable to identify the computer program correctly, the program should be regarded as intelligent. The logical argument behind Turing’s test is easy to follow—conversation requires intelligence; ergo, if a program can converse as well as a human being, that program should be regarded as intelligent.
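As a purely illustrative rendering of that procedure, the toy harness below hides a "program" and a "human" respondent behind anonymous labels and asks a judge to name the program. Both respondents and the judge here are trivial stand-ins assumed for the sketch; in the real test the interrogator and one respondent are humans exchanging typed messages.

```python
# Sketch only: the structure of the imitation game reduced to a toy harness.
# The respondents and the judge are trivial stand-ins assumed purely for
# illustration, not anything from Turing's paper.

import random


def human_respondent(question: str) -> str:
    return "I'd rather talk about the weather, to be honest."


def program_respondent(question: str) -> str:
    return "I'd rather talk about the weather, to be honest."  # mimics the human


def run_trial(judge, questions) -> bool:
    """Return True if the judge fails to identify the program."""
    program_label = random.choice(["A", "B"])  # hide the program behind a label
    respondents = {
        program_label: program_respondent,
        ("B" if program_label == "A" else "A"): human_respondent,
    }
    transcripts = {
        label: [(q, reply(q)) for q in questions]
        for label, reply in respondents.items()
    }
    guess = judge(transcripts)   # the judge names the label it believes is the program
    return guess != program_label


if __name__ == "__main__":
    questions = ["What do you enjoy doing on a rainy afternoon?"]
    coin_flip_judge = lambda transcripts: random.choice(list(transcripts))
    fooled = sum(run_trial(coin_flip_judge, questions) for _ in range(1000))
    print(f"judge fooled in {fooled} of 1000 trials")  # about half, for a guessing judge
```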
To summarize Turing’s position, if a machine gives the appearance of being intelligent, we should assume that it is indeed intelligent. I submit that the same argument can equally be applied to other aspects of being human: to emotions, to personality, to moods, and to behavior. If a robot behaves in a way that we would consider uncouth in a human, then by Turing’s standard we should describe that robot’s behavior as uncouth. If a robot acts as though it has an extroverted personality, then with Turing we should describe it as being an extrovert. And if, like a Tamagotchi, a robot “cries” for attention, then the robot is expressing its own form of emotion in the same way as a baby does when it cries for its mother. The robot that gives the appearance, by its behavior, of having emotions should be regarded as having emotions, the corollary of this being that if we want a robot to appear to have emotions, it is sufficient for it to behave as though it does. Of course, a robot’s programmed emotions might differ in some ways from human emotions, and robots might even evolve their own emotions, ones that are very different from our own. In such cases, instead of understanding, through empathy and experience, the relationship of a human emotion to the underlying causes, we might understand nothing about robotic emotions except that on the surface they resemble our own. Some people will not be able to empathize with a robot that is frowning or grinning—they will be people who interpret the robot’s behavior as nothing more than an act, a performance. But as we come to recognize the various virtual emotions and experiences that lie behind a robot’s behavior, we will feel less and less that a robot’s emotions are artificial.
Our emotions are inextricably entwined with everything we say and do, and they are therefore at the very core of human behavior. For robots to interact with us in ways that we appreciate, they, too, must be endowed with emotions, or at the very least they must be made to behave as though they have emotions. Sherry Turkle has found that children deem simple toys, such as Furby, to be alive if they believe that the toy loves them and if they love the toy. On this basis the perception of life in a humanoid robot is likely to depend partly on the emotional attitude of the user. If users believe that their robot loves them, and that they in turn love their robot, the robot is more likely to be seen as alive. And if a robot is deemed to be alive, it is more likely that its owner will develop increased feelings of love for the robot, thereby creating an emotional snowball. But before robot designers can mimic emotional intelligence in their creations, they must first understand human emotions.
Human emotions are exhibited in various ways—in the changes in our voice, in the changes to our skin color when we blush, in the way we make or break eye contact—and robots therefore need similar cues to help express their emotions. Just as facial expressions and vocal sounds are used as a matter of course, instinctively and subconsciously, when humans communicate with other humans, so emotionally expressive robots employ similar forms of communication to convey their simulated emotions to their human users.
Many studies have shown that the activity of the facial muscles in humans is related to our emotional responses. The muscle that draws up the corners of the lips when we smile* is associated with positive experiences, while the muscle that knits and lowers the brows when we frown† is associated with negative ones. Much of today’s research into the use of facial expression in computer images and robots stems from a coding system developed during the 1970s by Paul Ekman, a psychologist at the University of California at San Francisco. Ekman classified dozens of movements of the facial muscles into forty-four “action units”—components of emotional expression—each combination of these action units corresponding to a different variation on a basic facial expression such as anger, fear, joy, or surprise. It has been shown as a result of Ekman’s work that the creation of emotive facial expressions is relatively easy to simulate in an animated character or a robot, while research at MIT has revealed that humans are capable of distinguishing even simple emotions in an animated character by observing the character’s facial expressions. The recognition, by a machine, of these various action units can therefore be converted to the recognition of a human emotional state. And the simulation of a combination of action units becomes the simulation, in a robot or on a computer screen, of a human emotion. Yes, this is an act on the part of the robot, but as time goes on, the act will become increasingly convincing, until it is so good that we cannot tell the difference.
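To make the idea concrete, here is a minimal sketch in Python of recognizing an emotion from a set of detected action units. The particular AU numbers and combinations are simplified, commonly cited prototypes chosen for illustration, not a faithful rendering of Ekman's full coding system.

```python
# Sketch only: recognizing an emotional state from detected facial "action
# units" (AUs). The AU combinations below are simplified, commonly cited
# prototypes (e.g. happiness ~ cheek raiser + lip corner puller) and are
# illustrative, not a complete Facial Action Coding System.

EMOTION_PROTOTYPES = {
    "happiness": {6, 12},        # cheek raiser, lip corner puller
    "surprise":  {1, 2, 5, 26},  # brow raisers, upper lid raiser, jaw drop
    "sadness":   {1, 4, 15},     # inner brow raiser, brow lowerer, lip corner depressor
    "anger":     {4, 5, 7, 23},  # brow lowerer, lid raiser/tightener, lip tightener
}


def classify_emotion(detected_aus: set) -> str:
    """Return the emotion whose prototype best overlaps the detected AUs."""
    best_label, best_score = "neutral", 0.0
    for label, prototype in EMOTION_PROTOTYPES.items():
        overlap = len(detected_aus & prototype) / len(prototype)
        if overlap > best_score:
            best_label, best_score = label, overlap
    return best_label


if __name__ == "__main__":
    print(classify_emotion({6, 12}))     # -> happiness
    print(classify_emotion({1, 4, 15}))  # -> sadness
    print(classify_emotion(set()))       # -> neutral
```

Run in the other direction, the same table can drive simulation: pick an emotion, activate the corresponding action units on an animated face or a robot head, and the expression appears.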
The study of emotions and other psychological processes is a field that predates the electronic computer, providing researchers in robotics with a pool of research into which they can tap for ideas on how best to simulate these processes in robots. If we understand how a particular psychological process works in humans, we will be able to design robots that can exhibit that same process. And just as being human endows us with the potential to form companionable relationships, this same potential will be designed into robots to help make them sociable. Some would argue that robot emotions cannot be “real” because they have been designed and programmed into the robots. But is this very different from how emotions work in people? We have hormones, we have neurons, and we are “wired” in a way that creates our emotions. Robots will merely be wired differently, with electronics and software replacing hormones and neurons. But the results will be very similar, if not indistinguishable.
An example of a robot in which theories from human psychology have been synthesized is Feelix, a seventy-centimeter-tall humanoid robot designed at the University of Århus and built with Lego bricks. A user interacts with Feelix by touching its feet. One or two short presses on the feet make Feelix surprised if they immediately follow a period of inactivity, but when the presses become shorter and more intense, Feelix becomes afraid, whereas a moderate level of stimulation, achieved by gentle, long presses on its feet, makes Feelix happy. But if the long presses become more intense and sustained, Feelix becomes angry, reverting to a happier state and a sense of relief only when the anger-making stimulation ceases.
Feelix was endowed with five of the six “basic emotions” identified by Paul Ekman: anger, fear, happiness, sadness, and surprise.* All five emotions have the advantage that they are associated with distinct corresponding facial expressions that are universally recognized, making it possible to exhibit the robot’s emotions partly by simulating those facial expressions. Anger, for example, is exhibited by having Feelix raise its eyebrows and moderately open its mouth, with its upper lip curved downward and its lower lip straight, while happiness is shown by straight eyebrows and a wide closed mouth with the lips bent upward. When it feels no emotion—that is, when none of its emotions is above its threshold level—Feelix displays a neutral face. But when it is stimulated in various ways, Feelix becomes emotional and displays the appropriate facial expression.
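The description above amounts to a simple mapping from stimulation patterns to emotional states. The sketch below captures that idea in Python; the numeric thresholds and the exact rules are illustrative assumptions of mine, not Feelix's actual control program.

```python
# Sketch only: a toy rule set inspired by the description of Feelix's
# foot-press interaction. The thresholds and rules are illustrative
# assumptions, not the robot's real software.

from dataclasses import dataclass


@dataclass
class Press:
    duration: float    # seconds the foot was pressed
    intensity: float   # 0.0 (gentle) .. 1.0 (intense)


def feelix_emotion(presses, after_inactivity: bool) -> str:
    if not presses:
        return "neutral"  # no stimulation above any emotion's threshold
    mean_duration = sum(p.duration for p in presses) / len(presses)
    mean_intensity = sum(p.intensity for p in presses) / len(presses)

    if after_inactivity and len(presses) <= 2 and mean_duration < 0.5:
        return "surprise"               # a couple of short presses after a quiet spell
    if mean_duration < 0.5 and mean_intensity > 0.7:
        return "fear"                   # short, intense presses
    if mean_duration >= 1.0 and mean_intensity > 0.7:
        return "anger"                  # long, intense, sustained pressing
    if mean_duration >= 1.0:
        return "happiness"              # gentle, long presses
    return "neutral"


if __name__ == "__main__":
    print(feelix_emotion([Press(0.3, 0.2)], after_inactivity=True))             # surprise
    print(feelix_emotion([Press(1.5, 0.3), Press(2.0, 0.2)], after_inactivity=False))  # happiness
    print(feelix_emotion([Press(2.0, 0.9)], after_inactivity=False))            # anger
```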
In order to determine how well humans can recognize emotional expressions in a robot’s face, Feelix was tested on two groups of participants, one made up of children in the nine-to-ten age range and one with adults aged twenty-four to fifty-seven. The tests revealed that the adults correctly recognized Feelix’s emotion from its facial expression in 71 percent of the tests, with the children slightly less successful at 66-percent recognition. These results match quite well the recognition levels demonstrated in earlier tests, using photographs of facial expressions, that had been reported in the literature on emotion recognition, providing evidence that the simulation of expression of the basic emotions is not something from science fiction but can already be designed into robots. Accepting that an acted-out emotion is just that, an act, will make it difficult to believe that the acted emotion is being experienced by the robot. But again, as the “acting” improves, so any disbelief will evaporate.
Robot Recognition of Human Emotions
To interact meaningfully with humans, social robots must be able to perceive the world as humans do, sensing and interpreting the same phenomena that humans observe. This means that in addition to the perception required for physical functions such as knowing where they are and avoiding obstacles, social robots must also possess relationship-oriented perceptual abilities similar to those of humans, perception that is optimized specifically for interacting with humans and on a human level. These perceptual abilities include being able to recognize and track bodies, hands, and other human features; being capable of interpreting human speech; and having the capacity to recognize facial expressions, gestures, and other forms of human activity.
Even more important than its physical appearance and other physical attributes in engendering emotional satisfaction in humans will be a robot’s social skills. Possibly the most essential capability in robots for developing and sustaining a satisfactory relationship with a human is the recognition of human emotional cues and moods. This capability must therefore be programmed into any robot that is intended to be empathetic. People are able to communicate effectively about their emotions by putting on a variety of facial expressions to reflect their emotional reactions and by changing their voice characteristics to express surprise, anger, and love, so an empathetic robot must be able to recognize these emotional cues.
Robots who possess the capability of recognizing and understanding human emotion will be popular with their users. This is partly because, in addition to the natural human desire for happiness, a user might have other emotional needs: the need to feel capable and competent, to maintain control, to learn, to be entertained, to feel comfortable and supported. A robot should therefore be able to recognize and measure the strength of its user’s emotional state in order to understand a user’s needs and recognize when they are being satisfied and when they are not.
Communicating our emotions is a process called “affect,” or “affective communication,” a subject that has been well investigated by psychologists. It is also a subject of great importance in the design of computer systems and robots that detect and even measure the strength of human emotions and in systems that can communicate their own virtual emotions to humans. The Media Lab at MIT has been investigating affective communication since the mid-1990s, in research led by Rosalind Picard, whose book Affective Computing has become a classic in this field. Affective computing involves giving robots the ability to recognize our emotional expressions (and the emotional expressions of other robots), to measure various physiological characteristics in the human body, and from these measurements to know how we are feeling.
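By way of illustration only, here is a minimal sketch of that last step: inferring a coarse emotional state from a few physiological measurements. The signals, thresholds, and labels are assumptions of mine for the sake of the example, not a model drawn from Picard's work.

```python
# Sketch only: inferring a coarse emotional state from a few physiological
# measurements, as described for affective computing. The signals,
# thresholds, and labels are illustrative assumptions.

from dataclasses import dataclass


@dataclass
class Physiology:
    heart_rate: float        # beats per minute
    skin_conductance: float  # microsiemens; higher roughly means more arousal
    smiling: bool            # e.g. from a facial-expression detector


def estimate_state(p: Physiology) -> str:
    aroused = p.heart_rate > 90 or p.skin_conductance > 8.0
    if aroused and p.smiling:
        return "excited / happy"
    if aroused and not p.smiling:
        return "stressed / upset"
    if p.smiling:
        return "calm / content"
    return "neutral"


if __name__ == "__main__":
    print(estimate_state(Physiology(72, 4.0, smiling=True)))    # calm / content
    print(estimate_state(Physiology(110, 9.5, smiling=False)))  # stressed / upset
```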