Emotional Design


by Donald A. Norman


  If robots don’t have to move—such as drink, dishwasher, or pantry robots—they need not have any means of locomotion: neither legs nor wheels. If the robot is a coffeemaker, it should look like a coffeemaker, modified to allow it to connect to the dishwasher and pantry. Robot vacuum cleaners and lawn mowers already exist, and their appearance is perfectly suited to their tasks: small, squat devices, with wheels (see figure 6.3). A robot car should look like a car. It is only the general-purpose home servant robots that are apt to look like animals or humans. The robot dining room table envisioned by Brooks would be especially bizarre, with a large central column to house the dishes and dishwashing equipment (complete with electric power, water, and sewer connections). The top of the table would have places for the robot arms to manipulate the dishes and probably some stalk to hold the cameras that let the arms know where to place and retrieve the dishes and cutlery.

  Should a robot have legs? Not if it only has to maneuver about on smooth surfaces—wheels will do for this; but if it has to navigate irregular terrain or stairs, legs would be useful. In this case, we can expect the first legged robots to have four or six legs: balancing is far simpler for four- and six-legged creatures than for those with only two legs.

  If the robot is to wander about a home and pick up after the occupants, it probably will look something like an animal or a person: a body to hold the batteries and to support the legs, wheels, or tracks for locomotion; hands to pick up objects; and cameras (eyes) on top where they can better survey the environment. In other words, some robots will look like an animal or human, not because this is cute, but because it is the most effective configuration for the task. These robots will probably look something like R2D2 (figure 6.1): a cylindrical or rectangular body on top of some wheels, tracks, or legs; some form of manipulable arm or tray; and sensors all around to detect obstacles, stairs, people, pets, other robots, and, of course, the objects they are supposed to interact with. Except for pure entertainment value, it is difficult to understand why we would ever want a robot that looked like C3PO.

  FIGURE 6.3

  What should a robot look like?

  The Roomba is a vacuum cleaner, its shape appropriate for running around the floor and maneuvering itself under the furniture. This robot doesn’t look like either a person or an animal, nor should it: its shape fits the task.

  (Courtesy of iRobot Inc.)

  In fact, making a robot humanlike might backfire, making it less acceptable. Masahiro Mori, a Japanese roboticist, has argued that we are least accepting of creatures that look very human, but that perform badly, a concept demonstrated in film and theater by the terrifying nature of zombies and monsters (think of Frankenstein’s monster) that take on human form, but with inhuman movement and ghastly appearance. We are not nearly so dismayed—or frightened—by nonhuman shapes and forms. Even perfect replicas of humans might be problematic, for even if the robot could not be distinguished from humans, this very lack of distinction can lead to emotional angst (a theme explored in many a science fiction novel, especially Philip K. Dick’s Do Androids Dream of Electric Sheep? and, in its movie version, Blade Runner). According to this line of argument, C3PO gets away with its humanoid form because it is so clumsy, both in manner and behavior, that it appears more cute or even irritating than threatening.

  Robots that serve human needs—for example, robots as pets—should probably look like living creatures, if only to tap into our visceral system, which is prewired to interpret human and animal body language and facial expressions. Thus, an animal or a childlike shape together with appropriate body actions, facial expressions, and sounds will be most effective if the robot is to interact successfully with people.

  Affect and Emotion in Robots

  What emotions will a robot need to have? The answer depends upon the sort of robot we are thinking about, the tasks it is to perform, the nature of the environment, and what its social life is like. Does it interact with other robots, animals, machines, or people? If so, it will need to express its own emotional state as well as to assess the emotions of the people and animals it interacts with.

  Think about the average, everyday home robot. These don’t yet exist, but some day the house will become populated with robots. Some home robots will be fixed in place, specialized, such as kitchen robots: for example, the pantry, dishwasher, drink dispenser, food dispenser, coffeemaker, or cooking unit robots. And, of course, clothes washer, drier, iron, and clothes-folding robots, perhaps coupled to wardrobe robots. Some will be mobile, but also specialized, such as the robots that vacuum the floors and mow the lawn. But probably we will also have at least one general-purpose robot: the home servant robot that brings us coffee, cleans up, does simple errands, and looks after and supervises the other robots. It is the home servant robot that is of most interest, because it will have to be the most flexible and advanced.

  Servant robots will need to interact with us and with the other robots of the house. For the other robots, they could use wireless communication. They could discuss the jobs they were doing, whether or not they were overloaded or idle. They could also state when they were running low on supplies and when they sensed difficulties, problems, or errors and call upon one another for help. But what about when robots interact with people? How will this happen?

  Servant robots need to be able to communicate with their owners. Some way of issuing commands is needed, some way of clarifying the ambiguities, changing a command in midstream (“Forget the coffee, bring me a glass of water instead”), and dealing with all of the complexities of human language. Today, we can’t do that, so robots that are built now will have to rely upon very simple commands or even some sort of remote controller, where a person pushes the appropriate buttons, generates a well-structured command, or selects actions from a menu. But the time will come when we can interact in speech, with the robots understanding not just the words but the meanings behind them.

  When should a robot volunteer to help its owners? Here, robots will need to be able to assess the emotional state of people. Is someone struggling to do a task? The robot might want to volunteer to help. Are the people in the house arguing? The robot might wish to go to some other room, out of the way. Did something bring pleasure? The robot might wish to remember that, so it could do it again when appropriate. Was an action poorly done, so the person showed disappointment? Perhaps the action could be improved, so that next time the robot would produce better results. For all these reasons, and more, the robot will need to be designed with the ability to read the emotional state of its owners.

  A robot will need to have eyes and ears (cameras and microphones) to read facial expressions, body language, and the emotional components of speech. It will have to be sensitive to tones of voice, the tempo of speech, and its amplitude, so that it can recognize anger, delight, frustration, or joy. It needs to be able to distinguish scolding voices from praising ones. Note that all of these states can be recognized by their sound quality alone, without the need to recognize the words or language. Notice that you can determine other people’s emotional states by tone of voice alone. Try it: Make believe you are in any one of those states—angry, happy, scolding, or praising—and express yourself while keeping your lips firmly sealed. You can do it entirely with the sounds, without speaking a word. These are universal sound patterns.
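  To make this concrete, a robot might rely on a crude rule of thumb over prosodic features rather than the words themselves. The following is a minimal, hypothetical Python sketch; the feature names, thresholds, and categories are invented for the example, not drawn from the book or from any real recognition system.

```python
from dataclasses import dataclass

@dataclass
class Prosody:
    """Hypothetical summary of a short stretch of speech (no words needed)."""
    pitch_hz: float       # average fundamental frequency (tone of voice)
    tempo_sps: float      # syllables per second (tempo of speech)
    loudness_db: float    # relative amplitude

def guess_affect(p: Prosody) -> str:
    """Toy rule of thumb: tone, tempo, and amplitude alone can separate
    scolding from praising, anger from delight. Thresholds are invented."""
    if p.loudness_db > 70 and p.tempo_sps > 5:
        return "anger" if p.pitch_hz < 220 else "delight"
    if p.loudness_db > 70:
        return "scolding"
    if p.pitch_hz > 250 and p.tempo_sps > 4:
        return "praising"
    return "neutral"

print(guess_affect(Prosody(pitch_hz=180, tempo_sps=6, loudness_db=75)))  # -> anger
```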

  Similarly, the robot should display its emotional state, much as a person does (or, perhaps more appropriately, as a pet dog or child does), so that the people with whom it is interacting can tell when a request is understood, when it is something easy to do, difficult to do, or perhaps even when the robot judges it to be inappropriate. Similarly, the robot should show pleasure and displeasure, an energetic appearance or exhaustion, confidence or anxiety when appropriate. If it is stuck, unable to complete a task, it should show its frustration. It will be as valuable for the robot to display its emotional state as it is for people to do so. The expressions of the robot will allow us humans to understand the state of the robot, thereby learning which tasks are appropriate for it, which are not. As a result, we can clarify instructions or even offer help, eventually learning to take better advantage of the robot’s capabilities.

  Many people in the robotics and computer research community believe that the way to display emotions is to have a robot decide whether it is happy or sad, angry or upset, and then display the appropriate face, usually an exaggerated parody of a person in those states. I argue strongly against this approach. It is fake, and, moreover, it looks fake. This is not how people operate. We don’t decide that we are happy, and then put on a happy face, at least not normally. This is what we do when we are trying to fool someone. But think about all those professionals who are forced to smile no matter what the circumstance: they fool no one—they look just like they are forcing a smile, as indeed they are.

  The way humans show facial expression is by automatic innervation of the large number of muscles involved in controlling the face and body. Positive affect leads to relaxation of some muscle groups, automatic pulling up of many facial muscles (hence the smile, raised eyebrows and cheeks, etc.), and a tendency to open up and draw closer to the positive event or thing. Negative affect has the opposite impact, causing withdrawal, a tendency to push away. Some muscles are tensed, and some of the facial muscles pull downward (hence the frown). Most affective states are complex mixtures of positive and negative valence, at differing levels of arousal, with some residue of the immediately previous states. The resulting expressions are rich and informative. And real.

  Fake emotions look fake: we are very good at detecting false attempts to manipulate us. Thus, many of the computer systems we interact with—the ones with cute, smiling helpers and artificially sweet voices and expressions—tend to be more irritating than useful. “How do I turn this off?” is a question often asked of me, and I have become adept at disabling them, both in my own computers and in those of others who seek to be released from the irritation.

  I have argued that machines should indeed both have and display emotions, the better for us to interact with them. This is precisely why the emotions need to appear as natural and ordinary as human emotions. They must be real, a direct reflection of the internal states and processing of a robot. We need to know when a robot is confident or confused, secure or worried, understanding our queries or not, working on our request or ignoring us. If the facial and body expressions reflect the underlying processing, then the emotional displays will seem genuine precisely because they are real. Then we can interpret their state, they can interpret ours, and the communication and interaction will flow ever more smoothly.

  I am not the only person to have reached this conclusion. MIT Professor Rosalind Picard once said, talking about whether robots should have emotions, “I wasn’t sure they had to have emotions until I was writing up a paper on how they would respond intelligently to our emotions without having their own. In the course of writing that paper, I realized it would be a heck of a lot easier if we just gave them emotions.”

  Once robots have emotions, then they need to be able to display them in a way that people can interpret—that is, as body language and facial expressions similar to human ones. Thus, the robot’s face and body should have internal actuators that act and react like human muscles according to the internal states of the robot. People’s faces are richly endowed with muscle groups in chin, lips, nostrils, eyebrows, forehead, cheeks, and so on. This complex of muscles makes for a sophisticated signaling system, and if robots were created in a similar way, the features of the face would naturally smile when things are going well and frown when difficulties arise. For this purpose, robot designers need to study and understand the complex workings of human expression, with its very rich set of muscles and ligaments tightly intertwined with the affective system.
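  One way to read this is that the face should be driven directly by the robot’s internal affective variables, with no separate “choose an expression” step. Below is a minimal Python sketch under that assumption; the valence/arousal representation and the actuator names are illustrative inventions, not a description of any actual robot.

```python
from dataclasses import dataclass

@dataclass
class InternalState:
    valence: float   # -1 (things going badly) .. +1 (things going well)
    arousal: float   #  0 (idle) .. 1 (fully engaged)

def facial_actuators(state: InternalState) -> dict:
    """Drive the face directly from the internal state, the way affect
    innervates human facial muscles: there is no separate 'decide to
    smile' step, so the display cannot help but reflect what is real."""
    v, a = state.valence, state.arousal
    return {
        "mouth_corners": 0.5 + 0.5 * v,    # up when positive, down when negative
        "brow_raise":    0.5 + 0.3 * v,    # raised with positive affect
        "eyelid_open":   0.4 + 0.6 * a,    # wide when engaged, drooping when tired
        "lean_forward":  max(0.0, v) * a,  # approach what is pleasant
    }

print(facial_actuators(InternalState(valence=-0.6, arousal=0.8)))
```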

  Displaying full facial emotions is actually very difficult. Figure 6.4 shows Leonardo, Professor Cynthia Breazeal’s robot at the MIT Media Laboratory, designed to control a vast array of facial features, neck, body, and arm movements, all the better to interact socially and emotionally with us. There is a lot going on inside our bodies, and much the same complexity is required within the faces of robots.

  But what of the underlying emotional states? What should these be? As I’ve discussed, at the least, the robot should be cautious of heights, wary of hot objects, and sensitive to situations that might lead to hurt or injury. Fear, anxiety, pain, and unhappiness might all be appropriate states for a robot. Similarly, it should have positive states, including pleasure, satisfaction, gratitude, happiness, and pride, which would enable it to learn from its actions, to repeat the positive ones and improve, where possible.

  FIGURE 6.4 The complexity of robot facial musculature.

  MIT Professor Cynthia Breazeal with her robot Leonardo.

  (Photograph by author.)

  Surprise is probably essential. When what happens is not what is expected, the surprised robot should interpret this as a warning. If a room unexpectedly gets dark, or maybe the robot bumps into something it didn’t expect, a prudent response is to stop all movement and figure out why. Surprise means that a situation is not as anticipated, and that planned or current behavior is probably no longer appropriate—hence, the need to stop and reassess.
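  Computationally, surprise can be treated as a mismatch between what the robot predicted and what its sensors report. The following is a hypothetical Python sketch of that idea; the sensor names and the tolerance value are made up for illustration.

```python
def surprised(expected: dict, observed: dict, tolerance: float = 0.2) -> bool:
    """Surprise = the world is not as anticipated. Compare each prediction
    with the matching sensor reading; any large mismatch is a warning."""
    return any(
        abs(observed.get(name, value) - value) > tolerance
        for name, value in expected.items()
    )

expected = {"room_brightness": 0.8, "distance_ahead_m": 2.0}
observed = {"room_brightness": 0.1, "distance_ahead_m": 2.1}  # the lights went out

if surprised(expected, observed):
    print("stop all movement and reassess the current plan")
```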

  Some states, such as fatigue, pain, or hunger, are simpler, for they do not require expectations or predictions, but rather simple monitoring of internal sensors. (Fatigue and hunger are technically not affective states, but they can be treated as if they were.) In the human, sensors of physical states signal fatigue, hunger, or pain. Actually, in people, pain is a surprisingly complex system, still not well understood. There are millions of pain receptors, plus a wide variety of brain centers involved in interpreting the signals, sometimes enhancing sensitivity, sometimes suppressing it. Pain serves as a valuable warning system, preventing us from damaging ourselves and, if we are injured, acting as a reminder not to stress the damaged parts further. Eventually it might be useful for robots to feel pain when motors or joints were strained. This would lead robots to limit their activities automatically, and thus protect themselves against further damage.
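  By contrast, robot “pain” and “fatigue” could be little more than thresholds on internal sensor readings. Here is a toy Python sketch along those lines; the torque and battery limits are arbitrary placeholders, not engineering values.

```python
# Invented placeholder limits, not engineering values.
PAIN_TORQUE_LIMIT = 0.9      # fraction of a joint's rated torque
FATIGUE_CHARGE_LIMIT = 0.15  # battery fraction below which the robot "tires"

def self_protect(joint_torques: dict, battery_fraction: float) -> list:
    """Pain and fatigue as simple threshold checks on internal sensors."""
    warnings = []
    for joint, torque in joint_torques.items():
        if torque > PAIN_TORQUE_LIMIT:
            # "pain": ease off before the strained joint is damaged further
            warnings.append(f"reduce load on {joint}")
    if battery_fraction < FATIGUE_CHARGE_LIMIT:
        # "fatigue": postpone nonessential work and recharge
        warnings.append("postpone nonessential tasks and recharge")
    return warnings

print(self_protect({"shoulder": 0.95, "elbow": 0.40}, battery_fraction=0.10))
```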

  Frustration would be a useful affect, preventing a servant robot from getting stuck on a single task to the neglect of its other duties. Here is how it would work. I ask the servant robot to bring me a cup of coffee. Off it goes to the kitchen, only to have the coffee robot explain that it can’t provide any because it lacks clean cups. Then the coffeemaker might ask the pantry robot for more cups, but suppose that it, too, didn’t have any. The pantry would have to pass on the request to the dishwasher robot. And now suppose that the dishwasher didn’t have any dirty cups it could wash. The dishwasher would ask the servant robot to search for dirty cups so that it could wash them and give them to the pantry, which would feed them to the coffeemaker, which in turn would give the coffee to the servant robot. Alas, the servant would have to decline the dishwasher’s request to wander about the house: it is still busy at its main task—waiting for coffee.

  This situation is called “deadlock.” In this case, nothing can be done because each machine is waiting for the next, and the final machine is waiting for the first. This particular problem could be solved by giving the robots more and more intelligence, learning how to solve each new problem, but problems always arise faster than designers can anticipate them. These deadlock situations are difficult to eliminate because each one arises from a different set of circumstances. Frustration provides a general solution.

  Frustration is a useful affect for both humans and machines, for when things reach that point, it is time to quit and do something else. The servant robot should get frustrated waiting for the coffee, so it should temporarily give up. As soon as the servant robot gives up the quest for coffee, it is free to attend to the dishwasher’s request, go off and find the dirty coffee cups. This would automatically solve the deadlock: the servant robot would find some dirty cups, deliver them to the dishwasher, which would eventually let the coffeemaker make the coffee and let me get my coffee, although with some delay.
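  In scheduling terms, frustration acts like a timeout that breaks the circular wait. Here is a deliberately simplified Python sketch of that behavior; the class, its methods, and the frustration limit are all invented for illustration.

```python
import time

class ServantRobot:
    """Illustrative only: 'frustration' is a counter on a blocked task.
    When it boils over, the robot shelves the task and turns to other
    pending requests, which is what breaks the coffee-cup deadlock."""

    FRUSTRATION_LIMIT = 3  # tolerated waiting cycles before giving up

    def __init__(self):
        self.pending_requests = []  # e.g., the dishwasher's plea for dirty cups
        self.frustration = 0

    def wait_for(self, supplier_ready) -> bool:
        while not supplier_ready():
            self.frustration += 1
            if self.frustration >= self.FRUSTRATION_LIMIT:
                return False        # give up (for now), freeing the robot
            time.sleep(0.1)         # placeholder for "keep waiting"
        return True

    def run(self):
        coffee_ready = lambda: False  # the coffeemaker is stuck waiting for cups
        self.pending_requests.append("collect dirty cups for the dishwasher")
        if not self.wait_for(coffee_ready):
            # Frustration frees the robot to handle the dishwasher's request,
            # which eventually unblocks the whole chain of machines.
            for request in self.pending_requests:
                print("now doing:", request)

ServantRobot().run()
```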

  Could the servant robot learn from this experience? It should add to its list of activities the periodic collection of dirty dishes, so that the dishwasher/pantry would never run out again. This is where some pride would come in handy. Without pride, the robot doesn’t care: it has no incentive to learn to do things better. Ideally, the robot would take pride in avoiding difficulties, in never getting stuck at the same problem more than once. This attitude requires that robots have positive emotions, emotions that make them feel good about themselves, that cause them to get better and better at their jobs, to improve, perhaps even to volunteer to do new tasks, to learn new ways of doing things. Pride in doing a good job, in pleasing their owners.

  Machines That Sense Emotion

  The extent to which emotional upsets can interfere with mental life is no news to teachers. Students who are anxious, angry, or depressed don’t learn; people who are caught in these states do not take in information efficiently or deal with it well.

  —Daniel Goleman, Emotional Intelligence

  Suppose machines could sense the emotions of people. What if they were as sensitive to the moods of their users as a good therapist might be? What if an electronic, computer-controlled educational system could sense when the learner was doing well, was frustrated, or was proceeding appropriately? Or what if the home appliances and robots of the future could change their operations according to the moods of their owners? What then?

 
