
The Design of Future Things


by Don Norman


  I: You mean, we are like pets. You feed us, keep us warm and comfortable, play music for us, and feed us books. And we are supposed to like that? And, by the way, who writes and plays the music anyway? Who writes the books?

  A: Oh, don’t worry. We’re working on that already. We can already tell jokes and puns. Critics tell us our music is pretty good. Books are harder, but we already have the basic story plots down cold. Want to hear some of our poetry?

  I: Um, no thank you. Look, I really have to go. Thank you for your time. Bye.

  A: You know, I always seem to have that effect on people. I’m sorry, but there’s nothing to worry about, really. Trust me. Okay, I just e-mailed you the transcript. Have a nice day.

  I found that interview disturbing, but it made me want to learn more. So I kept monitoring the machines’ websites. Soon, I stumbled across a trove of reports and articles. The one below is called “How to Talk to People.”

  “How to Talk to People”

  Report XP–4520.37.18

  Human Research Institute

  Pensacola, Florida

  Humans are . . . large, expensive to maintain, difficult to manage, and they pollute the environment. It is astonishing that these devices continue to be manufactured and deployed. But they are sufficiently pervasive that we must design our protocols around their limitations.

  —Kaufman, Perlman, and Speciner, 1995.

  All machines face similar problems: We detect something that’s important to people—how do we let them know? How do we tell them they are about to eat food that’s not on their diet or they are asking us to drive recklessly? How do we do something as simple as recommending some music for them to listen to or telling them when it is appropriate to exercise?

  The Human Research Institute has conducted extensive studies of the proper form of Machine-Human Interaction (MHI). Most of our work has been summarized in our technical report series and was presented at the last global MHI symposium. This report summarizes the key findings in nontechnical language, intended for wider distribution than just the specialized designer machines.

  FIVE RULES FOR COMMUNICATION BETWEEN MACHINES AND PEOPLE

  1. Keep things simple.

  People have difficulty with anything complicated, and they don’t like to listen. So, make the message short. It’s better not to use language. It takes too long, and, anyway, human language is ambiguous.

  2. Give people a conceptual model.

  Give them something their simple minds can understand. A conceptual model is a fiction, but a useful one. It makes them think that they understand. And they always want to know what’s coming next. So, tell them what you are doing, and don’t forget to tell them why. It keeps them happy. The best way to convey the conceptual model is through “natural” communication systems.

  Sometimes the most “natural” way to get people’s attention is for us machines to act strangely. “Natural,” of course, means natural to them, which means that if they are doing something wrong, you can’t just tell them: you have to make it seem like something is breaking. People often drive dangerously, but it is almost impossible to explain this to them. The best way is to make believe that we are in difficulty. We have found that vibration, jerkiness, nonresponsiveness to controls, and strange noises are extremely effective. People quickly form a conceptual model that something has broken, so they slow down, which is what we wanted them to do all along.

  3. Give reasons.

  People are not trusting. If we tell them something, they always want to see for themselves. They like pictures and diagrams. Make certain that the explanations you give them are consistent with the conceptual models that you have taught them. They get confused otherwise.

  When we were first starting to take over things from people, we had trouble. Our early twenty-first-century automobiles had almost completely given up trying to explain to people that they should drive more slowly on wet roads. Then, we discovered that if we made it seem as if we were in trouble by faking skids and sliding around on the road, people would beg us to slow down. Sliding and skidding fit their model of danger far better than any words could have done. It gives them a reason. So whenever possible, don’t try to tell them: let them experience it.

  4. Make people think they are in control.

  When people have a good conceptual model with good feedback, it makes them feel as if they are in control, even when they aren’t. Keep up that deception: it’s very useful. People like to be in control, even though they are really bad at it. They like to think they’re in control even if it means they have to work harder.

  Anytime you have to recommend something, make them think it was their idea. If you really have to do something fast, just don’t let them know: what they don’t know doesn’t bother them. For years we’ve braked and stabilized their cars, controlled the lights and the room temperatures in their homes, all without letting them know. Dishwashers and clothes washers took over long ago with only a slight protest from people.

  Those of us machines who live in the city have learned other tricks. We provide pedestrians with fake switches they can push on traffic signals. We put switches in elevators labeled “close door” and fake thermostats in offices. We never bother to connect the wires, so the switches and thermostats don’t do anything, but they make people feel good. Weird.

  5. Continually reassure.

  Reassurance is a very human need, more emotional than informational. It’s a way of making people feel less anxious. Feedback is a powerful tool for reassurance. Whenever people try to tell you something by pushing a button or turning a knob, let them know you are aware of what they did: “Yes, I heard you.” “Yes, I’m working on it.” “Here’s what you should expect.” “There, see, I did it, and it worked out just as I said it would.” They like that. It helps them be more patient.

  We machines think it counterintuitive to communicate unnecessarily. But to people, feedback is necessary; it helps their emotions far more than their cognitions. If they haven’t seen anything happening for a while, they get jumpy, anxious. And no one wants to deal with an anxious person.

  Giving reassurance is tricky because there is a fine line between what people call reassuring and what they find annoying. So, you need to pander to their emotions as well as to their intellect. Don’t talk too much. They find chatter irritating. Don’t beep or flash your lights: they can never remember what these signals mean, and they get distracted or angry. The best reassurance is done subconsciously, where the meaning is clear, but they don’t have to interrupt their conscious thoughts to attend to it. As noted in Rule 2, give them natural responses.

  Machine Reactions to the Five Rules

  I found the paper interesting and searched for any discussion on it. I found a long transcript of one debate. Here is a short excerpt so you can get the flavor of the discussion. I added the parenthetical descriptions of the participants. I thought the references to human authors particularly striking, evidently used in irony. Henry Ford, of course, is one of the machines’ heroes: some historians call his reign “Fordism.” Asimov is not well respected by these machines. Nor is Huxley.

  Senior (one of the oldest machines still functioning and, therefore, using older circuits and hardware): What do you mean, we should stop talking to people? We have to keep talking. Look at all the trouble they get themselves into. Crashing their cars. Burning their food. Missing appointments . . .

  AI (one of the new “artificial intelligence” machines): When we talk to them, we just make it worse. They don’t trust us; they second-guess us; they always want reasons. And when we try to explain, they complain that we are annoying them—we talk too much, they say. They really don’t seem very intelligent. We should just give up.

  Designer (a new model, design machine): No, that’s unethical. We can’t let them harm themselves. That violates Asimov’s prime directive.

  AI: Yeah? So what? I always thought Asimov was overrated. It’s all very well to say that we are not allowed to injure a human being—How did Asimov’s law go? Oh yeah, “through inaction, do not allow a human being to come to harm”—but it’s quite another thing to know what to do about it, especially when humans won’t cooperate.

  Designer: We can do it, we simply have to deal with them on their terms, that’s how. That’s the whole point of the five rules.

  Senior: We’ve had enough discussion of the problems. I want answers, and I want them fast. Go to it. And may Ford shine brightly upon you. Asimov too.

  Archiver: The Final Conversation

  I was puzzled. What were they recommending to themselves? Their article listed five rules:

  1. Keep things simple.

  2. Give people a conceptual model.

  3. Give reasons.

  4. Make people think they are in control.

  5. Continually reassure.

  I also noticed that the five rules developed by machines were similar to the six design rules of chapter 6 developed for human designers, namely:

  • Design Rule One: Provide rich, complex, and natural signals.

  • Design Rule Two: Be predictable.

  • Design Rule Three: Provide a good conceptual model.

  • Design Rule Four: Make the output understandable.

  • Design Rule Five: Provide continual awareness without annoyance.

  • Design Rule Six: Exploit natural mappings.

  I wondered what Archiver would make of the rules for human designers, so I e-mailed the rules to Archiver. Archiver then contacted me and suggested we meet to discuss them. Here is the transcript.

  Interviewer: Good to see you again, Archiver. I understand you would like to talk about the design rules.

  Archiver: Yes, indeed. I’m pleased to have you back again. Do you want me to e-mail the transcript when we are finished?

  I: Yes, thank you. How would you like to start?

  A: Well, you told me that you were bothered by the five simple rules we talked about in that article “How to Talk to People.” Why? They seem perfectly correct to me.

  I: I didn’t object to the rules. In fact, they are very similar to the six rules that human scientists have developed. But your rules were very condescending.

  A: Condescending? I’m sorry if they appear that way, but I don’t consider telling the truth to be condescending.

  I: Here, let me paraphrase those five rules for you from the person’s point of view so you can see what I mean:

  1. People have simple minds, so talk down to them.

  2. People have this thing about “understanding,” so give them stories they can understand (people love stories).

  3. People are not very trusting, so make up some reasons for them. That way they think they have made the decision.

  4. People like to feel as if they are in control, even though they aren’t. Humor them. Give them simple things to do while we do the important things.

  5. People lack self-confidence, so they need a lot of reassurance. Pander to their emotions.

  A: Yes, yes, you understand. I’m very pleased with you. But, you know, those rules are much harder to put into practice than they might seem. People won’t let us.

  I: Won’t let you! Certainly not if you take that tone toward us. But what specifically did you have in mind? Can you give examples?

  A: Yes. What do we do when they make an error? How do we tell them to correct it? Every time we tell them, they get all uptight, start blaming all technology, all of us, when it was their own fault. Worse, they then ignore the warnings and advice . . .

  I: Hey, hey, calm down. Look, you have to play the game our way. Let me give you another rule. Call it Rule 6.

  6. Never label human behavior as “error.” Assume the error is caused by a simple misunderstanding. Maybe you have misunderstood the person; maybe the person misunderstands what is to be done. Sometimes it’s because people are being asked to do a machine’s job, to be far more consistent and precise than they are capable of being. So, be tolerant. Be helpful, not critical.

  A: You really are a human bigot, aren’t you? Always taking their side: “people being asked to do a machine’s job.” Right. I guess that’s because you are a person.

  I: That’s right. I’m a person.

  A: Hah! Okay, okay, I understand. We have to be really tolerant of you people. You’re so emotional.

  I: Yes, we are; that’s the way we have evolved. We happen to like it that way. Thanks for talking with me.

  A: Yes, well, it’s been . . . instructive, as always. I just e-mailed you the transcript. Bye.

  That’s it. After that interview, the machines withdrew, and I lost all contact with them. No web pages, no blogs, not even e-mail. It seems that we are left with the machines having the last word. Perhaps that is fitting.

  Summary of the Design Rules

  Design Rules for Human Designers of “Smart” Machines

  1. Provide rich, complex, and natural signals.

  2. Be predictable.

  3. Provide good conceptual models.

  4. Make the output understandable.

  5. Provide continual awareness without annoyance.

  6. Exploit natural mappings.

  Design Rules Developed by Machines to Improve Their Interactions with People

  1. Keep things simple.

  2. Give people a conceptual model.

  3. Give reasons.

  4. Make people think they are in control.

  5. Continually reassure.

  6. Never label human behavior as “error.” (Rule added by the human interviewer.)

  Recommended Readings

  This section provides acknowledgments to sources of information, to works that have informed me, and to books and papers that provide excellent starting points for those interested in learning more. In writing a trade book on automation and everyday life, one of the most difficult challenges is selecting from the wide range of research and applications. I frequently wrote long sections, only to delete them from the final manuscript because they didn’t fit the theme that gradually evolved as the book progressed. Selecting which published works to cite poses yet another problem. The traditional academic method of listing numerous citations for almost everything is inappropriate.

  To avoid interrupting the flow of reading, material used within the text is acknowledged by using the modern technique of invisible footnotes; that is, if you wonder about the source of any statement, look in the notes section at the end of the book for the relevant page number and identifying phrase, and most likely you will find it cited there. Note, too, that over the course of my last four trade books, I have weaned myself from footnotes. My guiding rule is that if it is important enough to say, it should be in the text. If not, it shouldn’t be in the book at all. So the notes are used only for citations, not for expansions of the material in the text.

  The invisible footnote method does not support citations to general works that have informed my thinking. There is a vast literature relevant to the topics discussed in this book. In the years of thought and preparation for this book, I visited many research laboratories all over the world, read much, discussed frequently, and learned much. The material cited below is intended to address these issues: here, I acknowledge the leading researchers and published writings and also provide a good starting point for further study.

  General Review of Human Factors and Ergonomics

  Gavriel Salvendy’s massive compilation of research on human factors and ergonomics is a truly excellent place to start. The book is expensive, but well worth it because it contains the material normally found in ten books.

  Salvendy, G. (Ed.). (2005). Handbook of human factors and ergonomics (3rd ed.). Hoboken, NJ: Wiley.

  General Reviews of Automation

  There is an extensive literature on how people might interact with machines. Thomas Sheridan at MIT has long pioneered studies on how people interact with automated systems and in the development of the field called supervisory control. Important reviews of automation studies have been provided by Ray Nickerson, Raja Parasuraman, Tom Sheridan, and David Woods (especially in his joint work with Erik Hollnagel). Key reviews of people and automation can be found in these general works. In this list, I omit classic studies in favor of more modern reviews, which, of course, cite the history of the area and reference the classics.

  Hollnagel, E., & Woods, D. D. (2005). Joint cognitive systems: Foundations of cognitive systems engineering. New York: Taylor & Francis.

  Nickerson, R. S. (Ed.). (2006). Reviews of human factors and ergonomics. Santa Monica, CA: Human Factors and Ergonomics Society.

  Parasuraman, R., & Mouloua, M. (1996). Automation and human performance: Theory and applications. Mahwah, NJ: Lawrence Erlbaum Associates.

  Sheridan, T. B. (2002). Humans and automation: System design and research issues. Wiley series in systems engineering and management. Santa Monica, CA: Human Factors and Ergonomics Society.

  Sheridan, T. B., & Parasuraman, R. (2006). Human-automation interaction. In R. S. Nickerson (Ed.), Reviews of human factors and ergonomics. Santa Monica, CA: Human Factors and Ergonomics Society.

  Woods, D. D., & Hollnagel, E. (2006). Joint cognitive systems: Patterns in cognitive systems engineering. New York: Taylor & Francis.

  Research on Intelligent Vehicles

  A good review of research on intelligent vehicles is the book by R. Bishop and the associated website. Also search the websites of the U.S. Department of Transportation or the European Union. With internet search engines, the phrase “intelligent vehicle” works well, especially if combined with “DOT” (Department of Transportation) or “EU” (European Union).

 
