The Design of Future Things


by Don Norman


  The system first transformed the handwriting strokes of each word into a location within an abstract, mathematical space of many dimensions, then matched what the user had written against its database of English words, picking the word that was closest in distance within this abstract space. If this description of how the Newton recognized words confuses you, then you understand properly: even sophisticated users of the Newton could not explain the kinds of errors it made. When the word recognition system worked, it was very, very good, and when it failed, it was horrid. The problem was the great difference between the sophisticated, multidimensional mathematical space the system used and a person’s perceptual judgments. There seemed to be no relationship between what was written and what the system produced. In fact, there was a relationship, but it lay in the realm of sophisticated mathematics, invisible to the person trying to make sense of its operation.
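  The flavor of that nearest-neighbor matching can be sketched in a few lines of Python. Everything here is invented for illustration (the feature vectors, the tiny vocabulary, the distance measure); the Newton's actual mathematics was far more elaborate. Still, the sketch shows why its errors defied intuition: the word closest in feature space need not look anything like the ink.

```python
import math

# Illustrative sketch only: whole-word recognition as nearest-neighbor
# search in an abstract feature space. The feature vectors below are
# invented; a real recognizer derives them mathematically from the
# ink strokes, in many more dimensions.

VOCABULARY = {
    "hand":     (4.0, 1.20, 0.8),
    "nand":     (4.0, 1.10, 0.8),
    "catching": (8.0, 1.50, 2.1),
    "egg":      (3.0, 0.90, 0.7),
}

def recognize(ink_features):
    """Return the vocabulary word whose feature vector lies closest
    (in Euclidean distance) to the features extracted from the ink."""
    def dist(word):
        return math.sqrt(sum((a - b) ** 2
                             for a, b in zip(ink_features, VOCABULARY[word])))
    return min(VOCABULARY, key=dist)

# Ink a person would read as "hand" can land nearer "nand" in feature
# space: the distances decide, not the letter shapes a reader sees.
print(recognize((4.0, 1.12, 0.8)))  # -> nand
```

  Because the comparison happens in this hidden space, nothing in the output hints at which stroke caused the miss; that is exactly the unintelligibility described above.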

  The Newton was released with great fanfare. People lined up for hours to be among the first to get one. Its ability to recognize handwriting was touted as a great innovation. In fact, it failed miserably, providing rich fodder for cartoonist Garry Trudeau, an early adopter. He used his comic strip, Doonesbury, to poke fun at the Newton. Figure 6.1 shows the best-known example, a strip that detractors and fans of the Newton labeled “Egg Freckles.” I don’t know if writing the words “Catching on?” would really turn into “Egg freckles?” but given the bizarre output that the Newton often produced, it’s entirely possible.

  FIGURE 6.1

  Garry Trudeau’s Doonesbury strip “Egg Freckles,” widely credited with dooming the success of Apple Computer’s Newton. The ridicule was deserved, but the public forum of this popular comic strip was devastating. The real cause? Completely unintelligible feedback. Doonesbury © 1993 G. B. Trudeau.

  (Reprinted with permission of Universal Press Syndicate. All rights reserved.)

  The point of this discussion is not to ridicule the Newton but rather to learn from its shortcomings. The lesson is about human-machine communication: always make sure a system’s response is understandable and interpretable. If it isn’t what the person expected, there should be an obvious action the person can take to get to the desired response.

  Several years after “Egg Freckles,” Larry Yaeger, working in Apple’s Advanced Technology Group, developed a far superior method of recognizing handwriting. Much more importantly, however, the new system, called “Rosetta,” overcame the deadly flaw of the earlier ParaGraph system: its errors could now be understood. Write “hand” and the system might recognize “nand”: people found this acceptable because the system got most of the letters right, and the one it missed, “h,” does look like an “n.” If you write “Catching on?” and get “Egg freckles?” you blame the Newton, deriding it as “a stupid machine.” But if you write “hand” and get “nand,” you blame yourself: “Oh, I see,” you might say to yourself, “I didn’t make the first line on the ‘h’ high enough, so it thought it was an ‘n’.”

  Notice how the conceptual model completely reverses the notion of where blame is to be placed. Conventional wisdom among human-centered designers is that if a device fails to deliver the expected results, it is the device or its design that should be blamed. When the machine fails to recognize handwriting, especially when the reason for the failure is obscure, people blame the machine and become frustrated and angry. With Rosetta, however, the situation is completely reversed: people are quite happy to place the blame on themselves if it appears that they did something wrong, especially when what they are required to do appears reasonable. Rather than becoming frustrated, they simply resolve to be more careful next time.

  This is what really killed the Newton: people blamed it for its failure to recognize their handwriting. By the time Apple released a sensible, successful handwriting recognizer, it was too late. It was not possible to overcome the earlier negative reaction and scorn. Had the Newton featured a less accurate, but more understandable, recognition system from the beginning, it might have succeeded. The early Newton is a good example of how any design unable to give meaningful feedback is doomed to failure in the marketplace.

  When Palm released its personal digital assistant (initially called the “Palm Pilot”) in 1996, it used an artificial language, “Graffiti,” that required the user to learn a new way of writing. Graffiti used artificial letter shapes, similar to the normal printed alphabet but structured to make the machine’s task as easy as possible. The letters were similar enough to everyday printing that they could be learned without much effort. Graffiti didn’t try to recognize whole words; it operated letter by letter, so when it made an error, it was only on a single letter, not the entire word. In addition, it was easy to find a reason for the recognition error. These understandable, sensible errors made it easy for people to see what they might have done wrong and provided hints as to how to avoid the mistake the next time. The errors were actually reassuring, helping everyone develop a good mental model of how the recognition worked, gain confidence, and improve their handwriting. Newton failed; Palm succeeded.

  Feedback is essential to the successful understanding of any system, essential for our ability to work in harmony with machines. Today, we rely too much on alarms and alerts that are too sudden, intrusive, and not very informative. Signals that simply beep, vibrate, or flash usually don’t indicate what is wrong, only that something isn’t right. By the time we have figured out the problem, the opportunity to take corrective action may have passed. We need a more continuous, more natural way of staying informed of the events around us. Recall poor Prof. M: without feedback, he couldn’t even figure out if his own system was working.

  What are some ways to provide better methods of feedback? The foundation for the answer was laid in chapter 3, “Natural Interaction”: implicit communication, natural sounds and events, calm, sensible signals, and the exploitation of natural mappings between display devices and our interpretations of the world.

  Natural, Deliberate Signals

  Watch someone helping a driver maneuver into a tight space. The helper may stand beside the car, visible to the driver, holding two hands apart to indicate the distance remaining between the car and the obstacle. As the car moves, the hands move closer together. The nice thing about this method of guidance is that it is natural: it does not have to be agreed upon beforehand; no instruction or explanation is needed.

  Implicit signals can be intentional, either deliberately created by a person, as in the example above, or deliberately created in a machine by the designer. There are natural ways to communicate with people that convey precise information without words and with little or no training. Why not use these methods as a way of communicating between people and machines?

  Many modern automobiles have parking assistance devices that indicate how close the auto is to the car ahead or behind. An indicator emits a series of beeps: beep (pause), beep (pause), beep. As the car gets closer to the obstacle, the pauses get shorter, so the rate of beeping increases. When the beeps become continuous, it is time to stop: the car is about to hit the obstacle. As with the helper’s hand signals, this natural signal can be understood by a driver without instruction.
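  The beeping pattern is just a monotonic mapping from distance to pause length. Here is a minimal sketch; all the thresholds are invented for illustration, since real systems tune them per vehicle.

```python
# Hypothetical parking-assist beeper: the pause between beeps shrinks
# linearly as the obstacle nears, becoming a continuous tone at the
# stopping point. All thresholds here are invented for illustration.

FAR_CM = 150         # beyond this, no warning at all
STOP_CM = 30         # at or inside this, continuous tone: stop now
MIN_PAUSE_MS = 100   # shortest distinct pause, just before the tone
MAX_PAUSE_MS = 1000  # longest pause, at the edge of detection

def beep_pause_ms(distance_cm):
    """Pause between beeps; None means silent, 0 means continuous tone."""
    if distance_cm >= FAR_CM:
        return None
    if distance_cm <= STOP_CM:
        return 0
    fraction = (distance_cm - STOP_CM) / (FAR_CM - STOP_CM)
    return round(MIN_PAUSE_MS + fraction * (MAX_PAUSE_MS - MIN_PAUSE_MS))

print(beep_pause_ms(150))  # -> None (too far to matter)
print(beep_pause_ms(90))   # -> 550  (halfway: moderate beep rate)
print(beep_pause_ms(20))   # -> 0    (continuous: about to hit)
```

  The mapping needs no instruction precisely because faster repetition naturally reads as greater urgency.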

  Natural signals, such as the clicks of the hard drive after a command or the familiar sound of water boiling in the kitchen, keep people informed about what is happening in the environment. These signals offer just enough information to provide feedback, but not enough to add to cognitive workload. Mark Weiser and John Seely Brown, two research scientists working at what was then the Xerox Corporation’s Palo Alto Research Center, called this “calm technology,” which “engages both the center and the periphery of our attention, and in fact moves back and forth between the two.” The center is what we are attending to, the focal point of conscious attention. The periphery includes all that happens outside of central awareness, while still being noticeable and effective. In the words of Weiser and Brown:

  We use “periphery” to name what we are attuned to without attending to explicitly. Ordinarily when driving our attention is centered on the road, the radio, our passenger, but not the noise of the engine. But an unusual noise is noticed immediately, showing that we were attuned to the noise in the periphery, and could come quickly to attend to it. . . . A calm technology will move easily from the periphery of our attention, to the center, and back. This is fundamentally encalming, for two reasons.

  First, by placing things in the periphery, we are able to attune to many more things than we could if everything had to be at the center. Things in the periphery are attuned to by the large portion of our brains devoted to peripheral (sensory) processing. Thus, the periphery is informing without overburdening.

  Second, by recentering something formerly in the periphery, we take control of it.

  Note the phrase “informing without overburdening.” That is the secret of calm, natural communication.

  Natural Mappings

  In The Design of Everyday Things, I explain how what I call “natural mappings” can be used to lay out the controls for appliances. For example, stoves traditionally have four burners, arranged in a two-dimensional rectangle. Yet, the controls invariably are laid out in a one-dimensional line. As a result, people frequently turn on or off the wrong burner, even if the controls are labeled, in part because there is no natural relationship between controls and burners, in part because each stove model seems to use a different rule to map controls to burners. Human factors professionals have long demonstrated that if the controls were laid out in a rectangular array, no labels would be needed: each control would match the corresponding spatial position of the appropriate burner. Some stove manufacturers do this well. Others do it badly. And some do it well for one model, but badly for another.
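  A toy sketch of the principle: if the controls occupy the same two-by-two grid as the burners, position alone identifies each one, and no labels or per-model lookup tables are needed. The grid coordinates and burner names here are mine, not any manufacturer's.

```python
# Natural mapping, in miniature: burners arranged in a 2x2 grid,
# controls arranged in the SAME 2x2 grid, so the mapping is the
# identity function. Coordinates are (row, column); names invented.

BURNERS = {
    (0, 0): "back-left",  (0, 1): "back-right",
    (1, 0): "front-left", (1, 1): "front-right",
}

def burner_for_control(control_pos):
    """With a natural mapping, the control's position IS the burner's
    position: no label, legend, or memorized rule required."""
    return BURNERS[control_pos]

# Contrast: a one-dimensional row of controls needs an arbitrary,
# per-model table such as {0: "back-left", 1: "front-left", ...}
# that users must read labels (or guess) to apply.
print(burner_for_control((1, 0)))  # -> front-left
```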

  The scientific principles for proper mapping are clear. In the case of the spatial arrangement of controls, lights, and burners, I define natural mapping to mean that controls should be laid out in a manner spatially analogous to the layout of the devices they control and, as much as possible, on the same plane. But why restrict natural mappings to spatial relationships? The principle can be extended to numerous other domains.

  Sound has been discussed at length because it is such an important source of feedback. Sound clearly plays a valuable role in keeping us naturally informed about the state of things. Vibration plays an equally important role. In the early days of aviation, when an airplane was about to stall, the lack of lift would cause the control stick to vibrate. Today, with larger airplanes and automatic control systems, pilots no longer can feel these natural warning signals, but they have been reintroduced artificially. When the airplane computes that it is approaching a stall, the system warns by shaking the control stick. “Stick Shaker,” this function is called, and it provides a valuable warning of stall conditions.

  When power steering was first introduced into automobiles, augmenting the driver’s efforts with hydraulic or electric power, drivers had difficulty controlling the vehicle: without feedback from the road, driving skills are badly diminished. So, modern vehicles carefully control how much effort is required and reintroduce some of the road vibrations. “Road feel” provides essential feedback.

  Rumble strips on highways warn drivers when they are drifting off the road. When they were first introduced, the only tool available to the engineers was the road itself, so they cut slots into the roadway, causing a “rumble” when the car’s wheels went over them. The same principle is used as a speed warning: a series of slots is placed perpendicular to the road where the driver should slow down or stop. The strips get closer and closer together, so if the driver fails to slow sufficiently, the resulting rumble increases in frequency. Even though these rumble strip cues are artificially induced, they have proven effective.
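  The arithmetic behind the effect is simple: at a constant speed, the rumble's frequency is the speed divided by the strip spacing, so strips placed closer together produce a faster rumble, as if the car were speeding up. The numbers below are illustrative, not highway standards.

```python
# Why converging rumble strips feel like acceleration: frequency is
# speed over spacing, so closer strips raise the pitch even when the
# car's speed never changes. Spacings here are illustrative only.

def rumble_hz(speed_m_per_s, spacing_m):
    """Rate at which the wheels cross successive strips."""
    return speed_m_per_s / spacing_m

speed = 20.0  # roughly 72 km/h, held constant
for spacing_m in (2.0, 1.0, 0.5):
    print(f"{spacing_m:.1f} m spacing -> {rumble_hz(speed, spacing_m):.0f} Hz")
# -> 2.0 m spacing -> 10 Hz
# -> 1.0 m spacing -> 20 Hz
# -> 0.5 m spacing -> 40 Hz
```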

  Some researchers have experimented successfully with vibrators in the automobile seat, vibrating the right part of the seat when the car is drifting to the right and the left part when it drifts left, mimicking the effect of rumble strips. Similarly, the front of the car or of the seat can vibrate when the car gets too close to the car ahead or exceeds safe speed limits. These signals are effective in informing the driver of the location of the vehicle relative to the road and other cars. They illustrate two different principles: natural mapping and continual awareness (without annoyance). The seat vibrations provide a natural mapping between the position at which the vibration is felt and the position of the nearby vehicles. Because the seat continually vibrates (gently) in response to surrounding vehicles, the information is continually available. Yet, the vibrations are subtle and nonintrusive, just like the sounds surrounding us—always letting us know what is happening, but never demanding full attention, therefore never intruding upon consciousness. This is continual information without annoyance.

  Natural signals provide effective communication. The lessons of these chapters can be summarized in six succinct rules, all of which focus on the nature of the communication between people and machines. When people interact with one another, they follow a wide range of conventions and protocols, often subconsciously. The rules of interaction have evolved over tens of thousands of years as a fundamental natural component of human social interaction and culture. We don’t have the luxury of waiting thousands of years for a similar richness of interaction between us and our machines, but fortunately, we do not need to wait. We already know many of the rules. Here they are, spelled out explicitly so that designers and engineers can implement them in the innards of machines:

  • Design Rule One: Provide rich, complex, and natural signals.

  • Design Rule Two: Be predictable.

  • Design Rule Three: Provide a good conceptual model.

  • Design Rule Four: Make the output understandable.

  • Design Rule Five: Provide continual awareness, without annoyance.

  • Design Rule Six: Exploit natural mappings to make interaction understandable and effective.

  As more and more automation enters all aspects of our lives, the challenge for designers is to keep people engaged, to provide the correct amount of natural, environmental information so that people can take advantage of automation to free themselves to do other things, yet can take control when the conditions require it.

  When it comes to intelligent systems, there are problems in maintaining this balance. Foremost is the lack of common ground between people and machines, a problem I believe is fundamental. This is not something that can be cured by new designs: it will take decades of research to understand these issues fully. Someday we may make intelligent agents that are much more animate, more complete. Then, we can start to add sophistication, establish common ground, and allow real conversation to take place. We are a long way away from developing machines that can do this.

  For effective interaction with machines, the machines must be predictable and understandable. People must be able to understand their state, their actions, and what is about to happen. People need to be able to interact in a natural manner. And the awareness and understanding of the machines’ states and activities should be generated in a way that is continuous, unobtrusive, and effective. That’s the bottom line. This demanding set of requirements has not really been achieved by today’s machines. It is the goal to strive for.

  CHAPTER SEVEN

  The Future of

  Everyday Things

  “What if the everyday objects around us came to life? What if they could sense our presence, our focus of attention, and our actions, and could respond with relevant information, suggestions, and actions?” Would you like that? Professor Pattie Maes at MIT’s Media Laboratory hopes you will. She is trying to develop just such devices. “For example,” she says, “we are creating technologies that make it possible for the book you are holding to tell you what passages you may be particularly interested in . . . and the picture of your grandmother on the wall keeps you abreast of how she is doing when you glance up at it.”

  “Mirror, mirror on the wall, Who’s the fairest of them all?” Snow White’s cruel stepmother posed this question to a wondrous, magical mirror that always told the truth, no matter how much it might pain the listener. Today’s technologists are contemplating mirrors that are more considerate and that answer easier questions:

  Mirror, mirror, on the wall,

  Does this clothing match at all?

  The mirror of tomorrow will do things Snow White’s mirror never even dreamed of: share your image with loved ones, sending it to cell phones and computers for them to critique. The modern magical mirror will do more than answer questions or show you off to others. It will change your image: make you look slimmer or drape new clothes over your image so you can see what they look like on you without the bother of trying them on. It will even be able to change your hairstyle.

  Brown and blue are not for you.

  Try this jacket. Use this shoe.

  Smart technologies have the capacity to enhance pleasure, simplify lives, and add to our safety. If only they could really work flawlessly; if only we could learn how to use them.

  Once upon a time, in a different century and a faraway place, I wrote about people who had trouble working their microwave ovens, setting the time on their appliances, turning on and off the correct burners on their stoves, and even opening and shutting doors. The faraway time was the 1980s; the faraway place was England. And the people were just plain ordinary people, children and adults, undereducated and overeducated. I started my book—originally titled The Psychology of Everyday Things and renamed The Design of Everyday Things—with a quotation about the distinguished founder and CEO of a major computer company, who confessed that he couldn’t figure out how to heat a cup of coffee in his company’s microwave oven.

 
