Alan Cooper, Robert Reimann, and David Cronin, About Face 3: The Essentials of Interaction Design
Here’s another example, this time from the Mac: When you download a file from the Internet, the downloading file appears on the desktop as an icon with a small dynamically updating progress bar, indicating visually what percentage has downloaded.
A final example of RVMF is from the computer gaming world: Sid Meier’s Civilization. This game provides dozens of examples of RVMF in its main interface, which is a map of the historical world that you, as a leader of an evolving civilization, are trying to build and conquer. Civilization uses RVMF to indicate a half-dozen things about a city, all represented visually. If a city is more advanced, its architecture is more modern. If it is larger, the icon is larger and more embellished. If there is civil unrest, smoke rises from the city. Individual troop and civilian units also show status visually, by way of tiny meters showing unit health and strength. Even the landscape has RVMF: Dotted lines marking spheres of influence shift as units move and cities grow. Terrain changes as roads are laid, forests are cleared, and mountains are mined. Although dialogs exist in the game, much of the information needed to understand what is going on is communicated clearly with no words or dialogs whatsoever.
Figure 25-8 This pane from a Cooper design for a long-term health-care information system is a good example of RVMF. The diagram is a representation of all the rooms in the facility. Color-coding indicates male, female, empty, or mixed-gender rooms; numbers indicate empty beds; tiny boxes between rooms indicate shared bathrooms. Black triangles indicate health issues, and a tiny “H” means a held bed. This RVMF is supplemented by ToolTips, which show the room number and the names of the room’s occupants, and highlight any important notices about the room or its residents. A numeric summary of rooms, beds, and employees is given at the top. This display has a short learning curve. Once mastered, it allows nurses and facility managers to understand their facility’s status at a glance.
Imagine if all the objects on your desktop or in your application that had pertinent status information were able to display their status in this manner. Printer icons could show how near they were to completing your print job. Disks and removable media icons could show how full they were. When an object was selected for drag and drop, all the places that could receive it would visually highlight to announce their receptiveness.
Think about the objects in your application, what attributes they have — especially dynamically changing ones — and what kind of status information is critical for your users. Figure out how to create a representation of this. After a user notices and learns this representation, it tells him what is going on at a glance. (There should also be a way to get fully detailed information if the user requests it.) Put this information into main application windows in the form of RVMF and see how many dialogs you can eliminate from routine use!
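To make this concrete, here is a minimal sketch, not taken from the book, of how the printer example above might be rendered as RVMF in a web-style interface. The PrinterStatus type and renderPrinterIcon function are hypothetical names invented for illustration; the point is only that dynamic attributes map directly onto visual properties of the icon, with full detail still available on demand.

```typescript
// Hypothetical sketch: an application object exposes its dynamically changing
// status, and the icon renders that status visually instead of raising dialogs.

interface PrinterStatus {
  jobsQueued: number;
  percentComplete: number;  // progress of the current job, 0-100
  outOfPaper: boolean;
}

function renderPrinterIcon(el: HTMLElement, status: PrinterStatus): void {
  // Map status directly onto visual attributes: a progress ring driven by a
  // CSS custom property, a badge for queued jobs, and an attention color.
  el.style.setProperty("--progress", `${status.percentComplete}%`);
  el.dataset.badge = status.jobsQueued > 0 ? String(status.jobsQueued) : "";
  el.classList.toggle("needs-attention", status.outOfPaper);
  // Fully detailed information remains available on request, e.g. as a ToolTip.
  el.title = status.outOfPaper
    ? "Out of paper"
    : `${status.jobsQueued} job(s) queued, ${status.percentComplete}% complete`;
}
```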
One important point does need to be made about rich modeless visual feedback. It isn’t for beginners. Even if you add ToolTips to textually describe the details of any visual cues you add (which you should), RVMF requires users to perform work to discover it and decode its meaning. RVMF is something that users will begin to use over time. When they do, they’ll think it’s amazing; but, in the meantime, they will need the support of menus and dialogs to find what they’re looking for. This means that RVMF used to replace alerts and warnings of serious trouble must be extraordinarily clear to users. Make sure that this kind of status is visually emphasized over less critical, more informational RVMF.
Audible feedback
In data-entry environments, clerks sit for hours in front of computer screens entering data. These users may well be examining source documents and typing by touch instead of looking at the screen. If a clerk enters something erroneous, he needs to be informed of it via both auditory and visual feedback. The clerk can then use his sense of hearing to monitor the success of his inputs while he keeps his eyes on the document.
The kind of auditory feedback we’re proposing is not the same as the beep that accompanies an error message box. In fact, it isn’t a beep at all. The auditory indicator we propose as feedback for a problem is silence. The problem with much current audible feedback is the still-prevalent idea that negative feedback, rather than positive feedback, is what should be signaled audibly.
Negative audible feedback: Announcing user failure
People frequently counter the idea of audible feedback with arguments that users don’t like it. Users are offended by the sounds that computers make, and they don’t like to have their computer beeping at them. Despite the fact that Microsoft and Apple have tried to improve the quality of alert sounds by hiring sound designers (including the legendary Brian Eno for Windows 95), all the warm ambience in the world doesn’t change the fact that they are used to convey negative, often insulting messages.
Emitting noise when something bad happens is called negative audible feedback.
On most systems, error message boxes are normally accompanied by a shrill beep, and audible feedback has thus become strongly associated with them. That beep is a public announcement of a user’s failure. It explains to all within earshot that you have done something execrably stupid. It is such a hateful idiom that most software developers now have an unquestioned belief that audible feedback is bad and should never again be considered as a part of interface design. Nothing could be further from the truth. It is the negative aspect of the feedback that presents problems, not the audible aspect.
Negative audible feedback has several things working against it. Because the negative feedback is issued at a time when a problem is discovered, it naturally takes on the characteristics of an alarm. Alarms are designed to be purposefully loud, discordant, and disturbing. They are supposed to wake sound sleepers from their slumbers when their house is on fire and their lives are at stake. They are like insurance: We hope that they will never be heard. Unfortunately, users are constantly doing things that programs can’t handle, so these actions have become part of the normal course of interaction. Alarms have no place in this normal relationship, the same way we don’t expect our car alarms to go off whenever we accidentally change lanes without using our turn indicators. Perhaps the most damning aspect of negative audible feedback is the implication that success must be greeted with silence.
Humans like to know when they are doing well. They need to know when they are doing poorly, but that doesn’t mean that they like to hear about it. Negative feedback systems are simply appreciated less than positive feedback systems.
Given the choice of no noise versus noise for negative feedback, people will choose the former. Given the choice of no noise versus soft and pleasant noises for positive feedback, however, many people will choose the feedback. We have never given our users a chance by putting high-quality, positive audible feedback in our programs, so it’s no wonder that people associate sound with bad interfaces.
Positive audible feedback
Almost every object and system outside the world of software offers sound to indicate success rather than failure. When we close the door, we know that it is latched when we hear the click, but silence tells us that it is not yet secure. When we converse with someone and they say, “Yes” or “Uh-huh,” we know that they have, at least minimally, registered what was said. When they are silent, however, we have reason to believe that something is amiss. When we turn the key in the ignition and get silence, we know we’ve got a problem. When we flip the switch on the copier and it stays coldly silent instead of humming, we know that we’ve got trouble. Even most equipment that we consider silent makes some noise: Turning on the stovetop returns a hiss of gas and a gratifying “whoomp” as the pilot ignites the burner. Electric ranges are inherently less friendly and harder to use because they lack that sound; they require indicator lights to tell us of their status.
When success with our tools yields a sound, it is called positive audible feedback.
Our software tools are mostly silent; all we hear is the quiet click of the keyboard.
Hey! That’s positive audible feedback. Every time you press a key, you hear a faint but positive sound. Keyboard manufacturers could make perfectly silent keyboards, but they don’t because we depend on audible feedback to tell us how we are doing. The feedback doesn’t have to be sophisticated — those clicks don’t tell us
much — but they must be consistent. If we ever detect silence, we know that we have failed to press the key. The true value of positive audible feedback is that its absence is an extremely effective problem indicator.
The effectiveness of positive audible feedback originates in human sensitivity.
Nobody likes to be told that they have failed. Error message boxes are negative feedback, telling the user that he has done something wrong. Silence can ensure that the user knows this without actually being told of the failure. It is remarkably effective, because the software doesn’t have to insult the user to accomplish its ends.
Our software should give us constant, small, audible cues just like our keyboards.
Our applications would be much friendlier and easier to use if they issued barely audible but easily identifiable sounds when user actions are correct. The program could issue a reassuring click every time the user enters valid input to a field, and an affirming tone when a form has been successfully completed. If an application doesn’t understand some input, it should remain silent, subtly informing the user of the problem, allowing her to correct the input without embarrassment or ego-bruising.
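As a rough illustration, and not a recipe from the book, the following sketch uses the standard Web Audio API to play a barely audible click when a field receives valid input and to stay silent otherwise; the playConfirmationClick and onFieldInput names are hypothetical.

```typescript
// Hypothetical sketch of positive audible feedback: valid input earns a quiet
// click, invalid input earns silence, so the absence of sound is the signal.

const audioCtx = new AudioContext();

function playConfirmationClick(volume = 0.05): void {
  const osc = audioCtx.createOscillator();
  const gain = audioCtx.createGain();
  osc.frequency.value = 1200;              // short, high, unobtrusive tick
  gain.gain.value = volume;                // well below music or speech levels
  osc.connect(gain).connect(audioCtx.destination);
  osc.start();
  osc.stop(audioCtx.currentTime + 0.03);   // roughly a 30 ms burst
}

function onFieldInput(value: string, isValid: (v: string) => boolean): void {
  if (isValid(value)) {
    playConfirmationClick();               // positive audible feedback
  }
  // On invalid input: no beep, no dialog. Silence invites correction.
}
```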
Whenever a user starts to drag an icon, the computer could issue a low-volume sound reminiscent of sliding as the object is dragged. When it is dragged over pliant areas, an additional percussive tap could indicate this collision. When the user finally releases the mouse button, he is rewarded with a soft, cheerful “plonk” from the speakers for a success or with silence if the drop was not meaningful.
As with visual feedback, computer games tend to excel at positive audio feedback.
Mac OS X also does a good job with subtle positive audio feedback for activities like document saves and drag and drop. Of course, the audible feedback must be at the right volume for the situation. Windows and the Mac offer a standard volume control, so one obstacle to beneficial audible feedback has been overcome, but audible feedback should also not overpower music playing on the computer.
Rich modeless feedback is one of the greatest tools at the disposal of interaction designers. Replacing annoying, useless dialogs with subtle and powerful modeless communication can make the difference between a program users will despise and one they will love. Think of all the ways you might improve your own applications with RVMF and other mechanisms of modeless feedback!
Chapter 26: Designing for Different Needs
As we discussed in Part I, personas and scenarios help us focus our design efforts on the goals, behaviors, needs, and mental models of real users. In addition to the specific focus that personas can give a design effort, there are some consistent and generalizable patterns of user needs that should inform the way our products are designed. In this chapter, we’ll explore some strategies for serving these well-known needs.
Command Vectors and Working Sets
Two concepts are particularly useful in sorting out the needs of users with different levels of experience: Command vectors and working sets. Command vectors are distinct techniques for allowing users to issue instructions to the program. Direct manipulation handles, drop-down and pop-up menus, toolbar controls, and keyboard accelerators are all examples of command vectors.
Good user interfaces provide multiple command vectors: Critical application functions are offered as menu commands, toolbar commands, keyboard accelerators, and direct manipulation controls, each with the parallel capability to invoke a particular command. This redundancy enables users of different skill sets and attitudes to command the program according to their abilities and inclinations.
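To show the idea in code, here is a minimal, hypothetical command registry that wires one function to a menu item, a toolbar button, and a keyboard accelerator. The Command interface and the registerCommand, addMenuItem, addToolbarButton, and bindShortcut functions are invented for this sketch and do not correspond to any particular toolkit.

```typescript
// Hypothetical sketch: one command, several parallel command vectors.

interface Command {
  id: string;
  label: string;         // pedagogic: full wording shown in the menu
  icon?: string;         // immediate: shown on the toolbar
  accelerator?: string;  // head vector, e.g. "Ctrl+P"
  run: () => void;
}

// Stand-ins for a real toolkit's widget wiring.
function addMenuItem(label: string, accel: string | undefined, run: () => void): void { /* ... */ }
function addToolbarButton(icon: string, run: () => void): void { /* ... */ }
function bindShortcut(accel: string, run: () => void): void { /* ... */ }

const commands = new Map<string, Command>();

function registerCommand(cmd: Command): void {
  commands.set(cmd.id, cmd);
  addMenuItem(cmd.label, cmd.accelerator, cmd.run);             // menu vector
  if (cmd.icon) addToolbarButton(cmd.icon, cmd.run);            // toolbar vector
  if (cmd.accelerator) bindShortcut(cmd.accelerator, cmd.run);  // keyboard vector
}

// Every vector invokes the same underlying function, so users choose whichever
// vector matches their experience and inclination.
registerCommand({
  id: "print",
  label: "Print…",
  icon: "printer",
  accelerator: "Ctrl+P",
  run: () => console.log("printing"),
});
```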
Immediate and pedagogic vectors
Direct manipulation controls, like pushbuttons and toolbar controls, are immediate vectors. There is no delay between clicking a button and seeing the results of the function. Direct manipulation also has an immediate effect on the information without any intermediary. Neither menus nor dialog boxes have this immediate property. Each one requires an intermediate step, sometimes more than one.
Some command vectors offer more support to new users. Typically, menus and dialog boxes offer the most, which is why we refer to them as pedagogic vectors.
Beginners avail themselves of the pedagogy of menus as they get oriented in a new program, but perpetual intermediates often want to leave them behind to find more immediate and efficient vectors.
Working sets and personas
Because each user unconsciously memorizes the commands he uses frequently, every perpetual intermediate carries a moderate subset of memorized commands and features, a working set. The commands that comprise any user’s working set are unique to that individual, although his working set will likely overlap significantly with the working sets of other users who exhibit similar use patterns. In Excel, for example, almost every user will enter formulas and labels, specify fonts, and print; but Sally’s working set might include graphs, whereas Elliot’s working set includes linked spreadsheets.
Although, strictly speaking, there is no such thing as a standard working set that will cover the needs of all users, research and modeling of users and their use patterns can yield a smaller subset of functions that designers can be reasonably confident are accessed frequently by most users. This minimal working set can be determined via Goal-Directed Design methods: by using scenarios to discover the functional needs of your personas. These needs translate directly to the contents of the minimal working set.
The commands in any person’s working set are those they most often use. Users want those commands to be especially quick and easy to invoke. This means that the designer must, at least, provide immediate command vectors for the minimal working set of the most likely users of the application.
Although a program’s minimal working set is almost certainly part of each user’s full working set, individual user preferences and job requirements will dictate which additional features are included. Even custom software written for corporate operations can offer a range of features from which each user can pick and choose.
This means that the designer must, while providing immediate access to the minimal working set, also provide a means for promoting other commands to immediate vectors. Similarly, commands offered through immediate vectors also require pedagogic vectors so that beginners can learn the interface. This implies that most functions in the interface should have multiple command vectors.
There is an exception to the rule of multiple vectors: Dangerous commands (like Erase All, Clear, Abandon Changes, and so on) should not have easy, parallel command vectors. Instead, they need to be protected within menus and dialog boxes (in keeping with our design principle from Chapter 10: Hide the ejector seat levers).
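Continuing the hypothetical registry sketched above, a promotion mechanism might look something like the following. The PromotableCommand type and promoteToToolbar function are invented names; the dangerous flag encodes the ejector-seat exception just described.

```typescript
// Hypothetical sketch: let users promote favorite commands to an immediate
// vector (a toolbar button), but refuse to promote dangerous commands.

interface PromotableCommand {
  id: string;
  label: string;
  dangerous: boolean;  // e.g. "Erase All", "Abandon Changes"
  run: () => void;
}

function promoteToToolbar(cmd: PromotableCommand, toolbar: PromotableCommand[]): boolean {
  if (cmd.dangerous) {
    return false;      // keep the ejector seat levers behind menus and dialogs
  }
  if (!toolbar.some((c) => c.id === cmd.id)) {
    toolbar.push(cmd); // now reachable in a single click
  }
  return true;
}
```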
Graduating Users from Beginners to Intermediates
Donald Norman provides another useful perspective on command vectors. In The Design of Everyday Things, Norman uses the phrases information in the world and information in the head to refer to different ways that users access information.
When he talks about information in the world, Norman refers to situations in which there is sufficient information available in an environment or interface to accomplish something. A kiosk showing a printed map of downtown, for example, is information in the world. We don’t have to bother remembering exactly where the Transamerica Building is, because we can find it by reading a map. Opposing this is information in your head, which refers to knowledge that you have learned or memorized, like the back-alley shortcut that isn’t printed on any map. Information in your head is much faster and easier to use than information in the world, but you are responsible for ensuring that you learn it, that you don’t forget it, and that it stays up to date. Information in the world is slower and more cumbersome, but very dependable.
World vectors and head vectors
A pedagogic vector is necessarily filled with information in the world, which is why it is a world vector. Conversely, keyboard accelerators constitute a head vector because using them requires a user to have filled his head with information about the functions and their keyboard equivalents. World vectors are required by beginners and by more experienced users accessing advanced or seldom-used functions.
Head vectors are used extensively by intermediates and even more so by experts.
For example, when you first moved into your neighborhood, you probably had to use a map — a world vector. After living there a couple of days, you abandoned the map because you had learned how to get home — a head vector. On the other hand, even though you know your house intimately, when you have to adjust the temperature setting on the water heater, you need to read the instructions — a world vector — because you didn’t bother to memorize them when you moved in.