by Steve Krug
For now let’s talk about delight, learnability, and memorability and how they apply to mobile apps.
Delightful is the new black
What is this “delight” stuff, anyway?
Delight is a bit hard to pin down; it’s more one of those “I’ll know it when I feel it” kind of things. Rather than a definition, it’s probably easier to identify some of the words people use when describing delightful products: fun, surprising, impressive, captivating, clever, and even magical.6
6 My personal standard for a delightful app tends to be “does something you would have been burned at the stake for a few hundred years ago.”
Delightful apps usually come from marrying an idea about something people would really enjoy being able to do, but don’t imagine is possible, with a bright idea about how to use some new technology to accomplish it.
SoundHound is a perfect example.
Not only can it identify that song that you hear playing wherever you happen to be, but it can display the lyrics and scroll them in sync with the song.
And Paper is not your average drawing app. Instead of dozens of tools with thousands of options, you get five tools with no options. And each one is optimized to create things that look good.
Building delight into mobile apps has become increasingly important because the app market is so competitive. Just doing something well isn’t good enough to create a hit; you have to do something incredibly well. Delight is sort of like the extra credit assignment of user experience design.
Making your app delightful is a fine objective. Just don’t focus so much attention on it that you forget to make it usable, too.
Apps need to be learnable
One of the biggest problems with apps is that if they have more than a few features, they may not be very easy to learn.
Take Clear, for example. It’s an app for making lists, like to-do lists. It’s brilliant, innovative, beautiful, useful, and fun to use, with a clean minimalist interface. All of the interactions are elegantly animated, with sophisticated sound effects. One reviewer said, “It’s almost like I’m playing a pinball machine while I’m staying productive.”
The problem is that the very things that make it so much fun to use (the innovative interactions, gestures, and navigation) also mean there's a lot to learn.
With most apps, if you get any instructions at all it’s usually one or two screens when you first launch the app that give a few essential hints about how the thing works. But it’s often difficult or impossible to find them again to read later.
And if help exists at all (and you can find it), it's often just one short page of text, a link to the developer's site with no help to be found, or a customer support page that gives you an email address where you can send your questions.
This can work for apps that are only doing a very few things, but as soon as you try to create something that has a lot of functionality—and particularly any functions that don’t follow familiar conventions or interface guidelines—it’s often not enough.
The people who made Clear have actually done a very good job with training compared to most apps. The first time you use it, you tap your way through a nicely illustrated ten-screen quick tour of the main features.
This is followed by an ingenious tutorial that’s actually just one of their lists.
Each item in the list tells you something to try, and by the time you’re done you’ve practiced using almost all of the features.
But when I’ve used it to do demo usability tests during my presentations, it hasn’t fared so well.
I give the participant/volunteer a chance to learn about the app by reading the description in the app store, viewing the quick tour, and trying the actions in the tutorial. Then I ask them to do the type of primary task the app is designed for: create a new list called “Chicago trip” with three items in it — Book hotel, Rent car, and Choose flight.
So far, no one has succeeded.
Even though it’s shown in the slide show on the way in, people don’t seem to get the concept that there are levels: the level of lists, the level of items in lists, and the level of settings. And even if they remember seeing it, they still can’t figure out how to navigate between levels. And if you can’t figure that out, you can’t get to the Help screens. Catch-22.
That’s not to say that no one in the real world learns how to use it. It gets great reviews and is consistently a best seller. But I have to wonder how many people who bought it have never mastered it, or how many more sales they could make if it were easier to learn.
And this is a company that’s put a lot of effort into training and help. Most don’t.
You need to do better than most, and usability testing will help you figure out how.
Apps need to be memorable, too
There’s one more attribute that’s important: memorability. Once you’ve figured out how to use an app, will you remember how to use it the next time you try or will you have to start over again from scratch?
I don’t usually talk much about memorability because I think the best way to make things easy to relearn is to make them incredibly clear and easy to learn in the first place. If it’s easy to learn the first time, it’s easy to learn the second time.
But it’s certainly a serious problem with some apps.
One of my favorite drawing apps is ASketch. I love this app because no matter what you try to draw and how crudely you draw it, it ends up looking interesting.
But for months, each time I opened it I couldn’t remember how to start a new drawing.
In fact, I couldn’t remember how to get to any of the controls. To maximize the drawing space there weren’t any icons on the screen.
I’d try all the usual suspects: double tap, triple tap, tap near the middle at the top or bottom of the screen, various swipes and multi-finger taps, and finally I’d hit on it. But by the next time I went to use it I’d forgotten what the trick was again.
Memorability can be a big factor in whether people adopt an app for regular use. Usually when you purchase one, you’ll be willing to spend some time right away figuring out how to use it. But if you have to invest the same effort the next time, it’s unlikely to feel like a satisfying experience. Unless you’re very impressed by what it does, there’s a good chance you’ll abandon it—which is the fate of most apps.
Life is cheap (99 cents) on mobile devices.
Usability testing on mobile devices
For the most part, doing usability testing on mobile devices is exactly the same as the testing I described in Chapter 9.
You’re still making up tasks for people to do and watching them try to do them. You still prompt them to say what they’re thinking while they work. You still need to keep quiet most of the time and save your probing questions for the end. And you should still try to get as many stakeholders as possible to come and observe the tests in person.
Almost everything that’s different when you’re doing mobile testing isn’t about the process; it’s about logistics.
The logistics of mobile testing
When you’re doing testing on a personal computer, the setup is pretty simple:
The facilitator looks at the same screen as the participant.
Screen sharing software allows the observers to see what’s happening.
Screen recording software creates a video of the session.
But if you’ve ever tried doing tests on mobile devices, you know that the setup can get very complicated: document cameras, Webcams, hardware signal processors, physical restraints (well, maybe not physical restraints, but “Don’t move the device beyond this point” markers to keep the participant within view of a camera), and even things called sleds and goosenecks.
Here are some of the issues you have to deal with:
Do you need to let the participants use their own devices?
Do they need to hold the device naturally, or can it be sitting on a table or propped up on a stand?
What do the observers need to see (e.g., just the screen, or both the screen and the participant’s fingers so they can see their gestures)? And how do you display it in the observation room?
How do you create a recording?
One of the main reasons why mobile testing is complicated is that some of the tools we rely on for desktop testing don’t exist yet for mobile devices. As of this writing, robust mobile screen recording and screen sharing apps aren’t available, mainly because the mobile operating systems tend to prohibit background processes. And the devices don’t really have quite enough horsepower to run them anyway.
I expect this to change before long. With so many mobile sites and apps to test, there are already a lot of companies trying to come up with solutions.
My recommendations
Until better technology-based solutions come along, here’s what I’d lean toward:
Use a camera pointed at the screen instead of mirroring. Mirroring is the same as screen sharing: It displays what’s on the screen. You can do it with software (like Apple’s AirPlay) or hardware (using the same kind of cable you use to play a video from your phone or tablet on a monitor or TV).
But mirroring isn’t a good way to watch tests done on touch screen devices, because you can’t see the gestures and taps the participant is making. Watching a test without seeing the participant’s fingers is a little like watching a player piano: It moves very fast and can be hard to follow. Seeing the hand and the screen is much more engaging.
If you’re going to capture fingers, there’s going to be a camera involved. (Some mirroring software will show dots and streaks on the screen, but it’s not the same thing.)
Attach the camera to the device so the user can hold it naturally. In some setups, the device sits on a table or desk and can’t be moved. In others, the participant can hold the device, but they’re told to keep it inside an area marked with tape. The only reason for restricting movement of the device is to make it easier to point a camera at it and keep it in view.
If you attach the camera to the device, the participant can move it freely and the screen will stay in view and in focus.
Don’t bother with a camera pointed at the participant. I’m really not a fan of the face camera. Some observers like seeing the participant’s face, but I think it’s actually a distraction. I’d much rather have observers focus on what’s happening on the screen, and they can almost always tell what the user is feeling from their tone of voice anyway.
Adding a second camera inevitably makes the configuration much more complicated, and I don’t think it’s worth the extra complexity. Of course, if your boss insists on seeing faces, show faces.
Proof of concept: My Brundlefly7 camera
7 Brundlefly is the word Jeff Goldblum’s character (Seth Brundle) in The Fly uses to describe himself after his experiment with a teleportation device accidentally merges his DNA with that of a fly.
Out of curiosity, I built myself a camera rig by merging a clip from a book light with a Webcam. It weighs almost nothing and captures the audio with its built-in microphone. Mine cost about $30 in parts and took about an hour to make. I’m sure somebody will manufacture something similar—only much better—before long. I’ll put instructions for building one yourself online at rocketsurgerymadeeasy.com.
Lightweight webcam + lightweight clamp and gooseneck = Brundlefly
Attaching a camera to the device creates a very easy-to-follow view. The observers get a stable view of the screen even if the participant is waving it around.
I think it solves most of the objections to other mounted-camera solutions:
They’re heavy and awkward. It weighs almost nothing and barely changes the way the phone feels in your hand.
They’re distracting. It’s very small (smaller than it looks in the photo) and is positioned out of the participant’s line of sight, which is focused on the phone.
Nobody wants to attach anything to their phone. Sleds are usually attached to phones with Velcro or double-sided tape. This uses a padded clamp that can’t scratch or mar anything but still grips the device firmly.
One limitation of this kind of solution is that it is tethered: It requires a USB extension cable running from the camera to your laptop. But you can buy a long extension inexpensively.
The rest of the setup is very straightforward:
Connect the Brundlefly to the facilitator’s laptop via USB.
Open something like AmCap (on a PC) or QuickTime Player (on a Mac) to display the view from the Brundlefly. The facilitator will watch this view.
Share the laptop screen with the observers using screen sharing (GoToMeeting, WebEx, etc.)
Run a screen recorder (e.g., Camtasia) on the computer in the observation room. This reduces the burden on the facilitator’s laptop.
That’s it.
Finally...
In one form or another, it seems clear that mobile is where we’re going to live in the future, and it provides enormous opportunities to create great user experiences and usable things. New technologies and form factors are going to be introduced all the time, some of them involving dramatically different ways of interacting.8
8 Personally, I think talking to your computer is going to be one of the next big things. Recognition accuracy is already amazing; we just need to find ways for people to talk to their devices without looking, sounding, and feeling foolish. Someone who’s seriously working on the problems should give me a call; I’ve been using speech recognition software for 15 years, and I have a lot of thoughts about why it hasn’t caught on.
Just make sure that usability isn’t being lost in the shuffle. And the best way to do this is by testing.
Chapter 11. Usability as common courtesy
WHY YOUR WEB SITE SHOULD BE A MENSCH1
1 Mensch: a German-derived Yiddish word originally meaning “human being.” A person of integrity and honor; “a stand-up guy”; someone who does the right thing.
Sincerity: that’s the hard part. If you can fake that, the rest is easy.
—OLD JOKE ABOUT A HOLLYWOOD AGENT
Some time ago, I was booked on a flight to Denver. As it happened, the date of my flight also turned out to be the deadline for collective bargaining between the airline I was booked on and one of its unions.
Concerned, I did what anyone would do: (a) started checking Google News every hour to see if a deal had been reached, and (b) visited the airline’s Web site to see what they were saying about it.
I was shocked to discover that not only was there nothing about the impending strike on the airline’s Home page, but there wasn’t a word about it to be found anywhere on the entire site. I searched. I browsed. I scrolled through all of their FAQ lists. Nothing but business as usual. “Strike? What strike?”
Now, on the morning of a potential airline strike, you have to know that there’s really only one frequently asked question related to the site, and it’s being asked by hundreds of thousands of people who hold tickets for the coming week: What’s going to happen to me?
I might have expected to find an entire FAQ list dedicated to the topic:
Is there really going to be a strike?
What’s the current status of the talks?
If there is a strike, what will happen?
How will I be able to rebook my flight?
What will you do to help me?
Nothing.
What was I to take away from this?
Either (a) the airline had no procedure for updating their Home page for special circumstances, (b) for some legal or business reason they didn’t want to admit that there might be a strike, (c) it hadn’t occurred to them that people might be interested, or (d) they just couldn’t be bothered.
No matter what the real reason was, they did an outstanding job of depleting my goodwill towards both the airline and their Web site. Their brand—which they spend hundreds of millions of dollars a year polishing—had definitely lost some of its luster for me.
Most of this book has been about building clarity into Web sites: making sure that users can understand what it is they’re looking at—and how to use it—without undue effort. Is it clear to people? Do they “get it”?
But there’s another important component to usability: doing the right thing—being considerate of the user. Besides “Is my site clear?” you also need to be asking, “Does my site behave like a mensch?”
The reservoir of goodwill
I’ve always found it useful to imagine that every time we enter a Web site, we start out with a reservoir of goodwill. Each problem we encounter on the site lowers the level of that reservoir. Here, for example, is what my visit to the airline site might have looked like:
I enter the site.
My goodwill is a little low, because I’m not happy that their negotiations may seriously inconvenience me.
I glance around the Home page.
It feels well organized, so I relax a little. I’m confident that if the information is here, I’ll be able to find it.
There’s no mention of the strike on the Home page.
I don’t like the fact that it feels like business as usual.
There’s a list of five links to News stories on the Home page but none are relevant.
I click on the Press Releases link at the bottom of the list.