by Jeff Gothelf
The mockups worked great on the iPads: customers tapped, swiped, and chatted with us about the new offering. Three days later, we returned to New York City with feedback written on every sticky note and scrap of paper we could find.
We sorted the notes into groups, and some clear themes emerged. Customer feedback let us conclude that although there was merit to this new business plan, we would need further differentiation from existing products in the marketplace if we were going to succeed.
All told, we spent eight business days developing our hypotheses, creating our MVP, and getting market feedback. This put us in a great position to pivot and refine the product to fit our market segment more effectively.
Wrapping Up
In this chapter, we defined the Minimum Viable Product as the smallest thing you can make to learn whether your hypothesis is valid. In addition, we discussed the various forms an MVP can take and took a closer look at prototyping.
Remember that the entire point of an MVP is to learn. If you focus on what you’re trying to learn and apply your team’s creativity to learning quickly, you’ll be well on your way to creating great MVPs.
In Chapter 6, we take a look at various types of research that you can use to make sure your designs are hitting the mark. We also take a look at how your team can make sense of all the feedback your research will generate.
Chapter 6. Feedback and Research
Research is formalized curiosity. It is poking and prying with a purpose.
—Zora Neale Hurston
It’s now time to put your Minimum Viable Product (MVP) to the test. All of our work up to this point has been based on assumptions; now we must begin the validation process. We use lightweight, continuous, and collaborative research techniques to do this.
Figure 6-1. The Lean UX cycle
Research with users is at the heart of most approaches to User Experience (UX) design. Too often though, teams outsource research work to specialized research teams. And, too often, research activities take place on rare occasions—either at the beginning of a project or at the end. Lean UX solves the problems these tactics create by making research both continuous and collaborative. Let’s dig in to see how to do that.
In this chapter, we cover the following:
Collaborative research techniques with which you can build shared understanding with your team
Continuous research techniques that you can use to build small, informal, qualitative research studies into every iteration
How to use small units of regular research to build longitudinal research studies
How to reconcile contradictory feedback from multiple sources
What artifacts to test and what results you can expect from each of these tests
How to incorporate the voice of the customer throughout the Lean UX cycle
Continuous and Collaborative Research
Lean UX takes basic UX research techniques and overlays two important ideas. First, Lean UX research is continuous. This means you build research activities into every sprint. Instead of being a costly and disruptive “big bang” process, we make it bite-sized so that we can fit it into our ongoing process. Second, Lean UX research is collaborative. This means that you don’t rely on the work of specialized researchers to deliver learning to your team. Instead, research activities and responsibilities are distributed and shared across the entire team. By eliminating the handoff between researchers and team members, we increase the quality of our learning. Our goal in all of this is to create a rich shared understanding across the team.
Collaborative Discovery
Collaborative discovery is the process of working together as a team to test ideas in the market. It is one of the two main cross-functional techniques that create shared understanding on a Lean UX team. (Collaborative design, covered in Chapter 4, is the other.) Collaborative discovery is an approach to research that gets the entire team out of the building—literally and figuratively—to meet with and learn from customers. It gives everyone on the team a chance to see how the hypotheses are tested and, most important, multiplies the number of perspectives the team can use to gather customer insight.
It’s essential that you and your team conduct research together; that’s why we call it collaborative discovery. Outsourcing research dramatically reduces its value: it wastes time, it limits team-building, and it filters the information through deliverables, handoffs, and interpretation. Don’t do it.
Researchers sometimes feel uneasy about this approach. As trained professionals, they are right to point out that they have special knowledge that is important to the research process. We agree. That’s why you should include a researcher on your team if you can. Just don’t outsource the work to that person. Instead, use the researcher as an expert guide to help your team plan their work and lead the team through their research activities. In the same way that Lean UX encourages designers to take a more facilitative approach, Lean UX asks the same of the researcher. Use your expertise to help the team plan good research, ask good questions, and select the right methods for the job. Just don’t do all the research for them.
Collaborative Discovery in the Field
Collaborative discovery is simply a way to get out into the field with your team. Here’s how you do it:
As a team, review your questions, assumptions, hypotheses, and MVPs. Decide as a team what you need to learn.
Working as a team, decide who you’ll need to speak to and observe to address your learning goals.
Create an interview guide (see the sidebar “The Interview Guide”) that you can all use to guide your conversations.
Break your team into research pairs, mixing up the various roles and disciplines within each pair (e.g., try not to pair designers with designers). If you are doing this research over a number of days, try to mix up the interview pairs each day so that people have a chance to share experiences with various team members.
Arm each pair with a version of the MVP.
Send each team out to meet with customers/users.
One team member interviews while the other takes notes.
Begin with questions, conversations, and observations.
Demonstrate the MVP later in the session, and allow the customer to interact with it.
Collect notes as the customer provides feedback.
When the lead interviewer is done, switch roles to give the note taker a chance to ask follow-up questions.
At the end of the interview, ask the customer for referrals to other people who might also provide useful feedback.
The Interview Guide
To prepare for field work, create a small cheat sheet that will fit into your notebook. On your cheat sheet, write the questions and topics that you’ve decided to cover. This way you’ll always be prepared to move the interview along.
When planning your questions, think about a sequential funnel:
First, try to identify if the customer is in your target audience.
Then, try to confirm any problem hypotheses you have for this segment.
Finally, if you have a prototype or mockup with you, show this last to avoid limiting the conversation to your vision of the solution.
A Collaborative Discovery Example
A team we worked with at PayPal set out with an Axure prototype to conduct a collaborative discovery session. The team was made up of two designers, a UX researcher, four developers, and a product manager; they split into teams of two and three. They paired each developer with a nondeveloper. Before setting out, they brainstormed what they’d like to learn from their prototype and used the outcome of that session to write brief interview guides. Their product was targeted at a broad consumer market, so they decided to just head out to the local shopping malls scattered around their office. Each pair targeted a different mall. They spent two hours in the field, stopping strangers, asking them questions, and demonstrating their prototypes. To build up their skill sets, they switched roles (from lead to note taker) an hour into their research.
When they reconvened, each pair read their notes to the rest of the team. Almost immediately they began to see patterns emerge, proving some of their assumptions and disproving others. Using this new information, they adjusted the design of their prototype and headed out again later that afternoon. After a full day of field research, it was clear where their idea had legs and where it needed pruning. When they began the next sprint the following day, every member of the team was working from the same baseline of clarity, having established a shared understanding by means of collaborative discovery the day before.
Continuous Learning
A critical best practice in Lean UX is building a regular cadence of customer involvement. Regularly scheduled conversations with customers minimize the time between hypothesis creation, experiment design, and user feedback, giving you the opportunity to validate your hypotheses quickly. Knowing that you’re never more than a few days away from meaningful customer feedback has a powerful effect on teams: it takes the pressure off of your decision making, because any misstep can be corrected quickly.
Continuous Learning in the Lab: Three Users Every Thursday
Although you can create a standing schedule of fieldwork based on the aforementioned techniques, it’s much easier to bring customers into the building—you just need to be a little creative to get the entire team involved.
We like to use a weekly rhythm to schedule research, as demonstrated in Figure 6-2. We call this “Three, twelve, one,” because it’s based on the following guidelines: three users; by 12 noon; once a week.
Figure 6-2. The Three, twelve, one activity calendar
Here’s how the team’s activities break down:
Monday: Recruiting and planning
Decide, as a team, what will be tested this week. Decide who you need to recruit for tests and start the recruiting process. Outsource this job if at all possible: it’s very time-consuming (see the sidebar “A Word About Recruiting Participants”).
Tuesday: Refine the components of the test
Based on what stage your MVP is in, begin refining the design, the prototype, or the product to a point that will allow you to tell at least one complete story when your customers see it.
Wednesday: Continue refining, write the script, and finalize recruiting
Put the final touches on your MVP. Write the test script that your moderator will follow with each participant. (Your moderator should be someone on the team if at all possible.) Finalize the recruiting and schedule for Thursday’s tests.
Thursday: Test!
Spend the morning testing your MVP with customers. Spend no more than an hour with each customer. Everyone on the team should take notes. The team should plan to watch from a separate location. Review the findings with the entire project team immediately after the last participant is done.
Friday: Plan
Use your new insight to decide whether your hypotheses were validated and what you need to do next.
Simplify Your Test Environment
Many firms have established in-house usability labs, and it used to be that you needed one. These days, you don’t: all you need is a quiet place in your office and a computer with a network connection and a webcam. It also used to be necessary to use specialized usability-testing products to record sessions and connect remote observers. You don’t need those anymore, either. We routinely run tests with remote observers using nothing more exotic than Google Hangouts.
The ability to connect remote observers is a key element. It makes it possible for you to bring the test sessions to team members and stakeholders who can’t be present. This has an enormous impact on collaboration because it spreads understanding of your customers deep into your organization. It’s hard to overstate how powerful this is.
Who Should Watch?
The short answer is your entire team. Like almost every other aspect of Lean UX, usability testing should be a group activity. With the entire team watching the tests, absorbing the feedback, and reacting in real time, you’ll find the need for subsequent debriefings reduced. The team will learn first-hand where their efforts are succeeding and failing. Nothing is more humbling (and motivating) than seeing a user struggle with the software you just built.
A Word About Recruiting Participants
Recruiting, scheduling, and confirming participants is time-intensive. Save your team this overhead by offloading the work to a dedicated recruiter. Some companies hire internal recruiters for this work; others outsource it to a third party. In either case, the cost is worth it. Your recruiter handles screening, scheduling, and replacing no-shows on testing day. Third-party recruiters typically charge per participant they recruit, and you’ll also need to budget for whatever compensation you offer the participants themselves.
Case Study: Three Users Every Thursday at Meetup
One company that has taken the concept of “three users every Thursday” to a new level is Meetup. Based in New York City and under the guidance of Chief Strategy Officer Andres Glusman, Meetup started with a desire to test each and every one of their new features and products.
After pricing some outsourced options, they decided to keep things in-house and take an iterative approach in their search for what they called their MVP: minimal viable process. Initially, Meetup tried testing with the user, moderator, and team all in the same room. They got some decent results from this approach: the company learned a lot about the products being tested, but found that test participants could feel uncomfortable with so many people in the room.
Over time Meetup evolved to having the testing in one room with only the moderator joining the user. The rest of the team would watch the video feed from a separate conference room or at their desks. (Meetup originally used Morae to share the video. Today they use GoToMeeting.)
Meetup doesn’t write testing scripts, because the team isn’t sure what will be tested each day. Instead, product managers and designers talk with the moderator before a test to identify key assumptions and focus areas for the session. During the sessions, the team interacts with the moderator over instant messaging to help guide the conversations with users. The team debriefs immediately after the tests are complete and is able to move forward quickly.
Meetup recruited directly from the Meetup community from day one. For participants outside of their community, the team used a third-party recruiter. Ultimately though, they decided to bring this responsibility in-house, assigning the work to the dedicated researcher the company hired to handle all testing.
The team scaled up from three users once a week to testing every day except Monday. Their core objective was to minimize the time between concept and customer feedback.
Meetup’s practical minimum viable process orientation can be seen in their approach to mobile testing, as well. As their mobile usage numbers grew, Meetup didn’t want to delay testing on mobile platforms while waiting for fancy mobile testing equipment. Instead, the company built their own—for $28 (see Figure 6-3).
Over time, Meetup scaled its minimum viable usability testing process into an impressive program. The company runs approximately 400 test sessions per year at a total cost of about $30,000 (not including staffing costs), with full video and notes coverage for every session. That works out to roughly $75 per session. This is remarkable when you consider that the entire annual budget is roughly equivalent to the cost of running a single major outsourced usability study.
Figure 6-3. An early version of Meetup’s mobile usability testing rig (it’s been refined since then)
Making Sense of the Research: A Team Activity
Whether your team does fieldwork or lab work, research generates a lot of raw data. Making sense of this data can be time-consuming and frustrating, so the process is often handed off to specialists who are asked to synthesize the findings. Don’t do this. Instead, work as hard as you can to make sense of the data as a team.
As soon as possible after the research sessions are over—preferably the same day, if not then the following day—gather the team together for a review session. When the team has reassembled, ask everyone to read their findings to one another. One really efficient way to do this is to transcribe the notes people read out loud onto index cards or sticky notes, and then sort the notes into themes. This process of reading, grouping, and discussing gets everyone’s input out on the table and builds the shared understanding that you seek. With themes identified, you and your team can then determine the next steps for your MVP.
Confusion, Contradiction, and (Lack of) Clarity
As you and your team collect feedback from various sources and try to synthesize your findings, you will inevitably come across situations in which your data presents you with contradictions. How do you make sense of it all? Here are a couple of ways to maintain your momentum and ensure that you’re maximizing your learning:
Look for patterns