by Jeff Gothelf
As you review the research, keep an eye out for patterns in the data. Patterns reveal recurring user opinions and behaviors that are worth exploring further. If a piece of feedback doesn’t fit a pattern, it is likely an outlier.
Place your outliers in a “parking lot”
Tempting as it is to ignore outliers (or to try to serve them in your solution), don’t do it. Instead, create a parking lot or backlog. As your research progresses (remember: you’re doing this every week), you might discover other outliers that, together, form a new pattern. Be patient.
Verify with other sources
If you’re not convinced the feedback you’re seeing through one channel is valid, look for it in other channels. Are customer support emails reflecting the same concerns as your usability studies? Is the value of your prototype echoed by customers both inside and outside your office? If not, your sample might have been skewed.
Identifying Patterns Over Time
Typical UX research programs are structured to produce conclusive answers: you plan enough research to definitively answer a question or set of questions. Lean UX research prioritizes being continuous, which means you structure your research activities very differently. Instead of running big studies, you see a small number of users every week. As a result, some questions might remain open across several weeks. The flip side is that interesting patterns can reveal themselves over time.
For example, over the course of regular test sessions from 2008 to 2011, the team at TheLadders watched an interesting change in their customers’ attitudes. In 2008, when they first began meeting with job seekers on a regular basis, they would discuss various ways to communicate with employers. One of the options they proposed was SMS. At the time, the audience, made up of high-income earners in their late 40s and early 50s, showed a strong disdain for SMS as a legitimate communication method. To them, it was something their kids did (and that perhaps they did with their kids), but it was certainly not a “proper” way to conduct a job search.
By 2011, though, text messaging had taken off in the United States. As it gained acceptance in business culture, audience attitudes began to soften. Week after week, as the team sat with job seekers, they saw opinions about SMS change: job seekers had become far more likely to use SMS in a mid-career job search than they would have been just a few years earlier.
The team at TheLadders would never have recognized this as an audience-wide trend were it not for two things. First, they were speaking with a sample of their audience week in and week out. Second, the team took a systematic approach to investigating long-term trends. As part of their regular interaction with customers, they always asked a standard set of level-setting questions to capture the “vital signs” of the job seeker’s search, no matter what other questions, features, or products they were testing. By doing this, the team was able to establish a baseline and identify bigger trends over time. The findings about SMS would not have changed the team’s understanding of their audience if they’d represented just a few anecdotal data points. But aggregated over time, these data points became part of a very powerful dataset.
When planning your research, it’s important to consider not just the urgent questions—the things you want to learn over the next few weeks. You should also consider the big questions. You still need to plan big standalone studies to get at some of these questions. But with some planning, you should be able to work a lot of long-term learning into your weekly studies.
Test What You’ve Got
To maintain a regular cadence of user testing, your team must adopt a “test what you’ve got” policy: whatever is ready on testing day is what goes in front of users. This policy liberates your team from rushing to meet testing-day deadlines. Instead, you’ll use your weekly test sessions to get feedback on whatever is ready, generating insight at every stage of design and development. You must, however, set expectations properly for the type of feedback each type of artifact can generate.
Sketches
Feedback collected on sketches helps you validate the value of your concept (Figure 6-4). Sketches are great conversation prompts to support interviews, and they make abstract concepts concrete, which helps generate shared understanding. What you won’t get from sketches is detailed, step-by-step feedback on a process, insight about specific design elements, or meaningful feedback on copy choices. You won’t be able to learn much (if anything) about the usability of your concept.
Figure 6-4. Example of a sketch that can be used with customers
Static wireframes
Showing test participants wireframes (Figure 6-5) lets you assess the information hierarchy and layout of your experience. In addition, you’ll get feedback on taxonomy, navigation, and information architecture.
You’ll receive the first trickles of workflow feedback, but at this point your test participants are focused primarily on the words on the page and the selections they’re making. Wireframes provide a good opportunity to begin testing copy choices.
Figure 6-5. Example of a wireframe
High-fidelity visual mockups (not clickable)
When you move to high-fidelity visual-design assets, the feedback becomes much more detailed. Test participants will be able to respond to branding, aesthetics, and visual hierarchy, as well as to figure/ground relationships, the grouping of elements, and the clarity of your calls to action. They will also (almost certainly) weigh in on the effectiveness of your color palette. (See Figure 6-6.)
Nonclickable mockups still don’t let your customers interact naturally with the design or experience the workflow of your solution. Instead of watching users click, tap, and swipe, you need to ask them what they would expect to happen and then validate those responses against your planned experience.
Figure 6-6. Example of mockup from Skype in the Classroom (design by Made By Many)
Clickable mockups
Clickable mockups, like the one shown in Figure 6-6, increase the fidelity of the interaction by linking a set of static assets into a simulation of the product experience. Visually, they can be high, medium, or even low fidelity. The value here is not visual polish, but the ability to simulate workflow and to observe how users interact with your designs.
Designers used to have limited tool choices for creating clickable mockups, but recent years have seen a huge proliferation of tools. Some are optimized for mobile mockups, others for the web, and still others are platform-neutral. Most have no ability to work with data, but with some (like Axure), you can create basic data-driven or conditional-logic simulations. Additionally, design tools such as Sketch and Adobe XD include “mirror” features that let you see your design work in real time on mobile devices and link screens together to create prototypes without dedicated prototyping tools.
Coded prototypes
Coded prototypes are useful because they deliver the highest functional fidelity, making for the closest-to-real simulation you can put in front of your users. A coded prototype replicates the design, behavior, and workflow of your product. You can test with real data. You can integrate with other systems. All of this makes coded prototypes very powerful; it also makes them the most complex to produce. But because the feedback you gain is based on such a close simulation, you can treat it as more authoritative than feedback from other simulations.
Monitoring Techniques for Continuous and Collaborative Discovery
In the preceding discussions, we looked at ways to use qualitative research on a regular basis to evaluate your hypotheses. However, as soon as you launch your product or feature, your customers will begin giving you constant feedback—and not only on your product. They will tell you about themselves, about the market, and about the competition. This insight is invaluable—and it comes into your organization from every corner. Seek out these treasure troves of customer intelligence within your organization and harness them to drive your ongoing product design and research, as depicted in Figure 6-7.
Figure 6-7. Customers can provide feedback through many channels
Customer Service
Customer support agents talk to more customers in a single day than you will over the course of an entire project. There are multiple ways to harness their knowledge:
Reach out to them and ask them what they’re hearing from customers about the sections of the product on which you’re working.
Hold regular monthly meetings with them to understand the trends. What do customers love this month? What do they hate?
Tap into their deep product knowledge to learn how they would solve the challenges your team is working on. Include them in design sessions and design reviews.
Incorporate your hypotheses into their call scripts—one of the cheapest ways to test an idea is to have agents suggest it as a fix to customers calling in with a relevant complaint.
In the mid-2000s, Jeff ran the UX team at a mid-sized tech company in Portland, Oregon. One of the ways that team prioritized the work they did was by regularly checking the pulse of the customer base. The team did this with a standing monthly meeting with customer service representatives. Each month Customer Service would provide the UX team with the top 10 things customers were complaining about. The UX team then used this information to focus their efforts and to subsequently measure the efficacy of their work. At the end of the month, the next conversation with Customer Service gave the team a clear indication of whether or not their efforts were bearing fruit. If the issue was not receding in the top-10 list, the solutions had not worked.
This approach generated an additional benefit. The Customer Service team realized there was someone listening to their insights and began proactively sharing customer feedback above and beyond the monthly meeting. The dialogue this created provided the UX team with a continuous feedback loop to inform and test product hypotheses.
On-Site Feedback Surveys
Set up a feedback mechanism in your product through which customers can send you their thoughts regularly. Here are a few options:
Simple email forms
Customer support forums
Third-party community sites
You can repurpose these tools for research by doing things like the following:
Counting how many inbound emails you’re getting from a particular section of the site (see the sketch after this list)
Participating in online discussions and testing some of your hypotheses
Exploring community sites to discover and recruit hard-to-find types of users
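To make the email-counting idea concrete, here is a minimal sketch of the kind of tally you might run. It assumes your feedback emails can be exported to a CSV file with a page_url column recording the page each message was sent from; the filename and column name are hypothetical, so adjust them to your own export format.

```python
# Minimal sketch (hypothetical schema): tally inbound feedback emails
# by site section, using only the Python standard library.
import csv
from collections import Counter
from urllib.parse import urlparse

def section_of(url):
    """Treat the first path segment as the site section: /search/jobs -> 'search'."""
    path = urlparse(url).path.strip("/")
    return path.split("/")[0] if path else "home"

counts = Counter()
with open("feedback_emails.csv", newline="") as f:  # hypothetical export file
    for row in csv.DictReader(f):
        counts[section_of(row["page_url"])] += 1

# Print the ten sections generating the most inbound feedback.
for section, n in counts.most_common(10):
    print(f"{section:20s} {n}")
```

Even a rough tally like this tells you where in the product your most engaged customers are hitting friction, and where to point your next round of qualitative research.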
These inbound customer feedback channels provide feedback from the point of view of your most active and engaged customers. Here are a few tactics for getting other points of view.
Search logs
Search terms are clear indicators of what customers are seeking on your site. Search patterns indicate what they’re finding and what they’re not finding. Repeated queries with slight variations reveal a user’s struggle to find certain information.
One way to use search logs for MVP validation is to launch a test page for the feature you’re planning. After launch, the search logs will tell you whether the test content (or feature) on that page is meeting users’ needs. If users continue to search on variations of that content, your experiment has failed.
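As an illustration, here is a minimal sketch of how you might flag those repeated-query sessions in a search log. It assumes the log is available as (session_id, query) pairs; the field names, similarity threshold, and sample data are all hypothetical.

```python
# Minimal sketch (hypothetical log format): flag sessions whose users
# keep rephrasing the same query, a sign they can't find what they need.
from collections import defaultdict
from difflib import SequenceMatcher

def similar(a, b, threshold=0.7):
    """True when two queries read as slight variations of each other."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio() >= threshold

def struggling_sessions(log, min_retries=2):
    """Yield sessions with at least min_retries consecutive near-duplicate queries."""
    by_session = defaultdict(list)
    for session_id, query in log:
        by_session[session_id].append(query)
    for session_id, queries in by_session.items():
        retries = sum(similar(a, b) for a, b in zip(queries, queries[1:]))
        if retries >= min_retries:
            yield session_id, queries

# Illustrative data: session s1 is clearly struggling; s2 is not.
log = [
    ("s1", "sms alerts"), ("s1", "sms job alerts"), ("s1", "sms job alert setup"),
    ("s2", "resume tips"),
]
for sid, queries in struggling_sessions(log):
    print(sid, "->", queries)
```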
Site usage analytics
Site usage logs and analytics packages—especially funnel analyses—show how customers are using the site, where they’re dropping off, and how they try to manipulate the product to do the things they need or expect it to do. Understanding these reports provides real-world context for the decisions the team needs to make.
In addition, use analytics tools to determine the success of experiments that have launched publicly. How has the experiment shifted usage of the product? Are your efforts achieving the outcome you defined? These tools provide an unbiased answer.
If you’re just starting to build a product, build usage analytics in from day one. Third-party products like KISSmetrics and Mixpanel make this functionality easy and inexpensive to implement, and they provide invaluable information to support continuous learning.
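If you do roll your own instrumentation, a basic funnel report is straightforward to compute from a raw event log. The sketch below assumes events arrive as (user_id, event_name) pairs; the funnel steps and event names are illustrative, not tied to any particular analytics product.

```python
# Minimal sketch (illustrative event names): compute step-by-step
# drop-off for a signup funnel from a raw (user_id, event_name) log.
from collections import defaultdict

FUNNEL = ["view_signup", "submit_email", "confirm_email", "complete_profile"]

def funnel_report(events):
    users_at = defaultdict(set)
    for user_id, event in events:
        users_at[event].add(user_id)

    prev = None
    for step in FUNNEL:
        # Count only users who also completed every earlier step.
        cohort = users_at[step] if prev is None else users_at[step] & prev
        if prev:
            pct = 100 * len(cohort) / len(prev)
            print(f"{step:18s} {len(cohort):4d} users ({pct:.0f}% of previous step)")
        else:
            print(f"{step:18s} {len(cohort):4d} users")
        prev = cohort

# Illustrative data: u3 drops off after submitting an email address.
events = [
    ("u1", "view_signup"), ("u1", "submit_email"), ("u1", "confirm_email"),
    ("u1", "complete_profile"), ("u2", "view_signup"), ("u2", "submit_email"),
    ("u2", "confirm_email"), ("u3", "view_signup"), ("u3", "submit_email"),
]
funnel_report(events)
```

Whatever the tool, the point is the same: tie each step of the funnel back to the outcome you defined, so the report answers a question you actually care about.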
A/B testing
A/B testing is a technique, originally developed by marketers, for gauging which of two (or more) relatively similar concepts achieves a defined goal more effectively. Applied within the Lean UX framework, A/B testing becomes a powerful tool for determining the validity of your hypotheses. It is relatively straightforward to apply once your ideas evolve into working code. Here’s how it works:
Take the proposed solution and release it to your audience. However, instead of letting every customer see it, release it only to a small subset of users.
Measure the performance of your solution for that audience. Compare it to the other group (your control cohort) and note the differences.
Did your new idea move the needle in the right direction? If it did, you’ve got a winning idea.
If not, you’ve got an audience of customers that might make good targets for further research. What did they think of the new experience? Would it make sense to reach out to them for some qualitative research?
The tools for A/B testing are widely available and can be inexpensive. There are third-party commercial tools like Optimizely, and there are open source A/B testing frameworks for every major platform. Regardless of the tools you choose, the trick is to make sure the changes you’re making are small enough, and the population you select large enough, that any change in behavior can be attributed with confidence to the change you’ve made. If you change too many things at once, you won’t be able to attribute a shift in behavior to any one hypothesis.
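Commercial and open source tools handle these mechanics for you, but it helps to understand what they do under the hood. The sketch below shows one common approach, using nothing beyond the Python standard library: deterministic cohort assignment by hashing, plus a simple two-proportion z-test to check that the difference you observe is larger than chance alone would explain. All names and numbers are illustrative.

```python
# Minimal sketch (illustrative names and numbers): deterministic cohort
# assignment plus a two-proportion z-test on conversion rates.
import hashlib
import math

def cohort(user_id, experiment, variant_share=0.1):
    """Hash user + experiment so each user always lands in the same cohort."""
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    bucket = int(digest[:8], 16) / 0xFFFFFFFF  # roughly uniform in [0, 1]
    return "variant" if bucket < variant_share else "control"

def z_score(conversions_a, n_a, conversions_b, n_b):
    """z-statistic for the difference between two conversion rates."""
    p_a, p_b = conversions_a / n_a, conversions_b / n_b
    pooled = (conversions_a + conversions_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

print(cohort("user-42", "sms-alerts-test"))  # stable across sessions
# Control: 480 of 12,000 converted; variant: 96 of 1,500 converted.
z = z_score(480, 12000, 96, 1500)
print(f"z = {z:.2f} (|z| > 1.96 is significant at the 95% level)")
```

Hashing on the user ID, rather than assigning randomly on every visit, keeps each user’s experience consistent, which matters when the change you’re testing is visible in the interface.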
Wrapping Up
In this chapter, we covered many ways to validate your hypotheses. We looked at collaborative discovery and continuous learning techniques. We discussed how to build a weekly Lean testing process and covered what you should test and what to expect from those tests. We looked at ways to monitor your customer experience in a Lean UX context and we touched on the power of A/B testing.
These techniques, used in conjunction with the processes outlined in Chapter 3, Chapter 4, and Chapter 5, make up the full Lean UX process loop. Your goal is to get through this loop as often as possible, refining your thinking with each iteration.
In the next section, we move away from process and look at how to integrate Lean UX into your organization. We’ll cover the organizational shifts you’ll need to make to support the Lean UX approach, whether you’re a startup, a large company, or a digital agency.
Part III. Lean UX in Your Organization
Integrating design into Agile development is never easy. Sometimes it causes a lot of pain and heartache. Jeff learned that firsthand when he was at TheLadders. After spending some time trying to integrate UX work with an Agile process, Jeff was feeling pretty good—until one morning his UX team delivered the diagram shown in Figure III-1. The diagram visualized all of the challenges the team faced as it tried to integrate its practice into the Agile environment. It served, initially, as a large slice of humble pie. Ultimately, though, it started the conversations that helped Jeff, his UX team, and the rest of TheLadders’ product development staff build an integrated, collaborative practice.
Figure III-1. The UX team at TheLadders expressed their feelings about our Agile/UX integration efforts
In the years since this diagram was created, we’ve been fortunate to work at a consulting firm that we helped found. At Neo, the work we did with companies spanned a broad range of industries, company sizes, and cultures. We helped media organizations figure out new ways to deliver and monetize their content. We built new, mobile-first sales tools for a commercial furniture manufacturer. We consulted with fashion retailers, automotive services companies, and large banks to help them build Lean UX practices. We worked with nonprofits to create new service offerings. And we trained countless teams.
Each of these projects gave us a bit more insight into how Lean UX works in a particular environment. We used that insight to make each subsequent project that much more successful. Over the past five years, we’ve built up a body of knowledge that gives us a clear sense of what needs to happen, at both the team level and the organizational level, for Lean UX to succeed. This is the focus of Part III.
Chapter 7 discusses how Lean UX fits into an Agile environment.
Chapter 8 digs into the specific organizational changes that you need to make to support this way of working. It’s not just software developers and designers who need to find a way to work together: your entire product development engine is going to need to change if you want to create a truly Agile organization.
Chapter 9 presents a set of case studies that showcase how these tactics and organizational shifts have succeeded at a variety of companies.
Chapter 7. Integrating Lean UX and Agile