INSPIRED

by Marty Cagan


  The other environment that works really well is your customer's office. It can be time consuming to do, but even 30 minutes in their office can tell you a lot. They are masters of their domain and often very talkative. In addition, all the cues are there to remind them of how they might use the product. You can also learn from seeing what their office looks like. How big is their monitor? How fast is their computer and network connectivity? How do they communicate with their colleagues on their work tasks?

  There are tools for doing this type of testing remotely, and I encourage that, but they are primarily designed for usability testing and not for the value testing that will usually follow. So, I view the remote usability testing as a supplement rather than a replacement.

  Testing Your Prototype

  Now that you've got your prototype ready, lined up your test subjects, and prepared the tasks and questions, here are a set of tips and techniques for administering the actual test.

  Before you jump into the tasks, take the opportunity to learn how they think about this problem today. If you remember the key questions from the Customer Interview Technique, we want to learn whether the user or customer really has the problems we think they have, how they solve those problems today, and what it would take for them to switch.

  When you first start the actual usability test, make sure to tell your subject that this is just a prototype, it's a very early product idea, and it's not real. Explain that she won't be hurting your feelings by giving her candid feedback, good or bad. You're testing the ideas in the prototype, you're not testing her. She can't pass or fail—only the prototype can pass or fail.

  One more thing before you jump into your tasks: See if they can tell from the landing page of your prototype what it is that you do, and especially what might be valuable or appealing to them. Again, once they jump into tasks, you'll lose that first-time visitor context, so don't waste the opportunity. You'll find that landing pages are incredibly important in bridging the gap between expectations and what the product does.

  When testing, you'll want to do everything you can to keep your users in use mode and out of critique mode. What matters is whether users can easily do the tasks they need to do. It really doesn't matter if the user thinks something on the page is ugly or should be moved or changed. Sometimes misguided testers will ask users questions like “What three things on the page would you change?” Unless that user happens to be a product designer, I'm not really interested in the answer. If users knew what they really wanted, software would be a lot easier to create. So, watch what they do more than what they say.

  During the testing, the main skill you have to learn is to keep quiet. When we see someone struggle, most of us have a natural urge to help the person out. You need to suppress that urge. It's your job to turn into a horrible conversationalist. Get comfortable with silence—it's your friend.

  There are three important cases you're looking for: (1) the user got through the task with no problem at all and no help; (2) the user struggled and moaned a bit, but he eventually got through it; or (3) he got so frustrated he gave up. Sometimes people will give up quickly, so you may need to encourage them to keep trying a bit longer. But, if he gets to the point that you believe he would truly leave the product and go to a competitor, then that's when you note that he truly gave up.

  In general, you'll want to avoid giving any help or leading the witness in any way. If you see the user scrolling the page up and down and clearly looking for something, it's okay to ask the user what specifically she's looking for, as that information is very valuable to you. Some people ask users to keep a running narration of what they're thinking, but I find this tends to put people in critique mode, as it's not a natural behavior.

  Act like a parrot. This helps in many ways. First, it helps avoid leading. If they're quiet and you really can't stand it because you're uncomfortable, tell them what they're doing: “I see that you're looking at the list on the right.” This will prompt them to tell you what they're trying to do, what they're looking for, or whatever it may be. If they ask a question, rather than giving a leading answer, you can play back the question to them. They ask, “Will clicking on this make a new entry?” and you ask in return, “You're wondering if clicking on this will make a new entry?” Usually, they will take it from there because they'll want to answer your question: “Yeah, I think it will.” Parroting also helps avoid leading value judgments. If you have the urge to say, “Great!” instead you can say, “You created a new entry.” Finally, parroting key points also helps your note taker because she has more time to write down important things.

  Fundamentally, you're trying to get an understanding of how your target users think about this problem and to identify places in your prototype where the model the software presents is inconsistent or incompatible with how the user is thinking about the problem. That's what it means to be counterintuitive. Fortunately, when you spot this, it is not usually hard to fix, and it can be a big win for your product.

  You will find that you can tell a great deal from body language and tone. It's painfully obvious when they don't like your ideas, and it's also clear when they genuinely do. They'll almost always ask for an e‐mail when the product is out if they like what they see. And, if they really like it, they'll try to get it early from you.

  Summarizing the Learning

  The point is to gain a deeper understanding of your users and customers and, of course, to identify the friction points in the prototype so you can fix them. It might be nomenclature, flow, visual design issues, or mental model issues, but as soon as you think you've identified an issue, just fix it in the prototype. There's no law that says you have to keep the test identical for all of your test subjects. That kind of thinking stems from misunderstanding the role this type of qualitative testing plays. We're not trying to prove anything here; we're just trying to learn quickly.

  After each test subject, or after each set of tests, someone—usually either the product manager or the designer—writes up a short summary e‐mail of key learnings and sends it out to the product team. But forget big reports that take a long time to write, are seldom read, and are obsolete by the time they're delivered because the prototype has already progressed so far beyond what was used when the tests were done. They really aren't worth anyone's time.

  CHAPTER 51

  Testing Value

  Customers don't have to buy our products, and users don't have to choose to use a feature. They will only do so if they perceive real value. Another way to think about this is that just because someone can use our product doesn't mean they will choose to use our product. This is especially true when you are trying to get your customers or users to switch from whatever product or system they were using before to your new product. And, most of the time, our users and customers are switching from something—even if that something is a homegrown solution.

  So many companies and product teams think all they need to do is match the features (referred to as feature parity), and then they don't understand why their product doesn't sell, even at a lower price.

  The customer must perceive your product to be substantially better to motivate them to buy your product and then wade through the pain and obstacles of migrating from their old solution.

  All of this is a long way of saying that good product teams spend most of their time on creating value. If the value is there, we can fix everything else. If it's not, it doesn't matter how good our usability, reliability, or performance is.

  There are several elements of value, and there are techniques for testing all of them.

  Testing Demand

  Sometimes it's unclear if there's demand for what we want to build. In other words, if we could come up with an amazing solution to this problem, do customers even care about this problem? Enough to buy a new product and switch to it? This concept of demand testing applies to everything from entire products down to a specific feature on an existing product.

  We can't just assume there's demand, although often the demand is well established because most of the time our products are entering an existing market with demonstrated and measurable demand. The real challenge in that situation is whether we can come up with a demonstrably better solution in terms of value than the alternatives.

  Testing Value Qualitatively

  The most common type of qualitative value testing is focused on the response, or reaction. Do customers love this? Will they pay for it? Will users choose to use this? And most important, if not, why not?

  Testing Value Quantitatively

  For many products, we need to test efficacy, which refers to how well this solution solves the underlying problem. In some types of products, this is very objective and quantitative. For example, in advertising technology, we can measure the revenue generated and easily compare that to other advertising technology alternatives. In other types of products, such as games, it's much less objective.

  CHAPTER 52

  Demand Testing Techniques

  One of the biggest possible wastes of time and effort, and the reason for countless failed startups, is when a team designs and builds a product—testing usability, testing reliability, testing performance, and doing everything they think they're supposed to do—yet, when they finally release the product, they find that people won't buy it.

  Even worse, it's not that they sign up for a trial in significant numbers but then for some reason decide not to buy. We can usually recover from that. It's that they don't even want to sign up for the trial. That's a tremendous and often fatal problem.

  You might experiment with pricing, positioning, and marketing, but you eventually conclude that this is just not a problem people are concerned enough about.

  The worst part of this scenario is that, in my experience, it's so easily avoided.

  The problem I just described can happen at the product level, such as an all‐new product from a startup, or at the feature level. The feature example is depressingly common. Every day, new features get deployed that don't get used. And, this case is even easier to prevent.

  Suppose you were contemplating a new feature, perhaps because a large customer is asking for it or maybe because you saw that a competitor has the feature or maybe it's your CEO's pet feature. You talk about the feature with your team, and your engineers point out to you that the implementation cost is substantial. Not impossible but not easy either—enough that you don't want to take the time to build this only to find out later it wasn't used.

  The demand‐testing technique is called a fake door demand test. The idea is that we put the button or menu item into the user experience exactly where we believe it should be. But, when the user clicks that button, rather than taking the user to the new feature, it instead takes the user to a special page that explains that you are studying the possibility of adding this new feature, and you are seeking customers to talk to about this. The page also provides a way for the user to volunteer (by providing their e‐mail or phone number, for example).

  What's critical for this to be effective is that the users not have any visible indication that this is a test until after they click that button. The benefit is that we can quickly collect some very helpful data that will allow us to compare the click‐through rate on this button with our expectations or with other features. And then we can follow up with customers to get a better understanding of what they would expect.
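
  To make the mechanics concrete, here is a minimal sketch of what a fake door click handler might look like, written in TypeScript. The button ID, metrics endpoint, and event names below are hypothetical examples, not from the book; the essential behavior is that the click gets recorded (so you can measure click-through rate) before the user is routed to the research page. The same pattern works for the landing-page variant described next, with the call to action in place of the feature button.

    // Fake door demand test (sketch): the button looks like a real feature,
    // but clicking it records interest and routes to a research page.
    // The button ID, URLs, and event names are hypothetical examples.
    const FEATURE_BUTTON_ID = "export-to-pdf";
    const RESEARCH_PAGE_URL = "/research/export-to-pdf";

    document.getElementById(FEATURE_BUTTON_ID)?.addEventListener("click", (event) => {
      event.preventDefault();

      // Record the click so the click-through rate can be compared with
      // expectations or with existing features.
      void fetch("/api/metrics", {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify({
          event: "fake_door_click",
          feature: FEATURE_BUTTON_ID,
          timestamp: Date.now(),
        }),
        keepalive: true, // let the request complete even as the page navigates away
      });

      // Only after clicking does the user learn this is a test: the research
      // page explains the study and offers a way to volunteer contact info.
      window.location.assign(RESEARCH_PAGE_URL);
    });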

  The same basic concept applies to entire products. Rather than a button on a page, we set up the landing page for the new offering's product funnel. This is called a landing page demand test. We describe that new offering exactly as we would if we were really launching the service. The difference is that if the user clicks the call to action, rather than signing up for the trial (or whatever the action might be), the user sees a page that explains that you are studying the possibility of adding this new offering, and you'd like to talk with them about that new offering if they're willing.

  With both forms of demand testing, we can show the test to every user (in the case of an early startup), or we can show it to just a very small percentage of users or within a specific geography (in the case of a larger company).
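
  One common way to implement that kind of gating (a sketch under my own assumptions, not something the book prescribes) is to deterministically hash each user's ID into a bucket, so a given user always sees the same experience across sessions:

    // Percentage gating (sketch): deterministically map a user ID to a
    // bucket in [0, 100) so each user always sees the same variant.
    function bucketOf(userId: string): number {
      let hash = 0;
      for (const ch of userId) {
        hash = (hash * 31 + ch.charCodeAt(0)) >>> 0; // simple 32-bit rolling hash
      }
      return hash % 100;
    }

    // An early startup might expose the test to 100 percent of users;
    // a larger company to 1 percent or less. A geographic restriction
    // would add a check on the user's region here as well.
    const EXPOSURE_PERCENT = 1;

    function shouldSeeFakeDoor(userId: string): boolean {
      return bucketOf(userId) < EXPOSURE_PERCENT;
    }

    console.log(shouldSeeFakeDoor("user-12345")); // stable per user; true for roughly 1 percent of IDs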

  Hopefully, you can see that this is very easy to do, and you can quickly collect two very useful things: (1) some good evidence on demand and (2) a list of users who are very ready and willing to talk with you about this specific new capability.

  In practice, the demand is usually not the problem. People do sign up for our trial. The problem is that they try out our product and they don't get excited about it—at least not excited enough to switch from what they currently use. And dealing with that is the purpose of the qualitative and quantitative techniques in the chapters that follow.

  Discovery Testing in Risk‐Averse Companies

  Much has been written about how to do product discovery in startups—by me and by many others. There are many challenges for startups, but the most important is survival.

  One of the real advantages to startups from a product point of view is that there's no legacy to drag along, no revenue to preserve, and no reputation to safeguard. This allows us to move very quickly and take significant risks without much downside.

  However, once your product develops to the point that it can sustain a viable business (congratulations!), you now have something to lose, and it's not surprising that some of the dynamics of product discovery need to change. My goal here is to highlight these differences and to describe how the techniques are modified in larger, enterprise companies.

  Others have also been writing about how to apply these techniques in enterprises, but on the whole, I have not been particularly impressed with the advice I've seen. Too often, the suggestion is to carve out a protected team and provide them some air cover so they can go off and innovate. First of all, what does this say about the people not on these special innovation teams? What does this say about the company's existing products? And, even when something does get some traction, how well do you think the existing product teams will accept this learning? These are some of the reasons I'm not an advocate of so‐called corporate innovation labs.

  I have long argued that the techniques of product discovery and rapid test and learn absolutely apply to large enterprise companies, and not just to startups. The best product companies—including Apple, Amazon, Google, Facebook, and Netflix—are great examples where this kind of innovation is institutionalized. In these companies, innovation is not something that just a few people get permission to pursue. It is the responsibility of all product teams.

  But before I go any further, I want to emphasize the most important point for technology companies: If you stop innovating, you will die. Maybe not immediately, but if all you do is optimize your existing solutions, and you stop innovating, it is only a matter of time before you are someone else's lunch.

  I believe it's non-negotiable that we simply must continue to move our products forward and deliver increased value to our customers.

  That said, we need to do this in a responsible way. This means doing two big things: protecting your revenue and brand, and protecting your employees and customers.

  Protect Revenue and Brand

  The company has built a reputation and has earned revenue, and it is the job of the product teams to do product discovery in ways that protect this reputation and this revenue. We've got more techniques than ever to do this, including many techniques for creating very low-cost and low-risk prototypes, and for proving things work with minimal investment and limited exposure. We love live-data prototypes and A/B testing frameworks.

  Many things do not pose a risk to brand or revenue, but for the things that do, we utilize techniques to mitigate this risk. Most of the time an A/B test with 1 percent or less of the customers exposed is fine for this.

  Sometimes, however, we need to be even more conservative. In such cases, we'll do an invite-only live-data test, or we'll work with customers from our customer discovery program who are under NDA. There are any number of other techniques in the same spirit of testing and learning in a responsible way.

  Protect Employees and Customers

  In addition to protecting revenue and brand, we also need to protect our employees and our customers. If our customer service, professional services, or sales staff are blindsided by constant change, it makes it very hard for them to do their jobs and take good care of customers.

  Similarly, customers that feel like your product is a moving target that they have to constantly relearn won't be happy customers for long.

 
