INSPIRED

by Marty Cagan


  In terms of lessons learned, I have seen many teams proceed to delivery without adequately considering the feasibility risk. Whenever you hear stories of product teams that grossly underestimated the amount of work required to build and deliver something, this is usually the underlying reason.

  It may be that the engineers were simply too inexperienced with their estimates, that the engineers and product manager had an insufficient understanding of what was going to be needed, or that the product manager did not give the engineers sufficient time to truly investigate.

  CHAPTER 47

  User Prototype Technique

  A user prototype—one of the most powerful tools in product discovery—is a simulation. Smoke and mirrors. It's all a façade. There is nothing behind the curtain. In other words, if you have a user prototype of an e‐commerce site, you can enter your credit card information as many times as you want—you won't actually be buying anything.

  There is a wide range of user prototypes.

  At one end of the spectrum are low‐fidelity user prototypes. A low‐fidelity user prototype doesn't look real—it is essentially an interactive wireframe. Many teams use these as a way to think through the product among themselves, but there are other uses as well.

  Low‐fidelity user prototypes, however, represent only one dimension of your product: the information and the workflow. There's nothing there about the impact of visual design or the differences caused by the actual data, to mention just a couple of important examples.

  At the other end of the spectrum are high‐fidelity user prototypes. A high‐fidelity user prototype is still a simulation; however, now it looks and feels very real. In fact, with many good high‐fidelity user prototypes, you need to look closely to see that it's not real. The data you see is very realistic, but it's not real either—mostly meaning it's not live.

  For example, if I do a search in my e‐commerce user prototype for a particular type of mountain bike, it always comes back with the same set of mountain bikes. But if I look closely, they're not the actual bikes I asked for. And I notice that every time I search, it's always the same set of bikes, no matter what price or style I specify.

  If you are trying to test the relevance of the search results, this would not be the right tool for the job. But if you are trying to come up with a good overall shopping experience or figure out how people want to search for mountain bikes, this is probably more than adequate, and it's very quick and easy to create.

  There are many tools for creating user prototypes—for every type of device, and for every level of fidelity. The tools are mainly developed for product designers. In fact, your product designer almost certainly already has one or more favorite user prototyping tools.

  It's also the case that some designers prefer to hand‐code their high‐fidelity user prototypes, which is fine so long as they are fast, and they are willing to treat the prototype as disposable.
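
  To make the smoke-and-mirrors point concrete, here is one way the search behind a hand-coded, high-fidelity prototype might be stubbed out. This is just a minimal sketch in TypeScript; the names (Bike, searchBikes) and the canned catalog are invented for illustration and aren't from any particular tool or product. Every query returns the same static results, which is exactly the behavior of the mountain-bike search described above.

    // Minimal sketch (TypeScript) of the "search" behind a hand-coded,
    // high-fidelity user prototype. The Bike type and the canned catalog are
    // invented for illustration; nothing here is live.

    interface Bike {
      name: string;
      price: number;
      style: string;
    }

    // Realistic-looking, but static, data.
    const CANNED_RESULTS: Bike[] = [
      { name: "Trailblazer 29er", price: 1499, style: "cross-country" },
      { name: "Summit Pro", price: 2299, style: "trail" },
      { name: "Ridge Runner", price: 899, style: "hardtail" },
    ];

    // The prototype ignores the query entirely: the goal is to test the
    // shopping experience and workflow, not the relevance of the results.
    function searchBikes(_query: string, _maxPrice?: number): Bike[] {
      return CANNED_RESULTS;
    }

    console.log(searchBikes("downhill bike under $1,000"));
    console.log(searchBikes("trail bike", 2000)); // same results every time

  That's the entire trick: it's convincing enough to evaluate the overall shopping experience, but there is nothing real behind the curtain.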

  The big limitation of a user prototype is that it's not good for proving anything—like whether or not your product will sell.

  Where a lot of novice product people go sideways is when they create a high‐fidelity user prototype and they put it in front of 10 or 15 people who all say how much they love it. They think they've validated their product, but unfortunately, that's not how it works. People say all kinds of things and then go do something different.

  We have much better techniques for validating value, so it's important that you understand what a user prototype is not appropriate for.

  This is one of the most important techniques for product teams, so it is well worth developing your team's skills and experience in creating user prototypes at all levels of fidelity. As you'll see in the coming chapters, a user prototype is key to several types of validation and is also one of our most important communication tools.

  CHAPTER 48

  Live‐Data Prototype Technique

  Sometimes, in order to address a major risk identified in discovery, we need to be able to collect some actual usage data. But we need to collect this evidence while in discovery, well before taking the time and expense of building an actual scalable and shippable product.

  Some of my favorite examples of this are game dynamics, search‐result relevance, many social features, and product funnel work.

  This is the purpose of a live‐data prototype.

  A live‐data prototype is a very limited implementation. It typically has none of the productization that's normally required, such as the full set of use cases, automated tests, full analytics instrumentation, internationalization and localization, performance and scalability, SEO work, and so forth.

  The live‐data prototype is substantially smaller than the eventual product, and the bar is dramatically lower in terms of quality, performance, and functionality. It needs to run well enough to collect data for some very specific use cases, and that's about it.

  When creating a live‐data prototype, our engineers don't handle all the use cases. They don't address internationalization and localization work, they don't tackle performance or scalability, they don't create the automated tests, and they only include instrumentation for the specific use cases we're testing.

  A live‐data prototype is just a small fraction of the productization effort (in my experience, somewhere between 5 and 10 percent of the eventual delivery productization work), but you get big value from it. There are two big limitations you do have to keep in mind, however:

  First, this is code, so engineers must create the live‐data prototype, not your designers.

  Second, this is not a commercially shippable product, it's not ready for primetime, and you can't run a business on it. So, if the live‐data tests go well, and you decide to move forward and productize, you will need to allow your engineers to take the time required to do the necessary delivery work. It is definitely not okay for the product manager to tell the engineers that this is “good enough.” That judgment is not up to the product manager. And the product manager does need to make sure key executives and stakeholders understand the limitations as well.

  Today, the technology for creating live‐data prototypes is so good that we can often get what we need in just a couple of days to a week. And once we have it, we can iterate very quickly.

  Later, we'll discuss the quantitative‐validation techniques and you'll see the different ways we can utilize this live‐data prototype. But for now, know that the key is to be able to send some limited amount of traffic, and to collect analytics on how this live‐data prototype is being used.

  What's important is that actual users will use the live‐data prototype for real work, and this will generate real data (analytics) that we can compare to our current product—or to our expectations—to see if this new approach performs better.
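
  As a rough illustration of what that instrumentation might look like, here is a minimal sketch in TypeScript. Everything in it is an assumption for the sake of the example: the 5 percent traffic split, the trackEvent helper standing in for whatever analytics call your team actually uses, and the search handler itself. The point is simply that a small, deterministic slice of real traffic goes to the prototype, and every use case under test emits an analytics event you can compare against the current product.

    // Minimal sketch (TypeScript) of routing a small slice of traffic to a
    // live-data prototype and instrumenting only the use case under test.
    // All names here are assumptions for illustration, not a real API.

    const PROTOTYPE_TRAFFIC_FRACTION = 0.05; // send roughly 5% of users to the prototype

    // Stand-in for whatever analytics SDK your team actually uses.
    function trackEvent(name: string, properties: Record<string, unknown>): void {
      console.log(JSON.stringify({ event: name, ...properties, ts: Date.now() }));
    }

    // Deterministic bucketing: a given user always lands in the same variant.
    function isInPrototype(userId: string): boolean {
      let hash = 0;
      for (const ch of userId) {
        hash = (hash * 31 + ch.charCodeAt(0)) >>> 0;
      }
      return (hash % 100) / 100 < PROTOTYPE_TRAFFIC_FRACTION;
    }

    function handleSearchRequest(userId: string, query: string): void {
      if (isInPrototype(userId)) {
        trackEvent("prototype_search", { userId, query });
        // ...serve results from the prototype (e.g., the new relevance approach)...
      } else {
        trackEvent("control_search", { userId, query });
        // ...serve results from the current product...
      }
    }

    handleSearchRequest("user-123", "mountain bike");

  Bucketing deterministically by user ID means any given user sees the same experience across visits, which keeps the comparison between prototype and current product clean.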

  CHAPTER 49

  Hybrid Prototype Technique

  So far, we've explored user prototypes—which are pure simulations—feasibility prototypes for addressing technical risks, and live‐data prototypes designed to be able to collect evidence, or even statistically significant proof, as to the effectiveness of a product or an idea.

  While these three categories of prototypes handle most situations well, a wide variety of hybrid prototypes also combine different aspects of each of these in different ways.

  One of my favorite examples of a hybrid prototype—and an exceptionally powerful tool for learning quickly in product discovery—is today often referred to as a Wizard of Oz prototype. A Wizard of Oz prototype combines the front‐end user experience of a high‐fidelity user prototype but with an actual person behind the scenes performing manually what would ultimately be handled by automation.

  A Wizard of Oz prototype is absolutely not scalable, and we would never send any significant amount of traffic to this. But the benefit from our perspective is that we can create this very quickly and easily, and from the user's perspective, it looks and behaves like a real product.

  For example, imagine that today you have some sort of live chat–based help for your customers, but it's only available during the hours when your customer service staff is in the office. You know that your customers use your product from all around the world at all hours, so you would like to develop an automated chat‐based system that provides helpful answers anytime.

  You could (and should) talk to your customer service staff about the types of inquiries they routinely get and how they respond (a concierge test could help you learn that quickly). Soon you will want to tackle the challenges of this sort of automation.

  One way to learn very quickly and test out several different approaches is to create a Wizard of Oz prototype that provides a simple, chat‐based interface. However, behind the scenes it is literally you as product manager, or another member of your team, who is receiving the requests and composing responses. Soon you can begin to experiment with system‐generated responses, perhaps even using a live‐data prototype of your algorithm.

  These types of hybrids are great examples of the build things that don't scale philosophy of product discovery. By being a little clever, we can quickly and easily create tools that let us learn very quickly. Admittedly, it's mainly qualitative learning, but that's often where our biggest insights come from anyway.
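
  To make this concrete, here is a minimal sketch, in TypeScript, of the plumbing that could sit behind such a Wizard of Oz chat prototype. All of the names (ChatMessage, askSupport, answerNext) and the in-memory operator queue are invented for illustration. The user-facing chat front end calls askSupport and simply waits for a reply; behind the scenes, a person answers by calling answerNext. Later, answerNext could be replaced by a system-generated responder without the front end changing at all.

    // Minimal sketch (TypeScript) of a Wizard of Oz chat backend. All names
    // are invented for illustration; in a real prototype the "operator
    // console" would be a simple internal web page rather than function calls.

    interface ChatMessage {
      userId: string;
      text: string;
    }

    // Requests waiting for a human "wizard" to answer.
    const operatorQueue: { message: ChatMessage; resolve: (reply: string) => void }[] = [];

    // Called by the chat front end; resolves when a reply arrives.
    function askSupport(message: ChatMessage): Promise<string> {
      return new Promise((resolve) => {
        operatorQueue.push({ message, resolve });
      });
    }

    // Called from the operator's console (you, or a teammate) to answer the
    // oldest waiting request.
    function answerNext(replyText: string): void {
      const next = operatorQueue.shift();
      if (next) {
        next.resolve(replyText);
      }
    }

    // From the user's perspective, this is indistinguishable from automation.
    askSupport({ userId: "user-42", text: "How do I reset my password?" })
      .then((reply) => console.log("Support says:", reply));

    answerNext("You can reset it from Settings > Account > Reset password.");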

  Discovery Testing Techniques

  Overview

  In product discovery, we're essentially trying to quickly separate the good ideas from the bad as we work to try to solve the business problems assigned to us. But what does that really mean?

  We think of four types of questions we're trying to answer during discovery:

  Will the user or customer choose to use or buy this? (Value)

  Can the user figure out how to use this? (Usability)

  Can we build this? (Feasibility)

  Is this solution viable for our business? (Business viability)

  Remember that for many of the things we work on, most or all of these questions are very straightforward and low risk. Your team is confident. They have been there and done this many times before, and so we will proceed to delivery.

  The main work of discovery happens when the answers to these questions are not so clear.

  There is no prescribed order to answering these questions. However, many teams follow a certain logic.

  First, we will usually assess value. This is often the toughest—and most important—question to answer, and if the value isn't there, not much else matters. We likely will need to address usability before the user or customer can even recognize the value. In either case, we usually assess usability and value with the same users and customers at the same time.

  Once we have something that our customers believe is truly valuable, and we have designed it in a way that we believe our users can figure out how to use, then we'll typically review the approach with the engineers to make sure this is doable from their technical feasibility perspective.

  If we're also good on feasibility, then we'll show it to key parts of the business where there may be concerns (think legal, marketing, sales, CEO, etc.). We'll often address these business risks last because we don't want to stir up the organization unless we're confident it's worthwhile. Also, sometimes the ideas that survive are not so similar to the original ideas that we started with, and those original ideas may have come from a business stakeholder. It's much more effective to be able to show that stakeholder some evidence of what did and didn't work with customers and why and how you ended up where you are.

  CHAPTER 50

  Testing Usability

  Usability testing is typically the most mature and straightforward form of discovery testing, and it has existed for many years. The tools are better and teams do much more of this now than they used to, and this is not rocket science. The main difference today is that we do usability testing in discovery—using prototypes, before we build the product—and not at the end, where it's really too late to correct the issues without significant waste or worse.

  If your company is large enough to have its own user research group, by all means secure as much of their time for your team as you absolutely can. Even if you can't get much of their time, these people are often terrific resources, and if you can make a friend in this group, it can be a huge help to you.

  If your organization has funds earmarked for outside services, you may be able to use one of many user research firms to conduct the testing for you. But at the price that most firms charge, chances are that you won't be able to afford nearly as much of this type of testing as your product will need. If you're like most companies, you have few resources available, and even less money. But you can't let that stop you.

  So, I'll show you how to do this testing yourself.

  No, you won't be as proficient as a trained user researcher—at least at first—and it'll take you a few sessions to get the hang of it, but, in most cases, you'll find that you can still identify the serious issues and friction points with your product, which is what's important.

  There are several excellent books that describe how to conduct informal usability testing, so I won't try to recreate those here. Instead, I'll just emphasize the key points.

  Recruiting Users to Test

  You'll need to round up some test subjects. If you're using a user research group, they'll likely recruit and schedule the users for you, which is a huge help, but if you're on your own, you've got several options:

  If you've established the customer‐discovery program I described earlier, you are probably all set—at least if you're building a product for businesses. If you're working on a consumer product, you'll want to supplement that group.

  You can advertise for test subjects on Craigslist, or you can set up an SEM campaign using Google AdWords to recruit users (which is especially good if you are looking for users that are in the moment of trying to use a product like yours).

  If you have a list of e‐mail addresses of your users, you can do a selection from there. Your product marketing manager often can help you narrow down the list.

  You can solicit volunteers on your company website—lots of major companies do this now. Remember that you'll still need to call and screen the volunteers to make sure the people you select are in your target market.

  You can always go to where your users congregate. Trade shows for business software, shopping centers for e‐commerce, sports bars for fantasy sports—you get the idea. If your product is addressing a real need, you usually won't have trouble getting people to give you an hour. Bring some thank‐you gifts.

  If you're asking users to come to your location, you will likely need to compensate them for their time. We often will arrange to meet the test subject at a mutually convenient location, such as a Starbucks. This practice is so common it's usually referred to as Starbucks testing.

  Preparing the Test

  We usually do usability testing with a high‐fidelity user prototype. You can get some useful usability feedback with a low‐ or medium‐fidelity user prototype, but for the value testing that typically follows usability testing, we need the product to be more realistic (more on why later).

  Most of the time, when we do a usability and/or value test, it's with the product manager, the product designer, and one of the engineers from the team (from those that like to attend these). I like to rotate among the engineers. As I mentioned earlier, the magic often happens when an engineer is present, so I try to encourage that whenever possible. If you have a user researcher helping with the actual testing, they will typically administer the test, but absolutely the product manager and designer must be there for each and every test.

  You will need to define in advance the set of tasks that you want to test. Usually, these are fairly obvious. If, for example, you're building an alarm clock app for a mobile device, your users will need to do things like set an alarm, find and hit the snooze button, and so on. There may also be more obscure tasks, but concentrate on the primary tasks—the ones that users will do most of the time.

  Some people still believe that the product manager and the product designer are too close to the product to do this type of testing objectively, and they may either get their feelings hurt or only hear what they want to hear. We get past this obstacle in two ways. First, we train the product managers and designers on how to conduct themselves, and second, we make sure the test happens quickly—before they fall in love with their own ideas. Good product managers know they will get the product wrong initially and that nobody gets it right the first time. They know that learning from these tests is the fastest path to a successful product.

  You should have one person administer the usability test and another person take notes. It's helpful to have at least one other person to debrief with afterward to make sure you both saw the same things and came to the same conclusions.

  Formal testing labs will typically have setups with two‐way mirrors or closed‐circuit video monitors with cameras that capture both the screen and the user from the front. This is fine if you have it, but I can't count how many prototypes I've tested at a tiny table at Starbucks—just big enough for three or four chairs around the table. In fact, in many ways, this is preferable to the testing lab because the user feels a lot less like a lab rat.

 
