This is why we use gentle deployment techniques, including assessing customer impact. Although this may seem counterintuitive, continuous deployment is itself a gentle deployment technique, and when used properly along with customer impact assessment, it is a powerful tool for protecting our customers.
Again, most experiments and changes are non‐issues, but it is our responsibility to be proactive with customers and employees and sensitive to change.
Don't get me wrong. I am not arguing that innovating in enterprise companies is easy—it's not. But it's not because product discovery techniques are the obstacles to innovation. They are absolutely critical to consistently delivering increased value to customers. There are broader issues in large enterprise companies that typically create obstacles to innovation.
If you are at a larger, enterprise company, know that you absolutely must move aggressively to continuously improve your product, well beyond small optimizations. But you also must do this product work in ways that protect brand and revenue, and protect your employees and your customers.
CHAPTER 53
Qualitative Value Testing Techniques
Quantitative testing tells us what's happening (or not), but it can't tell us why, and what to do to correct the situation. That's why we do qualitative testing. If users and customers are not responding to a product the way we had hoped, we need to figure out why that's the case.
As a reminder, qualitative testing is not about proving anything. That's what quantitative testing is for. Qualitative testing is about rapid learning and big insights.
When you do this type of qualitative user testing, you don't get your answer from any one user, but every user you test with is like another piece of the puzzle. Eventually, you see enough of the puzzle that you can understand where you've gone wrong.
I know this is a big claim, but I argue that qualitative testing of your product ideas with real users and customers is probably the single most important discovery activity for you and your product team. It is so important and helpful that I push product teams to do at least two or three qualitative value tests every single week. Here's how to do it:
Interview First
We generally begin the user test with a short user interview in which we try to confirm that our user has the problems we think she has, learn how she solves these problems today, and find out what it would take for her to switch (see Customer Interview Technique).
Usability Test
We have many good techniques for testing value qualitatively, but they all depend on the user first understanding what your product is and how it works. This is why a value test is always preceded by a usability test.
During the usability test, we test to see whether the user can figure out how to operate our product. But, even more important, after a usability test the user knows what your product is all about and how it's meant to be used. Only then can we have a useful conversation with the user about value (or lack thereof).
Preparing a value test therefore includes preparing a usability test. I described how to prepare for and run a usability test in the last chapter, so for now let me again emphasize that it's important to conduct the usability test before the value test, and to do one immediately after the other.
If you try to do a value test without giving the user or customer the opportunity to learn how to use the product, then the value test becomes more like a focus group where people talk hypothetically about your product, and try to imagine how it might work. To be clear: focus groups might be helpful for gaining market insights, but they are not helpful in discovering the product we need to deliver (see Product Discovery Principle #1).
This testing involves at least you as product manager and your product designer, but I am constantly amazed at how often the magic happens when one of your engineers is right there watching the qualitative testing with you. So, it's worth pushing to make this happen as much as possible.
To test usability and value, the user needs to be able to use one of the prototypes we described earlier. When we're focused on testing value, we usually utilize high‐fidelity user prototypes.
High‐fidelity means it feels very realistic, which turns out to be especially important for value testing. You can also use a live‐data prototype or a hybrid prototype.
Specific Value Tests
The main challenge in testing value when you're sitting face to face with actual users and customers is that people are generally nice—and not willing to tell you what they really think. So, all of our tests for value are designed to make sure the person is not just being nice to you.
Using Money to Demonstrate Value
One technique I like for gauging value is to see whether the user would be willing to pay for the product, even if you have no intention of actually charging for it. We're looking for the user to pull out his or her credit card right then and there and ask to buy the product (but we don't really want the card information).
If it's an expensive product for businesses—beyond what someone would put on a credit card—you can ask people if they will sign a "non-binding letter of intent to buy," which is a good indicator that they are serious.
Using Reputation to Demonstrate Value
But there are other ways a user can “pay” for a product. You can see if they would be willing to pay with their reputation. You can ask them how likely they'd be to recommend the product to their friends or co‐workers or boss (typically on a scale of 0–10). You can ask them to share on social media. You can ask them to enter the e‐mail of their boss or their friends for a recommendation (even though we don't save the e‐mails, it's very meaningful if people are willing to provide them).
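The 0–10 recommendation question is the one behind the familiar Net Promoter Score. As a rough sketch of how such responses are typically summarized (the responses here are made up for illustration):

```typescript
// Minimal sketch: summarizing 0-10 "would you recommend?" responses
// as a Net Promoter Score. Promoters score 9-10, detractors 0-6;
// NPS = %promoters - %detractors, ranging from -100 to +100.
function netPromoterScore(responses: number[]): number {
  const promoters = responses.filter((r) => r >= 9).length;
  const detractors = responses.filter((r) => r <= 6).length;
  return ((promoters - detractors) / responses.length) * 100;
}

// Example: [10, 9, 8, 7, 3] -> 2 promoters, 1 detractor, NPS = 20
console.log(netPromoterScore([10, 9, 8, 7, 3])); // 20
```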
Using Time to Demonstrate Value
Especially with businesses, you can also ask the person if they'd be willing to schedule some significant time with you to work on this (even if we don't need it). This is another way people pay for value.
Using Access to Demonstrate Value
You can also ask people to provide the login credentials for whatever product they would be switching from (because you tell them there's a migration utility or something). Again, we don't really want their login and password—we just want to know if they value our product highly enough that they're truly willing to switch right then and there.
Iterating the Prototype
Remember, this is not about proving anything. It's about rapid learning. As soon as you believe you have a problem, or you want to try a different approach, you try it.
For example, if you show your prototype to two different people and the response you get is substantially different, your job is to try to figure out why. Maybe you have two different types of customers, with different kinds of problems. Maybe you have different types of users, with different skill sets or domain knowledge. Maybe they are running different solutions today, and one is happy with their current solution and one is not.
You might determine that you just aren't able to get people interested in this problem, or you can't figure out a way to make this usable enough that your target users can realize this value. In that case, you may decide to stop right there and put the idea on the shelf. Some product managers consider this a big failure. I view it as saving the company the wasted cost of building and shipping a product your customers don't value (and won't buy), plus the opportunity cost of what your engineering team could be building instead.
The remarkable thing about this kind of qualitative testing is just how easy and effective it is. The best way to prove this to yourself is to take your laptop or mobile device with your product or prototype on it to someone who hasn't seen it yet, and just give it a try.
One important note. As product manager, you need to make sure you are at every single qualitative value test. Do not delegate this, and certainly don't try to hire a firm to do this for you. Your contribution to the team comes from experiencing as many users as possible, first hand, interacting with and responding to your team's ideas. If you worked for me, the continuation of your monthly salary would depend on this.
CHAPTER 54
Quantitative Value Testing Techniques
While qualitative testing is all about fast learning and big insights, quantitative techniques are all about collecting evidence.
We will sometimes collect enough data that we have statistically significant results (especially with consumer services with a lot of daily traffic), and other times we'll set the bar lower and just collect actual usage data that we consider useful evidence—along with other factors—to make an informed decision.
This is the main purpose of the live‐data prototype we discussed earlier. As a reminder, a live‐data prototype is one of the forms of prototype created in product discovery intended to expose certain use cases to a limited group of users to collect some actual usage data.
We have a few key ways to collect this data, and the technique we select depends on the amount of traffic we have, the amount of time we have, and our tolerance for risk.
In a true startup environment, we don't have much traffic and we also don't have much time, but we're usually fine with risk (we don't have much to lose yet).
In a more established company, we often have a lot of traffic, we have some amount of time (mostly we're worried about management losing patience), and the company is usually more averse to risk.
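A rough sample-size estimate shows why traffic matters so much. The sketch below uses illustrative numbers only, assuming a 5 percent baseline conversion rate, a 10 percent relative lift to detect, and conventional confidence and power levels; none of these figures are prescriptive:

```typescript
// Minimal sketch: approximate users needed per variant to detect a
// difference between two conversion rates (two-sided test, normal
// approximation). Baseline rate and lift here are illustrative.
function sampleSizePerVariant(
  p1: number,           // baseline conversion rate
  p2: number,           // expected rate for the new version
  zAlpha = 1.96,        // 95% confidence (two-sided)
  zBeta = 0.84          // 80% power
): number {
  const variance = p1 * (1 - p1) + p2 * (1 - p2);
  return Math.ceil(((zAlpha + zBeta) ** 2 * variance) / (p2 - p1) ** 2);
}

// Detecting a 10% relative lift on a 5% baseline takes roughly
// 31,000 users per variant, out of reach for most low-traffic startups.
console.log(sampleSizePerVariant(0.05, 0.055)); // ~31196
```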
A/B Testing
The gold standard for this type of testing is an A/B test. The reason we love A/B tests is because the user doesn't know which version of the product she is seeing. This yields data that is very predictive, which is what we ideally want.
Keep in mind that this is a slightly different type of A/B test than optimization A/B testing. Optimization testing is where we experiment with different calls to action, different color treatments on a button, and so forth. Conceptually they are the same, but in practice there are some differences. Optimization testing normally involves surface-level, low-risk changes, which we often test in an even split (50:50).
In discovery A/B testing, we usually have the current product showing to 99 percent of our users, and the live‐data prototype showing to only 1 percent of our users or less. We monitor the A/B test more closely.
Invite‐Only Testing
If your company is more risk averse, or if you just don't have enough traffic to be able to show to 1 percent—or even 10 percent—and get useful results anytime soon, then another effective way to collect evidence is the invite‐only test. This is where you identify a set of users or customers that you contact and invite to try the new version. You tell them that it is an experimental version, so they are effectively opting in if they choose to run it.
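In code, an invite-only test is typically just an explicit opt-in list rather than a hash-based split. A minimal sketch, with hypothetical user IDs:

```typescript
// Minimal sketch of an invite-only gate: users who accepted the
// invitation are added to an explicit opt-in list, and only they
// are routed to the experimental version.
const invitedUsers = new Set<string>([
  "user-1001", // hypothetical IDs of customers who opted in
  "user-1002",
]);

function seesExperimentalVersion(userId: string): boolean {
  return invitedUsers.has(userId);
}
```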
The data that this group generates is not as predictive as data from a true, blind A/B test. We realize that those who opt in are generally more early adopter types; nevertheless, we are getting a set of actual users doing their work with our live-data prototype, and we are collecting really interesting data.
I can't tell you how often we think we have something they'll love, and then we make it available to a limited group like this and we find that they are just not feeling it. Unfortunately, with a quantitative test like this, all we know for sure is that they're not using it—we can't know why. That's when we'll follow up with a qualitative test to try to learn quickly why they're not as into it as we had hoped.
Customer Discovery Program
A variation of the invite‐only test is to use the members of the customer discovery program we discussed in the section on ideation techniques. These companies have already opted in to testing new versions, and you already have a close relationship with them so you can follow up with them easily.
For products for businesses, I typically use this as my primary technique for collecting actual usage data. We have the customer discovery program customers getting frequent updates to the live‐data prototype, and we compare their usage data to that of our broader customers.
The Role of Analytics
One of the most significant changes in how we do product today is our use of analytics. Any capable product manager today is expected to be comfortable with data and understand how to leverage analytics to learn and improve quickly.
I attribute this change to several factors.
First, as the market for our products has expanded dramatically through global access and connected devices, the sheer volume of data has increased as well, giving us interesting and statistically significant results much faster.
Second, the tools for accessing and learning from this data have improved significantly. Mostly, however, I see an increased awareness of the role that data can play in helping you learn and adapt quickly.
There are five main uses of analytics in strong product teams. Let's take a close look at each of these uses:
Understand User and Customer Behavior
When most people think of analytics, they think of user analytics. That is, however, but one type of analytic. The idea is to understand how our users and customers are using our products (remember, there can be many users at a single customer—at least in the B2B context). We may do this to identify features that are not being used, or to confirm that features are being used as we expect, or simply to gain a better understanding of the difference between what people say and what they actually do.
This type of analytic has been collected and used for this purpose by good product teams for at least 30 years. A solid decade before the emergence of the web, desktops and servers were able to call home and upload behavior analytics, which were then used by the product team to make improvements. This to me is one of the very few non‐negotiables in product. My view is that, if you're going to put a feature in, you need to put in at least the basic usage analytics for that feature. Otherwise, how will you know if it's working as it needs to?
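To make "basic usage analytics" concrete, here is a minimal sketch of a feature-level usage event; the event fields and feature name are hypothetical, and in practice the event would go to whatever analytics pipeline or service you use:

```typescript
// Minimal sketch of per-feature instrumentation: every feature emits
// at least a usage event, so we can later see whether it's being used.
interface UsageEvent {
  feature: string;   // e.g. "bulk-export" (hypothetical feature name)
  action: string;    // e.g. "opened", "completed"
  userId: string;
  timestamp: string;
}

function trackFeatureUsage(feature: string, action: string, userId: string): void {
  const event: UsageEvent = {
    feature,
    action,
    userId,
    timestamp: new Date().toISOString(),
  };
  // In practice: send to your analytics pipeline or service.
  console.log(JSON.stringify(event));
}

// Instrument the feature at the point of use:
trackFeatureUsage("bulk-export", "completed", "user-8675309");
```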
Measure Product Progress
I have long been a strong advocate of using data to drive product teams. Rather than provide the team an old‐style roadmap listing someone's best guess of what features may or may not work, I strongly prefer to provide the product team with a set of business objectives—with measurable goals—and then the team makes the calls as to what are the best ways to achieve those goals. It's part of the larger trend in product to focus on outcome and not output.
Prove Whether Product Ideas Work
Today, especially for consumer companies, we can isolate the contribution of new features, new versions of workflows, or new designs by running A/B tests and then comparing the results. This lets us prove which of our ideas work. We don't have to do this with everything, but with things that have high risk or high deployment costs, or that require changes in user behavior, this can be a tremendously powerful tool. Even where the volume of traffic is such that collecting statistically significant results is difficult or time consuming, we can still collect actual data from our live‐data prototypes to make decisions that are much better informed.
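When we do want that proof, the comparison usually reduces to a standard statistical test. A minimal sketch of a two-proportion z-test under the usual normal approximation, with made-up numbers:

```typescript
// Minimal sketch: two-proportion z-test comparing conversion in the
// control (A) and treatment (B) groups. |z| > 1.96 corresponds to
// statistical significance at the conventional 95% level.
function twoProportionZ(
  conversionsA: number, usersA: number,
  conversionsB: number, usersB: number
): number {
  const pA = conversionsA / usersA;
  const pB = conversionsB / usersB;
  const pooled = (conversionsA + conversionsB) / (usersA + usersB);
  const se = Math.sqrt(pooled * (1 - pooled) * (1 / usersA + 1 / usersB));
  return (pB - pA) / se;
}

// Example: 500/10,000 converted in control vs. 590/10,000 in treatment.
const z = twoProportionZ(500, 10_000, 590, 10_000);
console.log(z.toFixed(2), Math.abs(z) > 1.96 ? "significant" : "not significant");
```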
Inform Product Decisions
In my experience, the worst thing about product in the past was its reliance on opinions. And, usually, the higher up in the organization the person was who voiced it, the more that opinion counted.
Today, in the spirit of data beats opinions, we have the option of simply running a test, collecting some data, and then using that data to inform our decisions. The data is not everything, and we are not slaves to it, but I find countless examples today in the best product teams of decisions informed by test results. I hear constantly from teams how often they are surprised by the data, and how minds are changed by it.
Inspire Product Work
While I am personally hooked on each of the above roles of analytics, I must admit that my personal favorite is this last point. The data we aggregate (from all sources) can be a gold mine. It often boils down to asking the right questions. But by exploring the data, we can find some very powerful product opportunities. Some of the best product work I see going on right now was inspired by the data. Yes, we often get great ideas by observing our customers, and we do often get great ideas by applying new technology. But studying the data itself can provide insights that lead to breakthrough product ideas.
Largely, this is because the data often catches us off guard. We have a set of assumptions about how the product is used—most of which we are not even conscious of—and when we see the data, we're surprised that it doesn't track with those assumptions. It's these surprises that lead to real progress.
It's also important for tech product managers to have a broad understanding of the types of analytics that matter for their product. Many have too narrow a view. Here is the core set for most tech products:
User behavior analytics (click paths, engagement)