
Traversing the Traction Gap

by Bruce Cleveland


  Revenue

  Focus on a subset of a market, then expand outward from there.

  Team

  Hire only the best people, preferably those you’ve worked with previously.

  Systems

  Keep your systems incredibly lightweight at this stage.

  Traction Gap Hacks ▶ IPR

  The following are four of several methods you can use to capture market signals during the Ideation-to-IPR stage. They are drawn from some of the best product practices at Silicon Valley companies.

  Discovery Interviews

  How do you generate ideas? If you are interested in finding problems to solve and features to build, I recommend you use Discovery Interviews. This is an in-depth interview technique designed to uncover new insights from stakeholders regarding:

  business strategy

  product strategy/product roadmaps

  visual brand strategy

  positioning and messaging

  website strategy and design

  marketing strategy and planning

  content strategy

  customer experience strategy

  Discovery Interviews are a great way of developing tens, if not hundreds, of new ideas. And the more interviews you conduct, the deeper the insights they generate.

  You typically should work with groups of 5, 10, or 20 interviewees if you are focusing on B2B. Why? If you are looking for a specific job title, you need a larger pool to have a better shot at tracking that person down. LinkedIn is a very effective way to find and reach the right people with the right titles for your B2B interviews.

  On the other hand, if you are focusing on B2C, you may want to try a different tactic altogether. Why not go to a coffee shop and offer the next person a $5 gift card toward a coffee if they are up for answering questions for 10 minutes?

  Which coffee shop? It depends. For example, you can use geography to pick a coffee shop near a college or a military base if that is who you are looking for.

  What if you were looking for young mothers? Be creative and head to a park or neighborhood where you see lots of strollers in the middle of the day.

  Once you have your subject locked in, you can start the interview process. Remember, this is a Discovery Interview, which means you must be careful not to bias the conversation.

  Tips for conducting a Discovery Interview:

  ask open-ended questions first

  ask clarifying questions next

  ask them to evaluate what others have said next

  finally, ask them to make a choice (impose a cost): "Of these options, what are your top 3?"

  It is important to impose a cost, because people are full of ideas. Some of those ideas are good ideas. The problem is that customers are very smart about what they want, but they are not as smart about what it costs to make those dreams a reality.

  It is critical then to determine just how badly customers really want a new feature or a new product. Imposing a cost means forcing them to confront an opportunity cost, such as “Would you give up two hours of battery life to make that happen?”

  Ideally, record audio of the interviews (video is too intimidating for many subjects), because handwritten notes are rarely accurate. To record, you will need permission, and gaining permission can sometimes be a bit of an impediment. But I have found that people become accustomed to being recorded pretty quickly. It helps to keep the microphone small and unobtrusive. Even better, several phone and tablet applications let you easily record interviews with no external microphone required.

  Large-Scale Surveys

  The beauty of Discovery Interviews is that they can generate lots of new ideas. Those ideas can translate into new proposals for features or products. What Discovery Interviews are not great at is giving you a real sense of what to prioritize at any given time.

  Even in a large interview study of, say, twenty people, you can end up concluding that you've uncovered a significant trend or preference . . . only to find out that you merely had twenty people with a particular bias, and that a larger pool would have shown otherwise.

  That is why you should supplement your interviews with validation surveys. Validation surveys allow you to get a better sense of the overall market by tapping many more perspectives. You can take the 50 to 100 ideas that came out of a typical set of Discovery Interviews and ask people (especially noncustomers) what they think.

  The key is to understand that you are using validation surveys to get a sense of preference. The instrument is not exhaustive, and that is intentional.

  Greater depth is not always better if it degrades the quality of people's answers. Asking someone to rank-order 10 items is reasonable, but asking them to rank-order 100 produces junk data for anything beyond the first 10.

  Some basic features of a good validation survey include:

  Questions that ask participants to rate all features on a scale from 1 to 5 or 1 to 7. There is lots of research out there on when you should prefer one of these scales to the other.

  Questions that include a stack rank feature. This gives better relative detail since the interview subjects are literally choosing how they would rank their preferences. As a helpful tip, any more than 10 items tends to become too difficult for participants.

  Questions that ask the participants to “pick the ones you like.” This is great for usability studies, especially for a larger list of more than 15 items. As a helpful tip, limit the number of “picks” a person can make so you don’t get the dreaded “they are all fine” response.

  Questions that are a simple thumbs-up or thumbs-down. Once again, this is great for usability studies when you have a larger list to work with.

  Questions that ask the audience to consider spending a specific amount of money, say $100. Ask the participant to allocate the funds across a list to determine what he or she would really be interested in buying.

  The best survey mixes all these question types, approaching the subject matter from multiple directions and modalities to triangulate on the truth.
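
  To make these question types concrete, here is a minimal sketch in Python (standard library only) of how each one might be scored. All feature names and responses are hypothetical.

  from collections import Counter
  from statistics import mean

  # 1-to-5 rating scale: compare features by mean rating.
  ratings = {"featureA": [4, 5, 3, 4], "featureB": [2, 3, 2, 4]}
  rating_scores = {f: mean(v) for f, v in ratings.items()}

  # Stack rank: Borda-style scoring. First place among n items earns
  # n points, last place earns 1; the highest total wins.
  rankings = [["featureA", "featureB", "featureC"],
              ["featureB", "featureA", "featureC"]]
  borda = Counter()
  for ranked in rankings:
      for position, feature in enumerate(ranked):
          borda[feature] += len(ranked) - position

  # "Pick the ones you like" (with limited picks) and thumbs-up/down
  # both reduce to simple vote counts.
  picks = [["featureA", "featureC"], ["featureA"]]
  pick_votes = Counter(f for chosen in picks for f in chosen)

  # $100 allocation: average budget share signals purchase intent,
  # not just mild preference.
  allocations = [{"featureA": 70, "featureB": 30},
                 {"featureA": 40, "featureB": 60}]
  features = {f for a in allocations for f in a}
  avg_allocation = {f: mean(a.get(f, 0) for a in allocations)
                    for f in features}

  print(rating_scores, borda.most_common(), pick_votes, avg_allocation)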

  In addition to these types of questions, consider going a bit more advanced and employing MaxDiff (maximum difference scaling) and conjoint analysis instruments. These can yield genuinely insightful data about the trade-offs respondents are willing to make.

  Once you have your data, you will want some basic rules and tools for interpretation. You have probably taken a statistics course or two over the years, so I won't regurgitate it all here, but make sure you have a grasp of the following (a brief sketch appears after this list):

  confidence level and margin of error (confidence interval)

  directionality and statistical significance

  distribution type

  one-tailed vs. two-tailed tests

  Z-scores
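
  To ground these terms, here is a minimal sketch (Python, standard library) of a margin-of-error calculation for a survey proportion; the sample numbers are hypothetical.

  from math import sqrt

  n = 400    # survey respondents
  p = 0.62   # e.g., 62% ranked a feature in their top 3
  z = 1.96   # z-score for a 95% confidence level, two-tailed

  # Margin of error for a proportion (normal approximation).
  margin = z * sqrt(p * (1 - p) / n)
  print(f"95% confidence interval: {p - margin:.3f} to {p + margin:.3f}")

  # The result is directionally significant at the 95% level only if
  # the interval excludes the comparison value (0.5 for "more than
  # half prefer it").
  print("significant vs. 50%:", not (p - margin <= 0.5 <= p + margin))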

  Surveys are an important way to move from the ideas that are generated by Discovery Interviews toward an actual sense of the preferences of the market. Given the choice of using them or not, in most circumstances you should put them to work.

  Companies such as Obo, which provide product teams with the ability to build market-first products, typically offer their own courses on how to do statistical validation and teach teams how to use their application suites.

  Smoke Tests

  Discovery Interviews generate ideas, and surveys help you take those ideas and discern a market preference. With luck, that will narrow your list to the top 10 ideas for features or products.

  Now you need to know how many people would actually go beyond preference to buy the items in question.

  One way to start getting a sense of real market demand is to run smoke tests. The goal here is to see how big the very top of the funnel is for each of the ideas we have generated and tested through surveys.

  Each funnel typically has an outcome, such as increased revenue, virality, conversion rates, engagement rates, retention, etc., but the critical question is how many people are at the very top and willing to try the product at all.

  When I have seen a product or feature fail, it is often because of poor performance at the very top of the funnel. For these doomed products, there is only a small percent of users or target market that ever tries the product or feature. With such a small input, the number of potential customers that can be induced to buy becomes too small for profitability at the bottom of the funnel.

  So if you believe that 25 percent of your target market is excited about your product and will try it right away . . . but only 2 percent actually do give it a try, then it doesn’t matter that you have a 100 percent conversion rate for the rest of the funnel. The top of the funnel is simply too small to make that product a success.
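
  A few lines of arithmetic make the point. This sketch assumes a hypothetical target market of 100,000 people.

  TARGET_MARKET = 100_000

  def buyers(top_of_funnel_rate, downstream_conversion):
      # People who try the product, times the rate at which triers buy.
      return int(TARGET_MARKET * top_of_funnel_rate * downstream_conversion)

  # Believed: 25% try, and 20% of triers buy -> 5,000 customers.
  print(buyers(0.25, 0.20))
  # Actual: only 2% try. Even perfect downstream conversion yields
  # just 2,000 customers, fewer than the flawed plan promised.
  print(buyers(0.02, 1.00))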

  How do you get a good sense of what percentage of users or target market will initially try a product or feature before you invest the time and money into writing a single line of code?

  This is where smoke tests come in.

  Smoke Test #1: Ad

  Description: You show an ad for a prospective product or feature to a channel (e.g., LinkedIn, Facebook, or Google). A user clicks on the ad and ends up on a landing page. The page provides a short 2-to-4-sentence description of the product or feature.

  After this description, the page says something like “Thank you for your interest. We are in the process of building this. If you would like to be notified when it is ready, please provide your email below.”

  The clicks on the ad will show you the reach for the concept. How many people are interested in the product or feature? The email addresses show how strong the interest really is because it imposes a cost—surrendering personal data—on the potential user. It is not as high a cost as a pre-order deposit; but it still shows interest, because no one wants to risk being “spammed” unless they are truly interested.
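
  Here is a minimal sketch of scoring such a smoke test in Python; the traffic numbers are hypothetical.

  from math import sqrt

  impressions = 50_000   # times the ad was shown
  clicks = 1_000         # landing-page visits (reach of the concept)
  emails = 120           # addresses left (strength of interest)

  ctr = clicks / impressions       # top-of-funnel reach
  signup_rate = emails / clicks    # willingness to pay a small cost

  # 95% confidence interval on the signup rate (normal approximation),
  # so concepts can be compared fairly at different traffic volumes.
  margin = 1.96 * sqrt(signup_rate * (1 - signup_rate) / clicks)
  print(f"CTR {ctr:.2%}, signup {signup_rate:.2%} +/- {margin:.2%}")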

  Smoke Test #2: Fake Door

  Description: You use a widget that shows a pop-up or “toaster” ad in your website or app that describes the new concept or feature.

  A toaster ad is an animated mini-commercial overlaid on a video. When a toaster ad appears onscreen, the viewer has the option to learn more about the commercial and is then taken to a "player within the player," where he or she is encouraged to interact with the advertiser's content. The video then resumes once the user is finished viewing the ad.

  After the short description, there is a “Try it now” button. When the user clicks the button, he or she sees a second pop-up that says “Thank you for your interest. We are in the process of building this. If you would like to be notified when it is ready, please provide your email below.”

  Once again, the clicks show reach and the email addresses show motivation.

  These smoke tests can help you determine how large the top of the funnel is before you make a serious investment in your new product or feature.

  With the mouth of the funnel measured, you can now focus on the throat of the funnel—the actual sales process—secure in the knowledge that you will have enough sales prospects to work with once the product is actually available.

  A/B Testing

  Once you are at the design stage and ready to proceed with specific decisions, you need to introduce A/B testing. A/B testing shows different users multiple variations of the same experience concurrently.

  The primary benefit of A/B testing is that you avoid the problems associated with traditional pre/post testing, in which a subject interacts with variations of the same product and is biased toward one simply because of the order in which they were presented.

  There are times when you may not be able to perform concurrent operations. For example, you may not be in a position to run two databases from different vendors at the same time. So you have to be creative and do the best you can do within your constraints.

  When A/B testing began to become more mainstream, there were two basic objections to this methodology.

  First, there were no frameworks; it took a lot of custom coding and analysis to perform the testing properly.

  Second, people argued that they did not have time to build a product more than once for testing.

  In time, the first objection was remedied with technology; today there are lots of commercial products available to help with A/B testing. The second objection has been resolved through experience.

  Invariably, when people release a new product or major feature, it performs at only about 30 to 40 percent of expectations in terms of revenue or conversion rates or engagement. That’s bad enough; but if you release the product or feature without doing A/B testing, then you will have no real understanding why something is failing to perform up to expectations.

  Many companies assume that the failure is due to those 3 or 4 sub-features that were left out. They then force the release of the new features . . . and the needle only moves a couple of points. That’s when they panic.

  Two things tend to happen next: companies either move on to other products or features in hopes that the next bright and shiny option will be more successful; or they simply load the current offering up with even more features. But, as often as not, this gesture fails as well, because they still don’t understand the why of their depressed customer demand.

  A/B testing breaks this vicious cycle because, when you release the 3 or 4 variants for testing, you get a comprehensive overview of what is and what is not working.

  Experience suggests that if you release four variants, then one or two will work and one or two will not. You should update your A/B experiments every week, killing off the poorly performing variants and refining the ones that work.

  A simple analogy helps reveal the power of good A/B testing. Imagine that you go to your car mechanic because you noticed an electrical issue. Perhaps the dashboard lights are going on and off. The mechanic does no troubleshooting and instead just starts replacing parts . . . on your dime. The mechanic may randomly end up actually fixing the problem; but without any troubleshooting, the mechanic can’t inform you what the problem was in the first place—and you have spent a lot of money for that ignorance.

  After 5 or 6 weeks of this kind of testing, you can usually go from 40 percent of expectations to 80, 90, and sometimes even 120 percent of expectations. A/B testing ultimately is a methodical and data-driven approach to troubleshooting issues and isolating possible root causes. By comparison, nonsystematic testing may eventually identify a problem, but it provides no answers to prevent a similar problem from recurring.

  When designing your A/B tests, here are several key principles you should follow:

  Typically limit to 2 to 4 variants for any given test.

  Keep the test population the same size for each variant. Thus, if 2 percent of population sees variant 1, then keep it to 2 percent for variants 2, 3, 4, etc. This makes it easier to compare results and avoid any scaling issues (sometimes when you double the population, the effect is nonlinear and may go up by only 50 percent instead of the expected 100 percent).

  You always need to have at least one control, so you have a benchmark to see how much each variant is better (or worse) than what you currently have.

  Testing in B2C is relatively easy—you may have millions of people and can get statistically relevant results (usually thousands of clicks per metric) in a few days. Plus, if you upset anyone, the impact on your user base or revenues is likely very small.

  B2B, by contrast, has issues with scale: it could take weeks or months to reach statistical significance. Plus, if you upset a customer, it could have a real impact on revenues and user base.

  If your testing population is not randomly distributed across the controls and variants, your results will not be valid and your testing must be restarted and old data thrown away.

  You should consider having more than one control. Why? You can compare the results of the two controls; if they are not fairly close, that highlights randomization issues and the need to restart the experiment. The downside is that a second control takes up samples, requiring more time to become statistically relevant.
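
  To make the comparison concrete, here is a minimal sketch (Python, standard library) of a two-proportion z-test for one variant against a control; all counts are hypothetical. Running the same function on your two controls should show no significant difference; if it does, suspect a randomization problem.

  from math import erf, sqrt

  def two_proportion_z(conv_a, n_a, conv_b, n_b):
      # Returns (z, two-tailed p-value) for conversion counts conv out of n.
      p_a, p_b = conv_a / n_a, conv_b / n_b
      pooled = (conv_a + conv_b) / (n_a + n_b)
      se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
      z = (p_a - p_b) / se
      p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
      return z, p_value

  # Equal-size populations, as recommended above (e.g., 2% of traffic each).
  z, p = two_proportion_z(conv_a=260, n_a=5_000, conv_b=200, n_b=5_000)
  print(f"z = {z:.2f}, p = {p:.4f}")  # p < 0.05 suggests a real difference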

  A/B tests are considered the gold standard for data on what works and what does not, but there are real limits on what you can test. Further, some B2C companies focus so much on A/B testing that they lose sight of the overall market.

  5

  GETTING TO MINIMUM VIABLE PRODUCT

  Congratulations!

  You’ve come a long way since Ideation. You are now formally in the early stages of the actual Traction Gap—it began when you reached your Initial Product Release (IPR)—although you are still in the middle of creating a product that your team can aggressively take to market. Everything you do at this point must be focused on reaching Minimum Viable Product (MVP) as quickly as you can, but not so quickly that you ignore this stage’s essential building blocks, which include determining product quality and product usage rates. You must also complete basic market-engineering tasks such as developing preliminary pricing models, value propositions, product demonstrations, and sales enablement content (e.g., white papers and video testimonials).

  You should now have your initial team in place, defined (or redefined) your Minimum Viable Category, performed statistically valid market-first research, raised a seed or even a full round of financing, and placed the first version of your product (IPR) into the hands of consumers or business users.

  Industry data—from startups that have successfully scaled—suggests that after reaching IPR, you have only about six months to reach MVP. Needless to say, you are already behind, even when you begin.

  To achieve this growth objective, you must develop and implement a well-functioning Beta program so you can wring out the bugs, polish the user interface, and take care of some of that technical debt you’ve accumulated along the way.

  At this point, everyone on your product team should be focused on the Beta program. Meanwhile, people with marketing, product marketing, and customer support skills should be preparing company launch plans (e.g., media strategy, initial revenue systems, and support systems) and tracking the metrics associated with reaching MVP.

 
