Step 7
Based on the findings, the prototypes are improved and/or some variants are discarded. If none of the prototypes works, it is useful to gather more facts and customer needs and adapt the prototypes accordingly. The new prototype variants, in turn, serve as the basis for further tests with potential users.
KEY LEARNINGS
Building prototypes
When prototyping, start with the need of the persona and a trend in the market.
Always build prototypes on the basis of the question of what is to be tested.
Bear in mind that no offer has any intrinsic value. The value that customers ascribe to the offer is all that counts.
Make sure that as many customers as possible ascribe value to the offer.
Test the prototype as early as possible in the real world. Prototypes are assumptions that must be scrutinized.
Use the material that is available to build prototypes.
Create prototypes under time pressure. More time does not yield more results. Time boxing boosts the pressure to get results.
Make sure that the objective and the maturity of the prototype match.
Always schedule enough time for prototyping and testing, across the entire duration of the project.
Involve at an early stage the project team members who will implement the prototype in the end.
Apply “boxing and shelving” to test a potential portfolio while prototyping.
1.10 How to test efficiently
We always receive valuable feedback when we test prototypes with customers in the real world, that is, with potential users or in the users’ environment. Peter knows the importance of user and customer tests and tries to get out of his innovation and co-creation lab with his prototypes as early and as often as possible. His current test of a prototype—an app for monitoring metabolic diseases with the option of receiving help from a team of doctors online—takes him out and about on Bahnhofstrasse in Zurich. Where else would he find the clientele for such an upscale and expensive managed service?
Single-mindedly, Peter walks up to a pretty, elegantly attired lady in her mid-thirties, his prototype in hand. She is just leaving an exclusive shop for handbags and heads, loaded with bags, for her Bentley. She doesn’t look sick or anything, but Peter doesn’t want to start the test with false assumptions. For Peter, the situation is clear: first offer help, come across as a likable fellow, and build up empathy. Peter is glad to carry the large shopping bags for the pretty woman. He asks if she would like to participate in the next big innovation, and two minutes later the two are enjoying a glass of Champagne for the “user test” in the bar kitty-corner from the exquisite shop.
The young lady likes the naive questions Peter asks with the attitude and behavior of a “greenhorn,” so she talks a lot about herself but even more about the illnesses of the older gentlemen with whom she usually spends the warm summer nights in Monte Carlo. After one and a half bottles of Champagne—the mood is cheerful indeed—Priya happens to walk by the bar. At least now we know what triggered Peter’s little marital crisis with Priya. Nonetheless, Peter has learned a lot about the application of his prototype, especially that there is no Wi-Fi coverage on yachts on the high seas, so getting online support from doctors there would be impossible.
Why is testing so important?
In tests with users, it is important to ask “why” in order to learn the real motivation, even if we think we know the answer. Our primary goal in a test interview is to learn, not to give reasons for or sell the prototype. This is why we don’t explain (too early) how it works. We ask for stories and situations in which our potential customers might have needed the prototype. Whenever possible, we collect and analyze quantitative data to validate the qualitative results. This approach allowed Peter to learn a great deal about life in Monte Carlo and on the high seas.
Testing is an essential step in the design thinking process. Not infrequently, decisive change proposals appear during this phase that could enhance the quality of the end result substantially. In particular, the fresh views of people who were not involved in the development of the prototype and thus are much freer in their assessment can pay off quite well in the end. They can see prototypes through the eyes of a customer or user.
HOW MIGHT WE...
design the test sequence?
A test can be broken down into four steps:
1. Test preparation
The best way to start is to define clear-cut learning goals or hypotheses that we want to test:
What do we want to learn?
What do we want to test?
With whom do we want to conduct the test, and where?
In the end, the test should show what parts of an idea we should keep, what we should change, and what we should discard. In the early phases, the goal might also be to understand the problem. Before embarking on the actual test series with various users, an initial test with one person should be carried out to exclude any errors. We leave enough time to implement improvements after the first test prior to conducting more tests.
Define question maps
We formulate simple, clear, and open questions that we can explore in greater depth at the end. They should not be hypothetical but tie in to the real situation of the test person. We do not ask many questions, but rather focus on the core about which we want to gain insights. Courage and focus are important. Less important stuff can be omitted, so we don’t overload the tests. We let the user talk about his experience. As the moderator, we can ask follow-ups when suitable; for example, “Tell us what you think while you do that.”
Determine the test scenario
We reflect on the exact sequence of the test and the situation of the test person and describe it. We provide as much context as necessary and explain it as simply as possible. We let the user experience our prototype and deliberately refrain from explaining the thoughts and considerations behind our prototype. Particularly in phases of the design thinking process in which there are still many iterations, the issue is not, for instance, to find out how much the customer would be willing to pay for a product. Instead, we try to find out whether our idea matches the context and life of our user and, if so, how it fits.
2. Conducting the test
It has been our experience that we achieve the best results when we test multiple ideas or variants of one idea that we have described as a scenario beforehand. This way, the feedback will be far more differentiated. If we only have one solution ready, the user’s response when asked what he thinks of the idea might be rather vague. That usually doesn’t get us very far in terms of our clarifications. When the user goes through several different test arrangements, he can make comparisons, evaluate, and formulate his feedback far more precisely, such as what exactly he finds better or worse in one prototype than in the other. It has become second nature for us to test the prototype in context, namely in its natural environment.
As mentioned, it is better to include more people for observation and documentation in the test, as per the motto, “Never go hunting alone.” Those involved can take on different roles. For example:
The moderator:
As a moderator, we help the user to cross over from reality to the prototype situation and explain the context, so that the user has a better understanding of the scenario. In addition, it is our task as a moderator to pose the questions.
The actor:
As actors, we must take on certain roles in the scenario in order to create the right prototype experience—usually a service experience.
The observer:
The important task for observers is to watch in a focused way everything the user does in the situation. If we have only one observer on the team, it is best to film everything so we can look at the interaction together later in more detail.
Online tools can also be used for testing.
Example:
A wearable in the shape of a belt: when crossing the road, it warns children or people who are peering into their mobile phones. It might have the following variants: (1) vibration, (2) an acoustic warning signal, (3) a voice that says, “Watch out! You’re crossing the road!” or “Caution, bus from the left.”
3. Document results
In our experience, it is of vital importance to document the results. In so doing, we actively observe how the users use (and misuse!) what we have given them. We do not immediately correct what our test person is doing. Photos or video recordings are very suitable for documentation. We always ask the users for their permission. Digital tools make documentation easier, but take care not to forget to use them. To elicit richer answers, we probe with further questions. This probing is very important and often constitutes the most valuable part of the tests. Questions can be, for example, “Can you say more about how it feels to you?” “Why?” and “Show us why this would (not) work for you.” Ideally, we answer questions with questions: “What do you think this button is for?” Resist the temptation to conduct a marketing or voice-of-the-customer survey!
The use of a feedback-capture grid has proven quite useful. It facilitates the documentation of feedback, either in real time or from presentations and prototypes. We use the grid to capture the feedback systematically and deliberately in four main areas.
What do we like?
What wishes do we have?
What questions have cropped up?
Which initial ideas and solutions have we found?
Filling in the four quadrants is pretty easy: We write each piece of user feedback in the suitable quadrant category.
As an alternative, we can choose the following areas for the four quadrants: “I like…,” “I wish…,” “What if…,” and “What is the benefit?”
This method can be easily applied to groups consisting of two to over 100 people. The simple structure helps to formulate constructive feedback.
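For teams that document digitally, the grid translates directly into a simple data structure. Below is a minimal Python sketch; the quadrant keys and the sample entries are our own illustration, not part of the method itself:

# Minimal sketch of a feedback-capture grid as a data structure.
# The quadrant names mirror the four areas described above; the
# sample entries are invented for illustration.
feedback_grid = {
    "likes":     [],  # What do we like?
    "wishes":    [],  # What wishes do we have?
    "questions": [],  # What questions have cropped up?
    "ideas":     [],  # Which initial ideas and solutions have we found?
}

def capture(quadrant: str, note: str) -> None:
    """File one piece of user feedback in the matching quadrant."""
    feedback_grid[quadrant].append(note)

capture("likes", "The vibration warning feels discreet.")
capture("questions", "How long does the battery last?")
capture("wishes", "A quieter acoustic signal for indoor use.")

Whatever the medium, the point is the same: every observation lands in exactly one quadrant, which keeps the documentation systematic.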
Giving feedback is one thing, receiving feedback quite another. When we receive feedback, we should see it as a gift and express our gratitude. We listen to the feedback and do not have to respond in any way. In addition, we should avoid justifying ourselves and simply listen well. At the end, we ask again if we haven’t understood something or if something is still unclear to us.
4. Infer learnings
The insights serve to improve our prototypes and adapt the persona. Going through the iterations is crucial here; it contributes to constant learning.
The purpose of testing is to understand needs better and build up empathy. The approximation and constant improvement—as well as, again, failure and mistakes—achieve the learning effect. We all know the banal-sounding expression “fail fast—fail often.” Early and frequent failure is indeed an important element of design thinking and contributes significantly to realizing market opportunities in the end. At the end of the testing, it is important to document both the findings and the test well and share both with the team.
EXPERT TIP
Carry out A/B testing with your prototype
One possibility for quantitative testing is to carry out an A/B comparison. It is especially suitable for simple prototypes and allows us to test two different versions of a landing page, for instance, or even two versions of an element such as a value proposition or a button. In the case of a Web site, the titles and descriptions of the offers, the text volume, style, promotion offers, length of forms, and boxes can be examined in an A/B test.
To achieve relevant test results, it is important for both versions to be tested concurrently or in tandem and within a predefined, appropriate time period. The final measurement and evaluation as to which version was more successful in the test and which one will be used in the real world must be done on the basis of clearly predefined criteria.
At an early stage of prototyping, we have the test person first experience variant A. Then we find out what the test person likes about it and what he would want changed. Then we repeat the procedure with variant B. Depending on the situation, we can also observe and question one test group about variant A and another about variant B.
Using a landing page, we can check the conversion rate directly in an A/B test by observing the reactions; we simply distribute the page views between version A and version B by means of an A/B testing tool. Only one variable at a time should be changed, so we can find out why one variant is better liked. Such an A/B test shows clearly which version of the Web site gets more registrations. Calculators are available to check the statistical relevance. If a Web site already exists and we want to test a new version B, we can avoid confusing regular visitors by making version B available only to new visitors.
The test can show a result in favor of A or B, respectively, or else no statistically relevant preference at all. Perhaps possibilities can be inferred from the test as to how to combine the best of the two variants.
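The “calculators” for statistical relevance mentioned above typically perform a two-proportion z-test on the conversion rates of the two versions. The following Python sketch, with invented visitor and conversion numbers, shows the calculation such a calculator carries out:

from math import sqrt
from statistics import NormalDist

def ab_significance(conv_a, visitors_a, conv_b, visitors_b):
    """Two-proportion z-test for an A/B conversion comparison."""
    p_a = conv_a / visitors_a
    p_b = conv_b / visitors_b
    # Pooled conversion rate under the null hypothesis of no difference
    p_pool = (conv_a + conv_b) / (visitors_a + visitors_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / visitors_a + 1 / visitors_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal distribution
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return p_a, p_b, p_value

# Invented example: version A converts 48 of 1,000 visitors, version B 72 of 1,000
p_a, p_b, p_value = ab_significance(48, 1000, 72, 1000)
print(f"A: {p_a:.1%}  B: {p_b:.1%}  p-value: {p_value:.3f}")

In this invented example, the p-value comes out at roughly 0.02, so the difference between A and B would count as statistically relevant at the common (though arbitrary) 5 percent threshold.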
What digital tools can be used to test prototypes quickly?
An extremely simple and effective way of taking many users’ feedback into account is the use of a Web-based tool. In recent years, various Software-as-a-Service solutions have emerged with which affordable, efficient, Web-based feedback can be obtained.
With the aid of such a tool, Peter quickly built up an internal feedback community consisting of employees of his company and selected external customers. “Friendly user test,” a term frequently used in German-speaking countries, doesn’t quite hit home. After all, the specific purpose of the test is to identify weaknesses in the design and get suggestions for improvement—which are not necessarily “friendly.” The term “customer trial” used in English-speaking regions is a little better.
Peter has used such a tool for a customer trial several times already, and it has been helpful in his experience. It enables him to obtain feedback in relation to
prototype variants,
procedures, and
images or links through URLs,
and to conduct A/B testing. The number of prototypes is unlimited. One great advantage of such a tool is that additional questions can be asked and there is a great deal of leeway in terms of the makeup of the community surveyed. The segmentation ensures that the feedback matches his needs optimally.
On the same day he sets up the tool, Peter receives some initial feedback. Within only two days, he can give a valid assessment of the prototype variants, based on which he can develop a new product function.
A tool-supported testing approach allows you to obtain structured feedback quickly and easily. When selecting the right tool, the following criteria should be kept in mind:
Does the tool offer the possibility for uploading various types of prototypes?
Is there a possibility for drawing up a scenario? This will give responding users the opportunity to see and understand the situation.
Does the tool enable us to ask predefined and open questions? It pays to spend time on formulating the questions, because they directly affect the feedback and its quality.
Examples of questions:
Evaluate the prototype with 1 star (poor) to 5 stars (really awesome).
What do you like about the prototype?
What would you change in the prototype?
. . .
Another key factor of success is the selection of the feedback community. Ideally, it should not be limited to one’s own organization (university, company, etc.) but instead include the possibility of inviting additional, freely definable respondents for a survey.
Example:
It is useful when experts within an existing community have the possibility of selecting their field of expertise (e.g., channel marketing, big data analytics, accounting). This makes it easier in actual practice to obtain fast feedback, such as from the experts with respect to their expert knowledge.
The deliberate selection of technically accomplished community participants can boost the quality of the feedback, but you should always consider the feedback of nonexperts as well; because they are less professionally blinkered, they often have a fresh viewpoint.
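Taken together, the criteria above describe the shape of the survey definition such a tool would expect. The following Python sketch is purely illustrative; the field names and structure are our assumptions, not the API of any particular product:

# Illustrative survey definition for a Web-based feedback tool.
# All field names and values are invented; real SaaS tools expose
# comparable, but product-specific, structures.
survey = {
    "scenario": "You are crossing a busy road while looking at your phone.",
    "prototypes": [
        {"name": "Variant A: vibrating belt"},
        {"name": "Variant B: acoustic warning signal"},
    ],
    "questions": [
        {"type": "rating",
         "text": "Evaluate the prototype with 1 star (poor) to 5 stars (really awesome)."},
        {"type": "open", "text": "What do you like about the prototype?"},
        {"type": "open", "text": "What would you change in the prototype?"},
    ],
    # Community segmentation: internal staff plus freely definable
    # external respondents, optionally filtered by field of expertise.
    "community": {
        "internal": True,
        "external_invites": ["friendly.user@example.org"],
        "expertise": ["channel marketing", "big data analytics"],
    },
}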
EXPERT TIP
How do we visualize prototypes for tests in digital tools?
A prototype is the visualization of an idea. It can be a sketch, a photo, a storyboard, or a chart. Any offer can be visualized as a prototype early on and made available to a tester community for feedback.
HOW MIGHT WE...
conduct and document experiments in a structured way?
During the early phases of the innovation process, we frequently test several assumptions concurrently and learn on several levels. However, we recommend that you reflect before each test on what exactly you would like to learn and what the key question is. We also ask ourselves which assumptions we would like to test and how we can design the test scenario in such a way that the user can experience them.
Over the course of the further development of the product or service, we test our assumptions again and again and conduct experiments continuously. In the early phases of the innovation process, the prototypes are normally very simple. Often, several variables are tested at the same time. For the testing in later project stages, other types of experiments with customers (e.g., online tests, A/B testing, etc.) can be conducted. Here we usually focus on a single test variable or assumption.