The way the team shares work will vary depending on what solutions you have and where you are in the process. From sharing rough napkin sketches with stakeholders, to acting out proposed services or processes, to creating prototypes of products to test with users, what’s important is not so much how you go about testing the team’s ideas, but that you do it regularly and take what you hear to heart.
The level of fidelity you show should match your level of confidence in your ideas. When your prototypes of an initial concept are polished and seem finished, people may think the work is further along than it is. Their feedback may be about details, rather than the big idea. This is not to say you should tell those who test out a low-fidelity prototype that you don’t want to hear their opinions on details. But it does mean that you should focus your inquiry on the areas you are prepared to take input on. Whether you are working with external customers or internal colleagues, never shut down an avenue of feedback. Even if you aren’t prepared or mandated to do something about it, shutting it down in one area runs the risk of shutting it down where you do want to hear it. Instead, take a note and move on. You can always share it with those responsible for that aspect, or discard it in your own analysis.
Be careful not to focus only on building artifacts to share without going through the expansive thinking and decision-making we looked at in the last chapter. I hear from many people, often those who are experiencing “collaboration fatigue,” that “just getting something done” is a more effective use of time. If a team isn’t able to come together long enough, and productively enough, to do things like set a brief or throw away constraints, prototypes are just manifestations of someone’s pet ideas. Certainly, it’s better to have something tangible for your customers and stakeholders to react to. As my former boss, John Edson of McKinsey Digital, says, “You can’t steer a parked car,” and building something is one way to get the car going. You can always pull a 180 if needed. Prototypes are a great way to learn more about how good an idea is, but it’s always wise to make sure you’ve set some clear objectives (Chapter 6) and thought through a few different approaches using divergent thinking (Chapter 7), rather than just jumping into prototyping the first idea.
Be Disciplined About Gathering Feedback
As you prepare your plan, think about how you will keep track of the feedback you receive. If you are conducting a large-scale quantitative study, you’ll likely be gathering a mix of freeform discussion and more specific, bounded questions. If the approach is qualitative, you will be collecting richer, less structured info from fewer people. Capturing these two forms of information requires different approaches.
For the more open-ended discussion, be sure to have an outline of the kinds of questions to ask, but don’t feel beholden to following it slavishly. Appoint one person to be the dedicated note taker for each session, and if possible keep that role consistent among one or two people across sessions. Provide note takers with a script to mark up.
The key things you should look for are:
How well the person understands what you are showing at first. Do they ask clarifying questions? Are they able to jump in and try something, or do they hesitate?
How eager they are to try it. Skepticism isn’t necessarily bad, but it can be an indication that the person doesn’t understand or get the value of what you are showing.
Whether they can use the artifact without help, or whether they make mistakes.
For more bounded data, you can ask people questions verbally or provide a survey to complete. I find that asking people to complete forms takes them out of the moment and leads to comments you may not understand later, but if there’s a large amount of information to capture it may be required.
It’s also useful to take images of key points or make recordings to share highlights with those who weren’t there. Not every situation lends itself to this, so think through how you’ll capture some of the more expressive aspects of sessions. I suggest having a checklist of images you want for the note taker so they don’t forget in the heat of the moment.
One question that comes up often is about sample size. There are many different opinions about how many people you need to speak with, depending on your situation. A rule of thumb I use for one-on-one qualitative research in this part of the process is to start with five to seven people per segment (e.g., sales people versus managers) and see if you can identify patterns among them. If what you are learning doesn’t seem to lead in a direction with that many people, you can add more.
Making Use of What You Learn
Because we are so used to showing only “finished” work, rather than intentionally subjecting partial and imperfect work to scrutiny, we don’t have a lot of experience making use of feedback on what we create. We may have experience taking a manager’s input or including subject-matter expert information, but when we expose our somewhat raw thinking to strangers or those we don’t know well, we tend to get defensive or overwhelmed. It’s also common to feel like receiving anything but glowing acceptance means we’ve failed. Mastering collaboration means helping people get over themselves to use what they find to strengthen ideas, and to avoid face-plants caused by problems we never saw coming.
It’s also a good idea to check in after every session and have the team do a quick round-up of what they’re hearing. You don’t want to have them drawing concrete conclusions based on the first thing they hear, but by calling out big insights, or by discussing things that should change about the way the session is moderated, the team can stay on the same page and keep things fresh.
Don’t Get Defensive
The first thing to continually remind teams testing out their work is that the feedback isn’t personal. The information won’t make it into performance reviews; it doesn’t go on a permanent record. A better way to think about testing ideas is as an exploration to incorporate more perspectives, or as an evaluation, not a pass/fail test. The idea is not to see whether an idea passes muster, but how well it works. This means avoiding talking in absolutes and coaching people to ask, and listen for, “how well” and “why” something works or doesn’t.
Watch for team members who try to argue against feedback that’s inconvenient. While it’s possible that there’s a study participant who “doesn’t get it,” it’s more likely that the participant isn’t understanding something that seems clear to the team. Common causes of disconnects (besides faults with the idea itself) are:
Use of jargon or shorthand that isn’t shared
Help teams avoid this by editing their scripts to remove things that seem to be inside baseball, especially when testing with users. It’s also good to have multiple ways to explain the problem and the solution.
Failure to ground the solution in a problem
We sometimes assume that a problem is understood and experienced the same way for everyone. If the participant seems confused, try explaining the problem being addressed before launching back into explaining the solution.
Moving too quickly
Because you are familiar with the work, you may tend to rush through any context setting when sharing it. The note taker or others in the session can be helpful in applying the brakes if the person leading the session is going too fast.
I also look out for when study participants ask things like, “Can it do X?” about something the team has considered but hasn’t shown for whatever reason. The tendency is to rush to explain all of the things that aren’t in what’s being tested, which further distracts the participant. Instead, I coach people to ask, “Why would that be useful?” so that the discussion can stay focused on hearing from the participant.
How to Handle Different Opinions
Often people who aren’t professional researchers complain that, in talking to study participants, “everyone says something different,” and they struggle to make sense of what feels like random feedback or an endless series of personal preferences. Divergent feedback is often tied to key behavioral differences in your participants, like their level of experience in a domain or whether they use solutions on the go versus in a set location.
Illumina, a genetic sequencing and analysis company in Hayward, California, wanted to understand how scientists used their tools in a variety of settings, from large research labs like the Broad Institute at MIT, to small independent investigations in labs at Stanford. We visited different settings to see the equipment in action; what we were looking for wasn’t about the fundamental science being done, but rather how the different environments affected the usage of the equipment. At a large organization researchers handed off samples to be processed by large teams of specialists, while in a small lab, the researchers themselves had to process their samples. This meant that the actual users were quite different in the two settings. We used the differences in what we heard to support two distinct usage modes—one for high-frequency users who were very close to the equipment but not the science, and one that guided scientists working on their own experiments less frequently. Your team should look for these types of patterns and divergences in what they are hearing and not get lost in the details of each individual user.
Sometimes feedback is divergent because people have different reasons for offering it. In writing this book I had a circle of thoughtful reviewers who helped me shape the raw material into what you are reading today. There were several parts where one person commented on a section saying, “I love this, don’t change it,” while another said, “I’d get rid of this.” In those cases, I needed to focus on why they were offering that guidance to decide what to do with it. If you hear something that runs counter to what others have said, it’s a good idea to ask why they are making the suggestion or critique.
Don’t Fear Failure
Silicon Valley loves the “fail fast” maxim, which has been a point of contention for many. Most people with experience “failing” understand this to mean that not passing the test with flying colors is actually a chance to learn, not a prompt to throw everything away or become discouraged.
When I worked on a device that enabled people with reading disabilities or vision impairments to “read” text by photographing it and having it read aloud back to them, our first prototype was an unmitigated disaster. Because the core technology of the product was a camera, we had borrowed the form factor from cameras, requiring people to hold the back of the device facing what they were capturing. But while cameras are meant to photograph the world around you, often text you are reading is lying flat in front of you. As we watched little old ladies with shaking hands try to hold a heavy brick in an awkward position, we cringed. When we regrouped afterward, the predominant feeling in the room was pain and shame for having a terrible solution. The team sat with their discomfort for half an hour, but then the tone shifted. One person suggested a simple fix: to relocate the camera to the bottom of the device, making it easier to hold, and more flexible for other uses as well. Feedback may not always be positive, but it’s always useful.
Experiencing failure is actually a great way to find what works, as each failure closes off an option, leaving fewer to explore. Katherine Johnson, the NASA mathematician whose calculations enabled our first trips to the moon and more, has described her role as follows: “We were error checkers. We did the math the men didn’t want to. We were experts in error and failure. I simply kept fact checking all the errors until the only thing left was the answer.”
Troubleshooting Getting Feedback
It can be scary to put work that isn’t “finished” in front of those it’s intended to serve. This section covers some common challenges you may encounter and ideas for how to get past them so you can learn what works, and what doesn’t.
Participants Are Confused
During an evaluation session you may notice people aren’t quite following along. That could be a sign of a bad idea or solution being tested, but more likely it’s because the person leading the session is moving too quickly or not explaining things well. When participants get lost, their feedback is likely to be negative because they feel defensive or “stupid.”
So what can I do?
Have a “high sign”
I make sure that in every session the person leading the discussion checks in periodically with other teammates in the room, but it’s also good to arrange a signal that tells the moderator to stop and check in. This can be a simple hand raise, or it may be a verbal cue that isn’t totally interruptive of the session. You don’t want others just jumping in or trying to clarify; instead, let the moderator know to stop and ask questions of teammates.
Rehearse
Do a couple of dry runs with people who aren’t familiar with the effort to weed out jargon, and work on the pacing of the introduction and questions. You don’t need to take the feedback from these rehearsals too seriously, though you never know when a big “a-ha!” might arrive based on a newcomer’s perspective.
Regroup after every session
You can always improve the way sessions go, so encourage the team to look back together at the questions being asked and the speed of the sessions to see if there are refinements that need to be made to get better feedback. Make sure that the whole team is holding the moderator accountable for keeping things clear and well paced.
Leading the Witness
A frequent mistake inexperienced moderators of feedback sessions make is to start asking yes/no questions, or leading people by asking, “Don’t you think that…” instead of leaving things more open. This is especially prevalent toward the end of a series of sessions where the team has started to see patterns and seeks confirmation rather than critique.
So what can I do?
Follow the “script”
If you notice this behavior, direct the person leading the session back to the outline you prepared. You can use a hand signal during the session that means, “keep it open-ended,” so that the flow isn’t interrupted and the moderator isn’t being criticized in front of participants. But you don’t want to taint your findings with responses that have been coached.
Focus on asking “why?”
Coach the team on and even practice asking open-ended questions, not just in feedback sessions with those who are testing out ideas, but with stakeholders as well. Instead of “Does this help you do X?” try “How well does this help you do X?” When the team hears feedback that is either surprising or consistent with a pattern, it’s still always a good idea to ask why or say, “Tell me more about that,” so that they can gain meaningful insight from the answers they are getting.
Too Many Observers
It can be exciting to hear directly from those you are serving, and at times there are those outside of the core team who will want to be in sessions because they have a relationship with the participants or are just very curious. It’s great when people want this exposure, since it’s often both inspiring and a source of great information, but having more than three people in a room can be overwhelming to a single participant.
So what can I do?
Have people watch or listen remotely
Setting up a video call or conference call for others to listen in from outside the room is useful. It’s also a good way to conduct sessions when you can’t be face-to-face. Since many calling systems have recording capabilities built in, this is also an easy way to record what you are hearing.
Mix it up
Have different people attend different sessions so that a wider group gets exposure to users without overloading the participant. It’s important that there’s consistency among those who are capturing findings and preparing to report them.
Conclusion
Just as you need to be intentional about going broad to get new ideas, you also need to be sure to share them early and often. Sharing ideas isn’t so much about asking stakeholders and subject-matter experts for their opinions, but about getting solutions into the hands of their intended users to see how they actually perform. Testing ideas can take many forms, from explaining a scenario, to sharing rough sketches, to creating a prototype. Teams should be sure not to make ideas that are very rough look more “finished” than they are, so that participants don’t get misled by details that haven’t been thought through.
Sharing work will help you learn and avoid blind spots. Teams can mitigate the risk of making mistakes that carry consequences by testing their ideas out in a safe “lab” setting, with a small group. Be sure that you actually listen to the feedback you get and make use of what you learn, because it can be tempting to get defensive and dismiss negative feedback if you aren’t ready to hear it.
Key Takeaways
Be sure the team shares the work they do with those it’s intended to serve early and often. Getting outside perspectives on ideas is a great way to find their flaws and refine them.
When the going gets rough, teams naturally turn inward to protect themselves. It’s especially important when things go wrong to gather outside perspectives and not let stress or fear lead you to simply make course correction after course correction.
Conducting research with end users takes discipline to hear a range of perspectives and make sense of them. Teams should be clear about what they are looking to learn, and create artifacts that help them learn it.
Just asking for feedback isn’t enough. Teams need to be able to take in positive and negative feedback in a constructive way to make use of it and get the benefits of outside perspectives.