Think Again: The Power of Knowing What You Don't Know

by Adam Grant


  After forty-four minutes in space, Luca felt something strange: the back of his head seemed to be wet. He wasn’t sure where the water was coming from. It wasn’t just a nuisance; it could cut off communication by shorting out his microphone or earphones. He reported the problem to Mission Control in Houston, and Chris asked if he was sweating. “I am sweating,” Luca said, “but it feels like a lot of water. It’s not going anywhere, it’s just in my Snoopy cap. Just FYI.” He went back to work.

  The officer in charge of spacewalks, Karina Eversley, knew something was wrong. That’s not normal, she thought, and quickly recruited a team of experts to compile questions for Luca. Was the amount of liquid increasing? Luca couldn’t tell. Was he sure it was water? When he stuck out his tongue to capture a few of the drops that were floating in his helmet, the taste was metallic.

  Mission Control made the call to terminate the spacewalk early. Luca and Chris had to split up to follow their tethers, which were routed in opposite directions. To get around an antenna, Luca flipped over. Suddenly, he couldn’t see clearly or breathe through his nose—globs of water were covering his eyes and filling his nostrils. The water was continuing to accumulate, and if it reached his mouth he could drown. His only hope was to navigate quickly back to the airlock. As the sun set, Luca was surrounded by darkness, with only a small headlight to guide him. Then his comms went down, too—he couldn’t hear himself or anyone else speak.

  Luca managed to find his way back to the outer hatch of the airlock, using his memory and the tension in his tether. He was still in grave danger: before he could remove his helmet, he would have to wait for Chris to close the hatch and repressurize the airlock. For several agonizing minutes of silence, it was unclear whether he would survive. When it was finally safe to remove his helmet, a quart and a half of water was in it, but Luca was alive. Months later, the incident would be called the “scariest wardrobe malfunction in NASA history.”

  The technical updates followed swiftly. The spacesuit engineers traced the leak to a fan/pump/separator, which they replaced moving forward. They also added a breathing tube that works like a snorkel and a pad to absorb water inside the helmet. Yet the biggest error wasn’t technical—it was human.

  When Luca had returned from his first spacewalk a week earlier, he had noticed some droplets of water in his helmet. He and Chris assumed they were the result of a leak in the bag that provided drinking water in his suit, and the crew in Houston agreed. Just to be safe, they replaced the bag, but that was the end of the discussion.

  The space station chief engineer, Chris Hansen, led the eventual investigation into what had gone wrong with Luca’s suit. “The occurrence of minor amounts of water in the helmet was normalized,” Chris told me. In the space station community, the “perception was that drink bags leak, which led to an acceptance that it was a likely explanation without digging deeper into it.”

  Luca’s scare wasn’t the first time that NASA’s failure at rethinking had proven disastrous. In 1986, the space shuttle Challenger exploded after a catastrophically shallow analysis of the risk that circular gaskets called O-rings could fail. Although this had been identified as a launch constraint, NASA had a track record of overriding it in prior missions without any problems occurring. On an unusually cold launch day, the O-ring sealing the rocket booster joints ruptured, allowing hot gas to burn through the fuel tank, killing all seven Challenger astronauts.

  In 2003, the space shuttle Columbia disintegrated under similar circumstances. After takeoff, the team on the ground noticed that some foam had fallen from the ship, but most of them assumed it wasn’t a major issue since it had happened in past missions without incident. They failed to rethink that assumption and instead started discussing what repairs would be done to the ship to reduce the turnaround time for the next mission. The foam loss was, in fact, a critical issue: the damage it caused to the wing’s leading edge let hot gas leak into the shuttle’s wing upon reentry into the atmosphere. Once again, all seven astronauts lost their lives.

  Rethinking is not just an individual skill. It’s a collective capability, and it depends heavily on an organization’s culture. NASA had long been a prime example of a performance culture: excellence of execution was the paramount value. Although NASA accomplished extraordinary things, they soon became victims of overconfidence cycles. As people took pride in their standard operating procedures, gained conviction in their routines, and saw their decisions validated through their results, they missed opportunities for rethinking.

  Rethinking is more likely to happen in a learning culture, where growth is the core value and rethinking cycles are routine. In learning cultures, the norm is for people to know what they don’t know, doubt their existing practices, and stay curious about new routines to try out. Evidence shows that in learning cultures, organizations innovate more and make fewer mistakes. After studying and advising change initiatives at NASA and the Gates Foundation, I’ve learned that learning cultures thrive under a particular combination of psychological safety and accountability.

  I ERR, THEREFORE I LEARN

  Years ago, an engineer turned management professor named Amy Edmondson became interested in preventing medical errors. She went into a hospital and surveyed its staff about the degree of psychological safety they experienced in their teams—could they take risks without the fear of being punished? Then she collected data on the number of medical errors each team made, tracking serious outcomes like potentially fatal doses of the wrong medication. She was surprised to find that the more psychological safety a team felt, the higher its error rates.

  It appeared that psychological safety could breed complacency. When trust runs deep in a team, people might not feel the need to question their colleagues or double-check their own work.

  But Edmondson soon recognized a major limitation of the data: the errors were all self-reported. To get an unbiased measure of mistakes, she sent a covert observer into the units. When she analyzed those data, the results flipped: psychologically safe teams reported more errors, but they actually made fewer errors. By freely admitting their mistakes, they were then able to learn what had caused them and eliminate them moving forward. In psychologically unsafe teams, people hid their mishaps to avoid penalties, which made it difficult for anyone to diagnose the root causes and prevent future problems. They kept repeating the same mistakes.

  Since then, research on psychological safety has flourished. When I was involved in a study at Google to identify the factors that distinguish teams with high performance and well-being, the most important differentiator wasn’t who was on the team or even how meaningful their work was. What mattered most was psychological safety.

  Over the past few years, psychological safety has become a buzzword in many workplaces. Although leaders might understand its significance, they often misunderstand exactly what it is and how to create it. Edmondson is quick to point out that psychological safety is not a matter of relaxing standards, making people comfortable, being nice and agreeable, or giving unconditional praise. It’s fostering a climate of respect, trust, and openness in which people can raise concerns and suggestions without fear of reprisal. It’s the foundation of a learning culture.

  In performance cultures, the emphasis on results often undermines psychological safety. When we see people get punished for failures and mistakes, we become worried about proving our competence and protecting our careers. We learn to engage in self-limiting behavior, biting our tongues rather than voicing questions and concerns. Sometimes that’s due to power distance: we’re afraid of challenging the big boss at the top. The pressure to conform to authority is real, and those who dare to deviate run the risk of backlash. In performance cultures, we also censor ourselves in the presence of experts who seem to know all the answers—especially if we lack confidence in our own expertise.

  A lack of psychological safety was a persistent problem at NASA. Before the Challenger launch, some engineers did raise red flags but were silenced by managers; others were ignored and ended up silencing themselves. After the Columbia launch, an engineer asked for clearer photographs to inspect the damage to the wing, but managers didn’t supply them. In a critical meeting to evaluate the condition of the shuttle after takeoff, the engineer didn’t speak up.

  About a month before that Columbia launch, Ellen Ochoa became the deputy director of flight crew operations. In 1993, Ellen had made history by becoming the first Latina in space. Now, the first flight she supported in a management role had ended in tragedy. After breaking the news to the space station crew and consoling the family members of the fallen astronauts, she was determined to figure out how she could personally help to prevent this kind of disaster from ever happening again.

  Ellen recognized that at NASA, the performance culture was eroding psychological safety. “People pride themselves on their engineering expertise and excellence,” she told me. “They fear their expertise will be questioned in a way that’s embarrassing to them. It’s that basic fear of looking like a fool, asking questions that people just dismiss, or being told you don’t know what you’re talking about.” To combat that problem and nudge the culture toward learning, she started carrying a 3 × 5 note card in her pocket with questions to ask about every launch and important operational decision. Her list included:

  What leads you to that assumption? Why do you think it is correct? What might happen if it’s wrong?

  What are the uncertainties in your analysis?

  I understand the advantages of your recommendation. What are the disadvantages?

  A decade later, though, the same lessons about rethinking would have to be relearned in the context of spacewalk suits. As flight controllers first became aware of the droplets of water in Luca Parmitano’s helmet, they made two faulty assumptions: the cause was the drink bag, and the effect was inconsequential. It wasn’t until the second spacewalk, when Luca was in actual danger, that they started to question whether those assumptions were wrong.

  When engineer Chris Hansen took over as the manager of the extravehicular activity office, he inaugurated a norm of posing questions like Ellen’s: “All anybody would’ve had to ask is, ‘How do you know the drink bag leaked?’ The answer would’ve been, ‘Because somebody told us.’ That response would’ve set off red flags. It would’ve taken ten minutes to check, but nobody asked. It was the same for Columbia. Boeing came in and said, ‘This foam, we think we know what it did.’ If somebody had asked how they knew, nobody could’ve answered that question.”

  How do you know? It’s a question we need to ask more often, both of ourselves and of others. The power lies in its frankness. It’s nonjudgmental—a straightforward expression of doubt and curiosity that doesn’t put people on the defensive. Ellen Ochoa wasn’t afraid to ask that question, but she was an astronaut with a doctorate in engineering, serving in a senior leadership role. For too many people in too many workplaces, the question feels like a bridge too far. Creating psychological safety is easier said than done, so I set out to learn about how leaders can establish it.

  SAFE AT HOME GATES

  When I first arrived at the Gates Foundation, people were whispering about the annual strategy reviews. It’s the time when program teams across the foundation meet with the cochairs—Bill and Melinda Gates—and the CEO to give progress reports on execution and collect feedback. Although the foundation employs some of the world’s leading experts in areas ranging from eradicating disease to promoting educational equity, these experts are often intimidated by Bill’s knowledge base, which seems impossibly broad and deep. What if he spots a fatal flaw in my work? Will it be the end of my career here?

  A few years ago, leaders at the Gates Foundation reached out to see if I could help them build psychological safety. They were worried that the pressure to present airtight analyses was discouraging people from taking risks. Too often, people stuck to tried-and-true strategies that would make incremental progress rather than daring to undertake bold experiments that might make a bigger dent in some of the world’s most vexing problems.

  The existing evidence on creating psychological safety gave us some starting points. I knew that changing the culture of an entire organization is daunting, while changing the culture of a team is more feasible. It starts with modeling the values we want to promote, identifying and praising others who exemplify them, and building a coalition of colleagues who are committed to making the change.

  The standard advice for managers on building psychological safety is to model openness and inclusiveness. Ask for feedback on how you can improve, and people will feel safe to take risks. To test whether that recommendation would work, I launched an experiment with a doctoral student, Constantinos Coutifaris. In multiple companies, we randomly assigned some managers to ask their teams for constructive criticism. Over the following week, their teams reported higher psychological safety, but as we anticipated, it didn’t last. Some managers who asked for feedback didn’t like what they heard and got defensive. Others found the feedback useless or felt helpless to act on it, which discouraged them from continuing to seek feedback and their teams from continuing to offer it.

  Another group of managers took a different approach, one that had less immediate impact in the first week but led to sustainable gains in psychological safety a full year later. Instead of asking those managers to seek feedback, we had randomly assigned them to share their past experiences with receiving feedback and their future development goals. We advised them to tell their teams about a time when they benefited from constructive criticism and to identify the areas that they were working to improve now.

  By admitting some of their imperfections out loud, managers demonstrated that they could take it—and made a public commitment to remain open to feedback. They normalized vulnerability, making their teams more comfortable opening up about their own struggles. Their employees gave more useful feedback because they knew where their managers were working to grow. That motivated managers to create practices to keep the door open: they started holding “ask me anything” coffee chats, opening weekly one-on-one meetings by asking for constructive criticism, and setting up monthly team sessions where everyone shared their development goals and progress.

  Creating psychological safety can’t be an isolated episode or a task to check off on a to-do list. When discussing their weaknesses, many of the managers in our experiment felt awkward and anxious at first. Many of their team members were surprised by that vulnerability and unsure of how to respond. Some were skeptical: they thought their managers might be fishing for compliments or cherry-picking comments that made them look good. It was only over time—as managers repeatedly demonstrated humility and curiosity—that the dynamic changed.

  At the Gates Foundation, I wanted to go a step further. Instead of just having managers open up with their own teams about how they had previously been criticized, I wondered what would happen if senior leaders shared their experiences across the entire organization. It dawned on me that I had a memorable way to make that happen.

  A few years earlier, our MBA students at Wharton decided to create a video for their annual comedy show. It was inspired by “Mean Tweets,” the late-night segment on Jimmy Kimmel Live! in which celebrities read cruel tweets about themselves out loud. Our version was Mean Reviews, where faculty members read harsh comments from student course evaluations. “This is possibly the worst class I’ve ever taken in my life,” one professor read, looking defeated before saying, “Fair enough.” Another read, “This professor is a b*tch. But she’s a nice b*tch,” adding with chagrin: “That’s sweet.” One of my own was “You remind me of a Muppet.” The kicker belonged to a junior faculty member: “Prof acts all down with pop culture, but secretly thinks Ariana Grande is a font in Microsoft Word.”

  I made it a habit to show that video in class every fall, and afterward the floodgates would open. Students seemed to be more comfortable sharing their criticisms and suggestions for improvement after seeing that although I take my work seriously, I don’t take myself too seriously.

  I sent the video to Melinda Gates, asking if she thought something similar might help with psychological safety in her organization. She not only said yes; she challenged the entire executive leadership team to participate and volunteered to be the first to take the hot seat. Her team compiled criticisms from staff surveys, printed them on note cards, and had her react in real time in front of a camera. She read one employee’s complaint that she was like Mary F***ing Poppins—the first time anyone could remember hearing Melinda curse—and explained how she was working on making her imperfections more visible.

  To test the impact of her presentation, we randomly assigned one group of employees to watch Melinda engage with the tough comments, a second to watch a video of her talking about the culture she wanted to create in more general terms, and a third to serve as a pure control group. The first group came away with a stronger learning orientation—they were inspired to recognize their shortcomings and work to overcome them. Some of the power distance evaporated—they were more likely to reach out to Melinda and other senior leaders with both criticism and compliments. One employee commented:

  In that video Melinda did something that I’ve not yet seen happen at the foundation: she broke through the veneer. It happened for me when she said, “I go into so many meetings where there are things I don’t know.” I had to write that down because I was shocked and grateful at her honesty. Later, when she laughed, like really belly-laughed, and then answered the hard comments, the veneer came off again and I saw that she was no less of Melinda Gates, but actually, a whole lot more of Melinda Gates.

 
