
Think Again: The Power of Knowing What You Don't Know


by Adam Grant


  It takes confident humility to admit that we’re a work in progress. It shows that we care more about improving ourselves than proving ourselves.* If that mindset spreads far enough within an organization, it can give people the freedom and courage to speak up.

  But mindsets aren’t enough to transform a culture. Although psychological safety erases the fear of challenging authority, it doesn’t necessarily motivate us to question authority in the first place. To build a learning culture, we also need to create a specific kind of accountability—one that leads people to think again about the best practices in their workplaces.

  THE WORST THING ABOUT BEST PRACTICES

  In performance cultures, people often become attached to best practices. The risk is that once we’ve declared a routine the best, it becomes frozen in time. We preach about its virtues and stop questioning its vices, no longer curious about where it’s imperfect and where it could improve. Organizational learning should be an ongoing activity, but best practices imply it has reached an endpoint. We might be better off looking for better practices.

  At NASA, although teams routinely debriefed after both training simulations and significant operational events, what sometimes stood in the way of exploring better practices was a performance culture that held people accountable for outcomes. Every time they delayed a scheduled launch, they faced widespread public criticism and threats to funding. Each time they celebrated a flight that made it into orbit, they were encouraging their engineers to focus on the fact that the launch resulted in a success rather than on the faulty processes that could jeopardize future launches. That left NASA rewarding luck and repeating problematic practices, failing to rethink what qualified as an acceptable risk. It wasn’t for a lack of ability. After all, these were rocket scientists. As Ellen Ochoa observes, “When you are dealing with people’s lives hanging in the balance, you rely on following the procedures you already have. This can be the best approach in a time-critical situation, but it’s problematic if it prevents a thorough assessment in the aftermath.”

  Focusing on results might be good for short-term performance, but it can be an obstacle to long-term learning. Sure enough, social scientists find that when people are held accountable only for whether the outcome was a success or failure, they are more likely to continue with ill-fated courses of action. Exclusively praising and rewarding results is dangerous because it breeds overconfidence in poor strategies, incentivizing people to keep doing things the way they’ve always done them. It isn’t until a high-stakes decision goes horribly wrong that people pause to reexamine their practices.

  We shouldn’t have to wait until a space shuttle explodes or an astronaut nearly drowns to determine whether a decision was successful. Along with outcome accountability, we can create process accountability by evaluating how carefully different options are considered as people make decisions. A bad decision process is based on shallow thinking. A good process is grounded in deep thinking and rethinking, enabling people to form and express independent opinions. Research shows that when we have to explain the procedures behind our decisions in real time, we think more critically and process the possibilities more thoroughly.

  Process accountability might sound like the opposite of psychological safety, but they’re actually independent. Amy Edmondson finds that when psychological safety exists without accountability, people tend to stay within their comfort zone, and when there’s accountability but not safety, people tend to stay silent in an anxiety zone. When we combine the two, we create a learning zone. People feel free to experiment—and to poke holes in one another’s experiments in service of making them better. They become a challenge network.

  One of the most effective steps toward process accountability that I’ve seen is at Amazon, where important decisions aren’t made based on simple PowerPoint presentations. They’re informed by a six-page memo that lays out a problem, the different approaches that have been considered in the past, and how the proposed solutions serve the customer. At the start of the meeting, to avoid groupthink, everyone reads the memo silently. This isn’t practical in every situation, but it’s paramount when choices are both consequential and irreversible. Long before the results of the decision are known, the quality of the process can be evaluated based on the rigor and creativity of the author’s thinking in the memo and on the thoroughness of the discussion that ensues in the meeting.

  In learning cultures, people don’t stop keeping score. They expand the scorecard to consider processes as well as outcomes:

  Even if the outcome of a decision is positive, it doesn’t necessarily qualify as a success. If the process was shallow, you were lucky. If the decision process was deep, you can count it as an improvement: you’ve discovered a better practice. If the outcome is negative, it’s a failure only if the decision process was shallow. If the result was negative but you evaluated the decision thoroughly, you’ve run a smart experiment.

  The ideal time to run those experiments is when decisions are relatively inconsequential or reversible. In too many organizations, leaders look for guarantees that the results will be favorable before testing or investing in something new. It’s the equivalent of telling Gutenberg you’d only bankroll his printing press once he had a long line of satisfied customers—or announcing to a group of HIV researchers that you’d only fund their clinical trials after their treatments worked.

  Requiring proof is an enemy of progress. This is why companies like Amazon use a principle of disagree and commit. As Jeff Bezos explained it in an annual shareholder letter, instead of demanding convincing results, experiments start with asking people to make bets. “Look, I know we disagree on this but will you gamble with me on it?” The goal in a learning culture is to welcome these kinds of experiments, to make rethinking so familiar that it becomes routine.

  Process accountability isn’t just a matter of rewards and punishments. It’s also about who has decision authority. In a study of California banks, executives often kept approving additional loans to customers who’d already defaulted on a previous one. Since the bankers had signed off on the first loan, they were motivated to justify their initial decision. Interestingly, banks were more likely to identify and write off problem loans when they had high rates of executive turnover. If you’re not the person who greenlit the initial loan, you have every incentive to rethink the previous assessment of that customer. If they’ve defaulted on the past nineteen loans, it’s probably time to adjust. Rethinking is more likely when we separate the initial decision makers from the later decision evaluators.

  [Illustration: Sketchnote summary of A Spectrum of Reasons for Failure, by Hayley Lewis. Drawn May 2020, London, United Kingdom. Copyright © 2020 by HALO Psychology Limited.]

  For years, NASA had failed to create that separation. Ellen Ochoa recalls that traditionally “the same managers who were responsible for cost and schedule were the ones who also had the authority to waive technical requirements. It’s easy to talk yourself into something on a launch day.”

  The Columbia disaster reinforced the need for NASA to develop a stronger learning culture. On the next space shuttle flight, a problem surfaced with the sensors in an external engine tank. It reoccurred several more times over the next year and a half, but it didn’t create any observable problems. In 2006, on the day of a countdown in Houston, the whole mission management team held a vote. There was overwhelming consensus that the launch should go forward. Only one outlier had voted no: Ellen Ochoa.

  In the old performance culture, Ellen might’ve been afraid to vote against the launch. In the emerging learning culture, “it’s not just that we’re encouraged to speak up. It’s our responsibility to speak up,” she explains. “Inclusion at NASA is not only a way to increase innovation and engage employees; it directly affects safety since people need to feel valued and respected in order to be comfortable speaking up.” In the past, the onus would’ve been on her to prove it was not safe to launch. Now the onus was on the team to prove it was safe to launch. That meant approaching their expertise with more humility, their decision with more doubt, and their analysis with more curiosity about the causes and potential consequences of the problem.

  After the vote, Ellen received a call from the NASA administrator in Florida, who expressed surprising interest in rethinking the majority opinion in the room. “I’d like to understand your thinking,” he told her. They went on to delay the launch. “Some people weren’t happy we didn’t launch that day,” Ellen reflects. “But people did not come up to me and berate me in any way or make me feel bad. They didn’t take it out on me personally.” The following day all the sensors worked properly, but NASA ended up delaying three more launches over the next few months due to intermittent sensor malfunctions. At that point, the manager of the shuttle program called for the team to stand down until they identified the root cause. Eventually they figured out that the sensors were working fine; it was the cryogenic environment that was causing a faulty connection between the sensors and computers.

  Ellen became the deputy director and then the director of the Johnson Space Center, and NASA went on to execute nineteen consecutive successful space shuttle missions before retiring the program. In 2018, when Ellen retired from NASA, a senior leader approached her to tell her how her vote to delay the launch in 2006 had affected him. “I never said anything to you twelve years ago,” he said, but “it made me rethink how I approached launch days and whether I’m doing the right thing.”

  We can’t run experiments in the past; we can only imagine the counterfactual in the present. We can wonder whether the lives of fourteen astronauts would have been saved if NASA had gone back to rethink the risks of O-ring failures and foam loss before it was too late. We can wonder why those events didn’t make them as careful in reevaluating problems with spacesuits as they had become with space shuttles. In cultures of learning, we’re not weighed down with as many of these questions—which means we can live with fewer regrets.

  PART IV

  Conclusion

  CHAPTER 11

  Escaping Tunnel Vision

  Reconsidering Our Best-Laid Career and Life Plans

  A malaise set in within a couple hours of my arriving. I thought getting a job might help. It turns out I have a lot of relatives in Hell, and, using connections, I became the assistant to a demon who pulls people’s teeth out. It wasn’t actually a job, more of an internship. But I was eager. And at first it was kind of interesting. After a while, though, you start asking yourself: Is this what I came to Hell for, to hand different kinds of pliers to a demon?

  —Jack Handey

  What do you want to be when you grow up? As a kid, that was my least favorite question. I dreaded conversations with adults because they always asked it—and no matter how I replied, they never liked my answer. When I said I wanted to be a superhero, they laughed. My next goal was to make the NBA, but despite countless hours of shooting hoops on my driveway, I was cut at middle school basketball tryouts three years in a row. I was clearly aiming too high.

  In high school, I became obsessed with springboard diving and decided I wanted to become a diving coach. Adults scoffed at that plan: they told me I was aiming too low. In my first semester of college, I decided to major in psychology, but that didn’t open any doors—it just gave me a few to close. I knew I didn’t want to be a therapist (not patient enough) or a psychiatrist (too squeamish for med school). I was still aimless, and I envied people who had a clear career plan.

  From the time he was in kindergarten, my cousin Ryan knew exactly what he wanted to be when he grew up. Becoming a doctor wasn’t just the American dream—it was the family dream. Our great-grandparents emigrated from Russia and barely scraped by. Our grandmother was a secretary, and our grandfather worked in a factory, but it wasn’t enough to support five children, so he worked a second job delivering milk. Before his kids were teenagers, he had taught them to drive the milk truck so they could finish their 4:00 a.m. delivery cycle before the school day and workday started. When none of their children went on to med school (or milk delivery), my grandparents hoped our generation would bring the prestige of a Dr. Grant to the family.

  The first seven grandchildren didn’t become doctors. I was the eighth, and I worked multiple jobs to pay for college and to keep my options open. They were proud when I ended up getting my doctorate in psychology, but they still hoped for a real doctor. For the ninth grandchild, Ryan, who arrived four years after me, an M.D. was practically preordained.

  Ryan checked all the right boxes: along with being precocious, he had a strong work ethic. He set his sights on becoming a neurosurgeon. He was passionate about the potential to help people and ready to persist in the face of whatever obstacles crossed his path.

  When Ryan was looking at colleges, he came to visit me. As we started talking about majors, he expressed a flicker of doubt about the premed track and asked if he should study economics instead. There’s a term in psychology that captures Ryan’s personality: blirtatiousness. Yep, that’s an actual research concept, derived from the combination of blurting and flirting. When “blirters” meet people, their responses tend to be fast and effusive. They typically score high in extraversion and impulsiveness—and low in shyness and neuroticism. Ryan could push himself to study for long hours, but it drained him. Drawn to something more active and social, he toyed with the idea of squeezing in an economics major along with premed, but abandoned that idea when he got to college. Gotta stay on track.

  Ryan sailed through the premed curriculum and became a teaching assistant for undergrads while he was still an undergrad himself. When he showed up at exam review sessions and saw how stressed the students were, he refused to start covering the material until they stood up and danced. When he was accepted to an Ivy League medical school, he asked me if he should do a joint M.D.–M.B.A. program. He hadn’t lost his interest in business, but he was afraid to divide his attention. Gotta stay on track.

  In his last year of med school, Ryan dutifully applied to neurosurgery residencies. It takes a focused brain to slice into the brain of another human. He wasn’t sure if he was cut out for it—or if the career would leave any space for him to have a life. He wondered if he should start a health-care company instead, but when he was admitted to Yale, he opted for the residency. Gotta stay on track.

  Partway through his residency, the grueling hours and the intense focus began to take their toll, and Ryan burned out. He felt that if he died that very day, no one in the system would really care or even notice. He regularly suffered from the heartache of losing patients and the headache of dealing with abusive attending surgeons, and there was no end in sight. Although it was his childhood dream and our grandparents’ dream, his work left little time for anything else. The sheer exhaustion left him questioning whether he should quit.

  Ryan decided that he couldn’t give up. He had gone too far to change course, so he finished the seven-year neurosurgery residency. When he submitted the paperwork for his credentials, the hospital denied him because he had placed the dates on his résumé on the right instead of the left. He was so fed up with the system that, out of principle, he refused to move them. After winning that battle with bureaucracy, he added another feather to his cap, doing an eighth year of a fellowship in complex, minimally invasive spinal surgery.

  Today Ryan is a neurosurgeon at a major medical center. In his midthirties, he’s still in debt from student loans more than a decade after graduating from med school. Even though he enjoys helping people and caring for patients, the long hours and red tape undercut his enthusiasm. He tells me that if he could do it over, he would have gone a different route. I’ve often wondered what it would have taken to convince him to rethink his chosen line of work—and what he truly wanted out of a career.

  We all have notions of who we want to be and how we hope to lead our lives. They’re not limited to careers; from an early age, we develop ideas about where we’ll live, which school we’ll attend, what kind of person we’ll marry, and how many kids we’ll have. These images can inspire us to set bolder goals and guide us toward a path to achieve them. The danger of these plans is that they can give us tunnel vision, blinding us to alternative possibilities. We don’t know how time and circumstances will change what we want and even who we want to be, and locking our life GPS onto a single target can give us the right directions to the wrong destination.

  GOING INTO FORECLOSURE

  When we dedicate ourselves to a plan and it isn’t going as we hoped, our first instinct isn’t usually to rethink it. Instead, we tend to double down and sink more resources into the plan. This pattern is called escalation of commitment. Evidence shows that entrepreneurs persist with failing strategies when they should pivot, NBA general managers and coaches keep investing in new contracts and more playing time for draft busts, and politicians continue sending soldiers to wars that didn’t need to be fought in the first place. Sunk costs are a factor, but the most important causes appear to be psychological rather than economic. Escalation of commitment happens because we’re rationalizing creatures, constantly searching for self-justifications for our prior beliefs as a way to soothe our egos, shield our images, and validate our past decisions.

  Escalation of commitment is a major factor in preventable failures. Ironically, it can be fueled by one of the most celebrated engines of success: grit. Grit is the combination of passion and perseverance, and research shows that it can play an important role in motivating us to accomplish long-term goals. When it comes to rethinking, though, grit may have a dark side. Experiments show that gritty people are more likely to overplay their hands in roulette and more willing to stay the course in tasks at which they’re failing and success is impossible. Researchers have even suggested that gritty mountaineers are more likely to die on expeditions, because they’re determined to do whatever it takes to reach the summit. There’s a fine line between heroic persistence and foolish stubbornness. Sometimes the best kind of grit is gritting our teeth and turning around.
