Creating Great Choices


by Jennifer Riel and Roger L. Martin


  Our seeming irrationality—making one choice in one context and the opposite in another context—can be understood in part as owing to the influence of outside forces on our models in those contexts. Given all the different influences in our lives, no wonder we are sometimes inconsistent!

  YET OUR MODELS ARE ALSO STICKY

  Once we have a strongly held and deeply embedded model of the world, though, it is tricky to shift it in any significant and lasting way. Why? It’s because we naturally seek information that fits with our existing model. It is much easier to look for answers that fit with our world view and bolster it than to actively seek to disconfirm what we know.

  Take, for instance, Steve. He has a long-held, strong belief that people who drive Audis are jerks (note that we make no claim as to the truth or falsity of this belief). Whenever Steve is in the car, he points out the misdeeds and misbehavior of Audi drivers: “See, that guy in the Audi just cut her off. Typical!” In the grip of confirmation bias, Steve is unconsciously on the lookout for data that supports his existing view of the world. When an Audi driver lets him into traffic or, say, a BMW driver behaves badly, Steve ignores these instances. They are noise and aberrations rather than the “true” pattern of evil Audi drivers. Every time he is on the road, Steve becomes ever more certain of his belief.

  Many of our models are like this. Once we form a belief about the world, we tend to hold fast to it. We get more confident of its veracity as we add evidence to our confirmation file. Personal, vivid, and recent events loom large, and they can make our models seem increasingly like reality. And once a model feels true, it is hard to shake, no matter how much countervailing evidence is presented.

  Contrary evidence can actually cause us to hold more tightly to our existing views. There’s a name for this: the backfire effect, demonstrated in a 2006 experiment by Brendan Nyhan and Jason Reifler.13 In this experiment, Nyhan and Reifler created a set of newspaper articles on polarizing issues. The articles contained statements from political figures that would reinforce a widespread misconception. For instance, one article contained a quotation from US president George W. Bush that suggested that Iraq had possessed weapons of mass destruction (WMDs) prior to the US invasion of the country in 2003. Immediately after subjects had finished reading one of these misleading articles, the researchers handed them a correction—an empirically true statement that corrected the error in the original article. In this case, the correction discussed the release of the Duelfer Report, which documented the lack of Iraqi WMD stockpiles or an active production program prior to the US-led invasion. Unsurprisingly, those who opposed the war or had strong liberal leanings tended to disagree with the original article and accept the correction. In contrast, conservatives and those who supported the war tended to agree with the first article and strongly disagree with the correction.

  The surprising kicker? The correction explaining that there were no WMDs made conservatives more confident in their original belief: when prompted afterward, they reported being even more certain that there actually were WMDs.

  Our beliefs, in other words, are sticky. Once we see the world in a certain way, it takes serious effort—and willing intent—to see it in another way. Most of us prefer to take the easier way out and just keep believing what we believe. The implications for personal decision making are profound: we tend to keep making the same choices, based on the same beliefs and assumptions, time and again. When we need to justify the belief that our current course of action is correct, we simply look for evidence that supports our view and ignore anything that might disconfirm it. We rely on information that is easily subsumed by our existing world view. As a result, interactions with those who disagree with our views are fraught with conflict and mistrust, reinforcing organizational silos and factions.

  OUR MODELS ARE SIMPLISTIC

  Our minds seek efficiency (or, less kindly, our minds are lazy), so we tend to short-circuit the reasoning process and rely on overly simple models of the world. We look for and use information that is readily available, easy to recall, and easy to understand as the foundation of our models, and we rarely dig deeper into the real reasons for our beliefs.

  Here, many cognitive biases are at play to keep our models of the world simpler than might be optimal. But for now, consider only one way of understanding the world: causation. We tend to seek the simplest and most direct cause to explain the outcomes we see. We use this same simple causal logic to explain why our actions will produce the outcomes we want to see.

  The executives at Tata Motors followed this course. In 2008, the company launched the Tata Nano. It was designed to be the most affordable car in the world. Chairman Ratan Tata explained the thinking that led to its creation: “I observed families riding on two-wheelers—the father driving the scooter, his young kid standing in front of him, his wife seated behind him holding a little baby. It led me to wonder whether one could conceive of a safe, affordable, all-weather form of transport for such a family.”14 The Nano was that vehicle. It was to be a car for India’s growing middle class, costing only 100,000 rupees (roughly US$2,300). The company was so confident that the new car would be successful that it created the capacity to manufacture 250,000 cars per year.

  In fact, only 60,000 cars were sold the first year. In part, the car suffered from quality and safety concerns. But at the core, the car failed because those who had created it had thought only in terms of simple cause-and-effect, failing to imagine more complex causality at work. Essentially, the Tata model was this: if you finally build a car that poor people can afford, they will buy it. The underlying assumption was that the only thing causing people to choose other forms of transportation was affordability. After all, didn’t everyone want a car? An affordable car would therefore capture massive latent demand and cause the Nano to be a huge success.

  The miss in this logical chain? A car is a functional purchase; it is a way to get you from point A to point B. But for an emerging middle class, a car is also an aspirational purchase; it is a way to signal that you have “made it.” And a Nano was anything but aspirational. It was a car for poor people. Buying a Nano might well cause you to feel you had settled for less than the aspiration. It was better, then, to keep saving for a “real” car. And that is what most Indians did.

  Try This

  Try capturing your own cause-and-effect models on paper. Our favorite place to start: your model of what causes one person to fall in love with another. Build the simplest version you can imagine that captures some cause-and-effect forces. Then push yourself, adding forces, outcomes, and even probabilities.

  For us to function well in the world, our models should be as simple as possible and no simpler. When they are oversimplified, models lose their explanatory and predictive power. They fail. Unfortunately, when our models fail, we tend to rationalize the failure as being caused by an exogenous force (an unexplainable event outside our model). Rather than blame the model, we tend to blame the world. But this is a misapprehension. As John Sterman explains, “There are no side effects—only effects. Those we thought of in advance, the ones we like, we call the main, or intended, effects, and take credit for them. The ones we didn’t anticipate, the ones that came around and bit us in the rear—those are the ‘side effects.’”15 We dismiss side effects as irrelevant and so our models get no better over time.

  The oversimplification of our models also can get us into trouble because we tend to be overconfident in our understanding of the world. People tend to overestimate their reasoning ability, just as they overestimate their leadership skills, sense of humor, and driving ability. Worse, they tend to be highly confident in those estimates. Every year at the beginning of her undergrad commerce class, Jennifer asks students to note whether they expect to be in the top half or the bottom half of the grade distribution of the class, and then to state how confident they are in that prediction (on a scale of 1 to 5). Each year, the overwhelming majority expect to be in the top half of the class (a statistical impossibility, of course), and they are highly confident in that belief. Simply slowing down to think through the context—they are smart students, of course, but they are in a room full of smart students—might give the students a richer understanding of the probabilities of their potential future grades, or at least a more reasoned level of confidence.

  We like simplicity. If you have ever raised an issue at a meeting, only to be told, “You’re overcomplicating this,” you have seen this bias at work. Unfortunately, our drive to simplify can lead us to ignore salient information and suppress dissenting views, producing poor choices as a result.

  OUR MODELS ARE UNHELPFULLY SINGULAR

  Our models tend to be narrow and singular. They aren’t singular in the sense that they apply only to one instance. In fact, evidence suggests that we overestimate the degree to which a model that applies in one situation can also be applied more broadly.

  For example, look at the Black-Scholes options pricing model. Created in 1973, it is a mathematical model for pricing a stock option.16 Myron Scholes and Fischer Black took pains to identify the specific domain in which their model was designed to apply: European call options (which can be exercised only at expiration), for which no dividends are paid during the life of the option, on which there are no commissions, in an efficient market in which the risk-free rate and volatility are known and constant. One needn’t be a derivatives expert to understand that this is a fairly narrow set of conditions. Yet Black-Scholes has become the standard model to price all kinds of options. As Warren Buffett wrote in a 2008 letter to Berkshire Hathaway shareholders, “The Black-Scholes formula has approached the status of holy writ in finance . . . If the formula is applied to extended time periods, however, it can produce absurd results. In fairness, Black and Scholes almost certainly understood this point well. But their devoted followers may be ignoring whatever caveats the two men attached when they first unveiled the formula.”17 Fischer Black, for one, agreed. He wrote in 1990, “I sometimes wonder why people still use the Black-Scholes formula, since it is based on such simple assumptions—unrealistically simple assumptions.”18
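  To see just how specific those conditions are, here is the standard form of the formula for a non-dividend-paying European call, written in conventional textbook notation (the notation is ours, not the book’s):

  \[
  C = S_0\,N(d_1) - K e^{-rT} N(d_2), \qquad
  d_1 = \frac{\ln(S_0/K) + \left(r + \sigma^2/2\right)T}{\sigma\sqrt{T}}, \qquad
  d_2 = d_1 - \sigma\sqrt{T},
  \]

  where \(C\) is the call price, \(S_0\) the current stock price, \(K\) the strike price, \(T\) the time to expiration, \(N(\cdot)\) the standard normal cumulative distribution function, and \(r\) and \(\sigma\) the risk-free rate and volatility, both assumed known and constant. Every symbol treated here as a fixed, known input marks one of the narrow conditions listed above, and each is a place where the model can be quietly overextended.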

  Try This

  Think of a widely held model that you use in your work, such as net present value, the incentive theory of motivation, Maslow’s hierarchy of needs, or shareholder value maximization. Answer these questions.

  What is the objective of this model? What was it created to do?

  What are the key assumptions that underlie this model?

  Under what conditions does the model work best?

  Under what conditions does it break down?

  Finally, consider the implications of these answers for your ongoing use of the model.
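  As a hypothetical worked illustration of this exercise, take the first model on that list, net present value. Its core formula (standard notation, not drawn from this book) is

  \[
  \mathrm{NPV} = \sum_{t=0}^{T} \frac{CF_t}{(1+r)^t},
  \]

  where \(CF_t\) is the expected cash flow in period \(t\) (with \(CF_0\) usually a negative initial outlay) and \(r\) is the discount rate. Writing it out makes the answers easier to see: the model’s objective is to compare cash flows arriving at different times; its key assumptions include a single, known discount rate and forecasts treated as reliable; it works best when cash flows are reasonably predictable; and it arguably breaks down when uncertainty, or the value of keeping options open, dominates the decision.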

  Models are constructed in a specific context. But often they are overextended to multiple contexts, becoming less fit for purpose the further they are used from their original application. So when we say they are singular, we mean that our models feel to each of us like the single right answer. Lave and March tell us this isn’t true—that the way we model the world suggests that there are many possible ways to understand it. They wrote that “since a model has only some of the characteristics of reality, it is natural to have several different models of the same thing, each of which considers a different aspect.”19 The world naturally produces different models of the same thing, in the heads of different people (see figure 2-3). Why, then, does it feel so strongly as if there should be a single right model?

  In part, it reflects the way we’re educated. In school, there is typically a single right answer to any question; it is the answer in the back of the book. Anything that differs from that answer is, by definition, wrong. Moreover, we learn quickly that there are material rewards for finding the single right answer and parroting it back to the teacher with conviction. School teaches us that there are right answers and there are wrong answers. Our job is to find the right answer. And when a wrong answer crops up? Our job is to make it go away.

  Figure 2-3. Naturally Different Models

  Many of us have experienced this dynamic at work. We go to a meeting, listen closely to the discussion, and leave totally clear on what we need to do. On the way out, we turn to a colleague to confirm our understanding. But what she shares is an understanding so at odds with our own that we can only wonder, “Gosh, what meeting was she in?” That colleague has built a very different model of the meeting, one that fits with her existing models of the world, her biases, her ideas, her experiences. But to us, steeped in a single-right-answer world, it doesn’t feel as if she holds an alternative model. It simply feels as if she is wrong.

  Two additional cognitive biases amplify the negative effects of this dynamic. First is the affinity bias. We tend to feel more comfortable around people whom we see as being like us. We like them. We spend more time with them than with others. We hire and promote them. But this bias also means that those who see the world differently than we do will produce in us less affinity. We don’t like them. We don’t spend time with them. We don’t hire them, and we don’t promote them. Rather, those who disagree with us are seen as, well, disagreeable. We tend to shut down their views rather than seek to understand them.

  Then there is projection bias. We tend to believe that other people think the way we do. So when others have access to the same information we do, we expect them to come to the same answer. When someone instead arrives at a different answer, we struggle to make sense of it.

  When faced with someone who holds a different model of the world, we tend to default, at least implicitly, to one of two possible explanations. One, we may assume that he isn’t as smart as we are. Crudely put, we think of him as stupid. Or we may assume that the person isn’t stupid at all. He understands the right answer perfectly well, and yet he is arguing for the wrong answer because of a personal, hidden agenda. So explanation number two is that he’s evil. Under the “stupid” assumption, we explain the right answer to the person, slowly as if he is dim. Under the “evil” explanation, we launch a counterattack, seeking allies to argue our side, cutting out the individual from the process, and keeping information from him. Neither assumption, and neither reaction, is likely to help us win friends and influence people.

  Each of these characterizations is actually about us, about what we believe and about the biases driving our own thinking. Categorizing peers as either stupid or evil is a failure of empathy—a reflection of our own inability to understand how another person thinks or feels. And casting those who disagree with us—even implicitly—as either stupid or evil makes group decision making extremely difficult.

  MAKING CHOICES IN ORGANIZATIONS

  Think about the way your organization makes its biggest choices, such as developing a strategy. Often in our work, we see organizations that follow a linear process something like the one shown in figure 2-4. Imagine the mental models and biases that can flourish under such an approach.

  First, what is the objective of the process? It’s to get to the right answer, to solve the problem that was identified at the outset. To get there, we follow a process that is linear and consensus driven, with little room to question the original problem, explore creative alternatives, or loop back to earlier stages without its feeling like dreaded rework.

  Then we charter a team. How is the team chosen? Increasingly, it’s selected to be cross-functional, bringing together expertise and skills from across the organization. But because expertise is domain specific, there is also an understanding that each individual is there to contribute her expertise rather than to challenge the expertise of others. The team is constructed with the understanding that the members are to work well together. So despite all the corporate theater about valuing constructive dissent, the clear message is that conflict is a bad thing. It is dangerous. It might lead to the breakdown of group cohesion and to the failure of the project. So we smile and nod and work politely together. Rather than bring together a diversity of views to tackle the problem, we break the problem into small parts that can be tackled by individual experts and then reassembled at the end.

  Figure 2-4. Decision Making in Organizations

  Next comes analysis. We decide what data to collect and analyze, and that analysis becomes the “facts” on which we build everything that follows. But the data set we create is necessarily a model of the world—one we have constructed by focusing on some data and not others, by anchoring to the past and assuming that the future will look much like it, by seeking data that confirms our view of the world, by simplifying the causality within the problem, and by narrowing what we treat as salient to our solution. The analysis stage sets us on a definitive path to a narrow answer based on a simplified view of the world, while simultaneously allowing us to smugly feel that we have been rigorous and evidence-based in our methodology.

  Only One Right Answer

  At around this point, a new challenge often emerges. As we start to identify possible solutions, individuals on the team diverge in their beliefs about which solution is the right answer. This is a problem, because there can be only one right answer in this process. Opposing views slow us down, create interpersonal conflict, and divert us from our lovely linear path. So there is meaningful pressure to converge on a single answer.

  At this point, argumentation and voting come in. In the face of multiple options, we tend to default to one of two courses of action. Either we argue to make the “wrong” answer go away, or we default to democratic principles and let the majority rule. In the first case, we explain the right answer to colleagues who disagree. And if they don’t get it right away, we explain it again, slower and louder. They might do the same to us. Eventually, one faction gives in: “Fine, we’ll do it your way!” Notice it isn’t, “Fine, you’re right.” Often, we give in, not because we are convinced our model is wrong but because our opponent has more power, or likes arguing more than we do, or might divorce us if we go on much longer.

 
