Chapter 6 shifts to examining the models. In this stage, you seek to explicitly live in the tension created by the use of opposing answers. Your aim is to find possible leverage points toward a creative resolution of that tension. To help you do that, we provide a series of questions designed to probe ever deeper into the opposing answers and the tension between them. Using the Toronto International Film Festival as the central example, we explore the value of assessing the true points of tension between the opposing answers, articulating key assumptions beneath them, and understanding the ways in which each produces its most important and valuable outcomes. In particular, we introduce a tool for thinking more deeply about cause-and-effect relationships to help produce insights about the opposing answers and open new possibilities for consideration. That is the next place to go after you examine the models: to generate possibilities that can resolve the tension between your opposing answers, creating great choices to solve your problem.
Generating new possibilities is the focus of chapter 7, which begins with the story of the founding of The Vanguard Group, Jack Bogle’s great investment management firm. In this third stage of the process, you’re seeking to create new choices. To offer a starting point, we provide three possible pathways toward differing integrative solutions. These approaches are based on consistent patterns we’ve seen in the ways that successful integrative thinkers go about generating their solutions.
These pathways are intended to serve as search mechanisms. They are three questions, essentially, to help frame your search for answers to the problem you’re seeking to solve. Here, the goal is to create a number of possible answers that you can prototype, test, and improve as you move ahead. In this chapter, we include stories to illustrate what each pathway looks like in practice. The goal is not to provide templates to copy but rather to give you a richer understanding of how best to use these three questions to explore the possibilities in your own context.
The final stage of the integrative thinking process is detailed in chapter 8, where we turn to assessing new possibilities through prototyping and testing. This stage has three components: clearly defining your new possibilities (via design-thinking tools such as storytelling, visualization, and modeling); understanding the conditions under which each of your new possibilities would be a winning solution to the problem you want to solve; and, finally, designing and conducting tests of the possibilities to help you choose among them. In this stage of the process, illustrated primarily with a story from Tennis Canada, you refine and improve the possibilities so that you can clarify the choice between them and begin implementing the great choice you’ve created.
The book closes with a final chapter on mindset. In it, we explore a way of being in the world that makes integrative thinking more doable, regardless of the specific situations in which you may find yourself. We use the story of Paul Polman, CEO of Unilever, to illustrate the implications of your stance for your ability to create great choices. We explore this foundational notion, discuss why an understanding of stance is important, and talk about the nature of an integrative thinking stance, all in order to provide the context for you to examine your own mindset. We end with mindset, just as we begin with it, to reinforce what we hope is a core theme of our work: that integrative thinking is itself a great choice, a way of being in the world that opens new possibilities where previously none existed.
In the end, this book was designed to be a practical user’s guide to integrative thinking. Sprinkled throughout its pages, you will find thought experiments and tasks intended to push you to try out the theory, tools, and process for yourself, along with templates to use when you’re working on a real-world problem with your team. Our goal is to share with you all we have learned about creating great choices and to provide you with the tools you need to do so.
Chapter 2
How We Choose
We’re blind to our blindness. We have very little idea of how little we know.
—DANIEL KAHNEMAN
Let’s face it—most of us make lots of bad decisions, whether it’s launching a new product that ultimately fails or choosing to rewatch that episode of Game of Thrones instead of going to the gym. Knowing that we tend to fall prey to bad decision making isn’t enough to keep us from making the same bad decisions again. If we are to have any hope of consistently making better decisions, we need to understand how and why our current decision-making processes fail us.
In part, our decisions often fail because of glitches in our thinking, including deep-seated biases that produce troubling lapses in logic. Each of us falls prey to these glitches to some degree, no matter how logical or open-minded we believe ourselves to be. But that’s at an individual level. Surely, once we get together in groups, we can overcome these failures in thinking and help each other come to better choices, right? Unfortunately, often organizations accidentally make the problem worse. By and large, organizational decision-making processes not only fail to account for those glitches but actually lean in to our individual biases and logical lapses, amplifying their worst effects.
The roots of our bad decisions—whether individual or collective—can be found in the way our minds process and understand the world. To be sure, the human mind is a remarkable thing. A mass of some 100 billion neurons, it controls our every thought, feeling, and action. It is what lets us speak, throw a ball, and remember our first kiss. It is how we make sense of the world and our function within it. According to Descartes, it is how we know we exist—the evidence of self, found in our ability to doubt, to question, and to think. It is the seat of memory, joy, movement, problem solving, and creation.
The mind is our means for understanding the world. But, it turns out, it is less a window into the world than a filter. And it serves as a filter for a good and helpful reason. The world is massively complex. It is too complex for us to take in and make sense of in real time. So our mind does us an important favor of which we are blissfully unaware. It filters out a great deal of that complexity and creates for us a simplified model of the world (see figure 2-1). Every time we encounter anything, whether a person, a place, or an idea, our mind builds a simplified model of it (and that is, after all, the very definition of a model: a representation of something, typically on a smaller scale).
Through automatic subroutines, our mind is constantly modeling. This process allows us to systematically pay attention to some things and not to others, to layer meaning onto our perceptions, and to make sense of our experiences in light of what we already know. It structures our world, creating what Kenneth Craik called “small-scale models” of reality that the mind uses to anticipate events.1 These models, then, are essential; they let us exist in a complex world without being overwhelmed by its complexity.
Figure 2-1. Building Mental Models
Our mental models—the set of models our minds create of the world—accumulate over time and ultimately become our reality. The modeling process happens automatically, continuously, and, for the most part, subconsciously. As system dynamics guru John Sterman explains, “Every decision you make . . . everything you know and everything you do is on the basis of models of one sort or another. You never have the choice of let’s model or not; it’s only a question of which model. And most of the time, the models that you’re operating from are ones that you’re not even aware that you’re using.”2
Worse, these models are wrong, or at least incomplete. That is the nature of models: they leave things out. Charles Lave and James March explained it well when they wrote that “a model is a simplified picture of a part of the real world. It has some of the characteristics of the real world, but not all of them. It is a set of interrelated guesses about the world.”3 Or as philosopher Alfred Korzybski more poetically put it, “The map is not the territory.”4 The map is our representation of reality—a simplified version of the world that bears just enough resemblance to reality to be useful. But there is always a gap between reality and our perception of it.
Though they are wrong, our mental models have a profound effect on our behavior and choices, as the growing field of behavioral economics has demonstrated. Our mental models, and the cognitive biases that influence them, can lead us to make suboptimal decisions because these models are largely implicit, easily manipulated, sticky, simplistic, and singular. Let’s explore these five ways our models can fail us and help produce poor decisions.
OUR MODELS ARE IMPLICIT
We’re rarely aware of the models we hold. It feels as if we see reality, so we rarely reflect on the way in which our own vantage point influences what we see. But it very much does. Consider a football game.
It’s 1951. Princeton and Dartmouth are playing a rough match. In the second quarter, Princeton’s star halfback, Dick Kazmaier, leaves the field with a broken nose. In the third quarter, a Dartmouth player suffers a broken leg. Tempers flare, whistles blow, and, at the end of the game, campus media on both sides highlight the hard feelings and broken bones.5
The Daily Princetonian calls the game a disgusting exhibition for which “the blame must be laid primarily on Dartmouth’s doorstep. Princeton, obviously the better team, had no reason to rough up Dartmouth.” The journalists at the Dartmouth disagree. Yes, there was dirty football, they write, but the blame lies entirely at the feet of Princeton head coach Charley Caldwell. It was he who had exhorted his team to exact revenge for Kazmaier’s injury (an injury, the paper assures us, “no more serious than is experienced almost any day in any football practice”).
Professors Albert Hastorf and Hadley Cantril read both takes and were intrigued. So they did what social psychologists do: they launched an experiment. At both Princeton and Dartmouth, the professors asked students to watch a film of the game, noting any infractions they saw and whether these fouls were mild or flagrant.
When Princeton students watched the game, they judged it as rough and dirty, counting twice as many infractions by the Dartmouth side as by the Princeton team. At Princeton, the Dartmouth infractions were also more likely to be deemed flagrant than the Princeton fouls. Dartmouth students, in contrast, saw the two teams as having about the same number of fouls, and those same Dartmouth students “saw their own team make only half the number of infractions the Princeton students saw them make.”6
From these different models of the game, the authors drew the following conclusion: “It seems clear,” they wrote, “that the ‘game’ actually was many different games and that each version of the events that transpired was just as ‘real’ to a particular person as other versions were to other people.”7 Unaware of their own implicit bias, the students of Dartmouth and Princeton paid attention to information about the game that fit with their existing understanding of the world. The game they saw was very real to them, inside their own minds.
None of the students in this seminal study was aware of how strongly their models of the world influenced their experience of the game. Our models exist under the surface, and we rarely reflect on how they influence our actions. Yet they do. And a deeper understanding of these models and the ways they influence our behavior can be helpful in understanding their broader implications for decision making.
Think about your team at work. If you were to ask team members for their mental models of how to be successful, it might be challenging to get to a clear answer. But you may be able to see underlying mental models in their behavior. One member of the team, Jeff, works tirelessly. He takes his lunch at his desk, he stays late almost every day, and clearly he takes great pride in his work. What kind of mental model drives this kind of behavior? A core belief that hard work is how you get ahead. Jeff’s mental model of success holds that producing quality work and demonstrating personal commitment are what it takes to win in the long run.
Then there is Ashley. She is rarely alone at her desk. She’s in meetings or chatting in the communal kitchen or heading out to play golf with a client. She leads the social committee, volunteers on a community board, and is more likely to spend the evening at a networking event than working late in the office. Ashley’s model of success, then, is closer to the old adage, “It isn’t what you know, it’s who you know.” Her core belief is that cultivating relationships is the way you succeed. Ashley’s mental model holds that time invested in making connections is time well spent.
The thing is, Jeff and Ashley are largely unaware that these are the models they hold, and equally unaware that neither model is exactly right. Were Jeff and Ashley to reflect consciously on their models of success and on the behavior driven by them, they each might take a more balanced approach. It might also be less painful when one of their models fails in practice—when someone else gets the promotion or when Ashley and Jeff have to work together on a project, finding themselves in a constant state of conflict about how to proceed (and each blaming the other for the impasse).
Try This
What is your own model of career success? Build a mind map (see figure 2-2) of your beliefs, and then ask yourself, Where do these beliefs come from? When and how did I start to believe what I believe? How does this model help me? How does it hinder me?
The more implicit our models of the world, the more likely we are to struggle to understand why we do what we do and why we get the results we get. Many bad decisions can be tracked to unarticulated models of the world and the unsurfaced assumptions behind them.
Figure 2-2. A Sample Mind Map
OUR MODELS ARE EASILY MANIPULATED
Many of our mental models come from our own life experiences. They develop over time, from lessons taught by our parents, learned in school, and passed on from friends. But sometimes our models of the world can be manipulated by prompts that we hardly notice and might never guess could have any effect.
Dan Ariely writes about one simple manipulation of mental models in Predictably Irrational.8 He tells the story of an experiment he conducted at MIT with Leonard Lee, Shane Frederick, and some free beer. The experimental conditions were straightforward. Students at a pub called the Muddy Charles were offered a choice of two beers: Budweiser and MIT Brew. Students were given samples of each and then offered a full glass of whichever beer they preferred.
In some cases, the students were given a blind tasting, without any information about the beers other than the names. Under this condition, most of these students preferred the MIT Brew. A second set of students was offered the two beers but told in advance the real difference between them: MIT Brew, it turns out, was simply Budweiser with a few drops of balsamic vinegar added per ounce of beer. Students alerted to the vinegar recoiled from MIT Brew at first taste and strongly preferred the regular Budweiser.
Then, Ariely, Lee, and Frederick added one more condition. For a final group of students, they explained the true nature of MIT Brew after it was tasted, but before the students had expressed a preference. Under this condition, the students liked MIT Brew just as much as those who didn’t learn about the vinegar at all, and much more than the students who were told about the vinegar before the taste test.
Ariely and his colleagues were interested in how expectations impact our perceptions. It turns out that knowing about the vinegar made the beer taste awful. For participants, the “reality” was materially altered by the presence or absence of a single piece of information. The beer itself was the same.
This is a simple illustration of the way other people can impact our mental models, our understanding of the world, in all kinds of ways, without our ever being aware that it is happening.
Chen-Bo Zhong and Geoff Leonardelli offered another such example. In their experiment they asked one set of participants to remember a time they felt socially excluded, and another set to remember a time they were included in a social group. Under both conditions, participants were then asked a series of questions. One such question: What is your estimate of the temperature of this room? Those who recalled being socially excluded estimated an average of three degrees colder than the ones who recalled being included.9 Feeling excluded actually made participants feel colder. A simple manipulation produced a measurable effect on participants’ models of the world.
The fact that models can be manipulated can be used for good (such as encouraging healthy eating through behavioral nudges) or for evil (such as encouraging hatred and fear of minority groups through subliminal propaganda). Either way, it’s clear that small changes in context can lead to very different choices. Judges, for instance, are more lenient on sentencing early in the day and immediately after snack breaks than they are just before lunch, suggesting justice isn’t blind so much as hungry.10 Or suppose you ask people who have watched a car accident on video to estimate the cars’ speed. You’ll get one answer if you ask, “About how fast were the cars going when they smashed each other?” and you’ll get a very different one if you ask the same question but substitute “contacted” for “smashed.” (That’s forty-one miles per hour for smashed versus thirty-one miles per hour for contacted.)11 Male skateboarders asked to perform tricks in front of a judge will perform far riskier and flashier moves if the judge is female and, importantly, attractive.12 You may want to pause to recover from that last bit of shocking news.
All this is to say that it is far easier to influence our models, at least in the short run, than we might expect. A quick comment from a colleague, the temperature of the room, the presence or absence of snacks—seemingly tiny changes in context can inform our mental models and trigger biases without our being conscious of it. Our models are influenced by more than we might imagine, and our choices are then framed by these models.