The Opposable Mind


by Roger L. Martin


  The problem with single-mindedly seeking to justify and confirm the veracity of the existing model is that the contented model defender won’t treat disconfirming data as valid, much less salient. When we go into defensive mode, we short-circuit any attempt to seek a more accurate model. Had he wanted to see it, Farhad could have found plenty of data to disconfirm his model of Daniel. If Daniel was a cheater, why was he so quick to offer not to charge for his labor?

  Daniel, for his part, had less data to disconfirm his model of Farhad, but he had some. After all, Farhad persisted in asking Daniel to come back and fix the lock. Viewed charitably, he was crying out, albeit ineffectively, for someone to clear up his confusion about the door and the lock. Had Daniel not gotten angry and walked out, the misunderstanding could have been resolved without violence.

  In many respects, Chamberlin offers the most effective counterargument to the contented model defense, with his admonitions against the “ruling theory” approach to scientific work. Contented model defenders want to have a ruling theory because when it’s confirmed, they can return to a resting state, all their certainties in place. The reigning Western model of education, with its emphasis on finding a single right answer, supports this tendency.

  Within the contented model defense stance, an alternative or clashing model is a problem to be eliminated. Alternative models pose a threat to the veracity of the existing model and must be disbelieved, distorted, and disproved. Farhad’s model didn’t allow for the possibility that Daniel genuinely wanted to help him fix his problem. Daniel’s offer not to charge for his labor challenged the accuracy of Farhad’s model of Daniel as a cheater, so Farhad felt compelled to reject the possibility that Daniel meant well. Seeing Daniel’s loving household would have posed another challenge to Farhad’s model, so he didn’t seek out that disconfirming data. The existing model had to be protected and justified.

  There is a more productive alternative to defending one existing model against all challenges: the stance of the optimistic model seeker. The optimistic model seeker doesn’t believe there is a right answer, just the best answer available now. American philosopher Charles Sanders Peirce (1839–1914) called this the fallibilist stance, because it presumes that all models are fallible.9 That doesn’t mean that current models should be rejected. Until the best present model is eclipsed by a better model, the best present model should govern. But fallibilism assumes that the best current model will be eclipsed in due course, as will its successor models.

  For optimistic model seekers, the resting state is not certainty. They are forever testing what they think they know against the best available data. Their goal is the refutation of their current belief, because refutation represents not failure but an advance. Just as Peirce suggested, each new model, while an improvement, is still imperfect and replaceable in due course with a still-better model. The stance is optimistic, in essence, because it rests on the conviction that future models will be superior to the current one.

  Optimistic model seeking reinforces and empowers the integrative thinking stance. Integrative thinkers look for and enjoy opposing models because they see the presence of an opposing model as evidence that a better model can and will emerge. Unlike contented model defenders, who are discomfited by multiple models, optimistic model seekers are discomfited by the presence of a single model. They see the value in the complexity of multiple models, and their preference is always to wait for a better model to emerge rather than to justify the existing model.

  Many of the integrative thinkers I interviewed were explicit and conscious in their optimistic model seeking. Bob Young of Red Hat loves being challenged and enlightened by a model that stands in opposition to the conventional wisdom. “When any asset is dismissed by others,” Young says, “it is a sign that it should probably be purchased.”10 Michael Lee-Chin of AIC Limited takes a similar view of out-of-favor securities.11 K. V. Kamath of ICICI Bank, while a supreme technologist, is suspicious of standard rules of thumb for technology. His goal at ICICI Bank was to match the IT capabilities of the bank’s international competitors but at a tenth of their cost, and that meant telling his IT department to toss out the standard models for how to build and manage a bank IT infrastructure.12

  Perhaps the most flamboyant of the optimistic model seekers was Rob McEwen, the former CEO of Goldcorp.13 McEwen is famous within the staid gold mining industry, and now far beyond it, for applying the open-source ethos of the Linux movement to mining. He put all of Goldcorp’s geological information about its Red Lake mine on the Internet and offered prizes for the best ideas on where Goldcorp should drill for gold. McEwen’s rivals thought he was nuts, but that didn’t stop him. The ideas from McEwen’s Internet challenge transformed Goldcorp’s Red Lake mine from an underperforming asset to the most productive gold mine in the industry.

  McEwen’s description of the stance underlying his decision is telling. “I was looking,” he says, “for the fundamental, underlying, unquestioned assumption that everybody in the industry grows up with. And if you find that assumption, and then question it, you can start seeing opportunities. If you can define the problem differently than everybody else in the industry, you can generate alternatives that others aren’t thinking about.”

  Implicit in McEwen’s stance is his fundamental optimism that there is a better model out there. To cultivate a similar stance, we must first become aware of our own tendency to resort to contented model defense. Next, we need to come to see optimistic model seeking as a legitimate approach.

  The first step, then, is to examine our own personal beliefs and determine how and why we maintain them. Typically, we find that we maintain our beliefs by engaging in contented model defense. For example, we often resort to authority to justify our beliefs. “I know it is true because that is the way God meant it to be” is an example of that defensive strategy. Invoking divine authority neatly blocks any search for inconsistent or disconfirming data—such a search would be tantamount to blasphemy.

  Logical circularity is another favorite strategy of contented model defenders. “I know that I treated him fairly in that transaction,” we tell ourselves, “because I am a fair person.” This formulation neatly places the burden of error on the person who feels unfairly treated.

  Most of us entertain such beliefs without examining them closely. In the integrative thinking course I teach at the Rotman School, students learn to examine the logic behind their own beliefs. They’re usually surprised to discover that they can and do hold models to be true on the basis of little or no testing or evidence.

  4. I Am Capable of Finding a Better Model

  The three stance statements concerning the self are harder to teach than the three elements of stance about the world. While one may believe that being an optimistic model seeker is superior and want desperately to be one, the desire does not automatically produce the desired outcome. In truth, experience is the best teacher of these components of stance, because only through experience do we gain confidence that the statements are indeed true. And only through experience do we gain the skill and confidence to find the better model, handle complexity, and be patient with ourselves. But we can help by teaching our students to reflect consciously and systematically on how they think. Through reflection they learn to explore the thinking that goes into their decisions. They learn to analyze the models underlying their decisions to determine what was salient and what causal relationships were inferred. And by analyzing their decisions, students learn whether they were able to focus on the whole as they designed a solution, or whether, like most of us, they got lost along the way, emphasizing one detail at the expense of the whole.

  To learn from our decisions and their consequences, we must be explicit in advance about the thought process preceding the decision. For better and for worse, the mind has an almost infinite capacity for rationalizing after the fact. If things don’t go the way we hoped they would, we are capable of totally forgetting the thoughts that led to our decision. Instead, we tell ourselves that the unanticipated outcome is, in fact, what we expected all along.

  Corporate managers do this every day. They make investments in the expectation of a productive outcome. When the outcome is disappointing, they convince themselves that it couldn’t have turned out otherwise—in fact, it was exactly what they were expecting all along. The only way to defeat that rationalizing mechanism is to record the thinking that leads to a decision and the outcome we expect from that decision. At that point, it’s a fairly simple matter to compare the actual outcome against our expectations. The disparity—and there’s almost sure to be a disparity—offers us a valuable glimpse into our own thought process and our characteristic errors—errors which are, I suggest, the product of our stance.
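
  To make this record-and-compare discipline concrete, here is a minimal sketch in Python. It is my own illustration rather than anything the book prescribes, and the names DecisionRecord, log_decision, and review are hypothetical.

```python
# A minimal sketch (not from the book) of writing down a decision's reasoning
# and expected outcome in advance, then comparing against the actual outcome.
from dataclasses import dataclass
from typing import List


@dataclass
class DecisionRecord:
    decision: str             # the action chosen
    reasoning: str            # the thinking behind it, recorded before acting
    expected_outcome: str     # the result we predict
    actual_outcome: str = ""  # filled in once results are known


journal: List[DecisionRecord] = []


def log_decision(decision: str, reasoning: str, expected_outcome: str) -> DecisionRecord:
    """Record the decision, the reasoning, and the expectation before acting."""
    record = DecisionRecord(decision, reasoning, expected_outcome)
    journal.append(record)
    return record


def review(record: DecisionRecord, actual_outcome: str) -> None:
    """Compare what actually happened with what we said we expected."""
    record.actual_outcome = actual_outcome
    print(f"Decision: {record.decision}")
    print(f"Expected: {record.expected_outcome}")
    print(f"Actual:   {record.actual_outcome}")
    if actual_outcome != record.expected_outcome:
        print("Disparity: revisit the recorded reasoning, not a rationalized memory of it.")
```

  A manager might log an investment decision this way before committing to it, then call review once the results are in.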

  We ask students (some of whom are corporate executives) to practice optimistic model seeking by exploring a dilemma that features opposing models. I recently taught an integrative thinking workshop for the global human resources team of a large company. Their dilemma was whether to centralize their corporation’s training and development globally or to distribute it, whether by region or by business unit.

  I began by asking the group to “reverse engineer” the logic of both competing models. By reverse engineering, I meant tracing the logical audit trail from salient data, to causal connections, to the architecture of the model, to its conclusions. I wanted them to trace it backward, though, from conclusion back to salient data.

  When a group reverse engineers the assumptions underlying a given model, it is important that the group’s members focus on what would have to be true for the model to be valid, rather than on what they think is true. By taking the time to consider what would have to be true, they gain practice in not rushing to confirm or disconfirm the veracity of one model or the other.

  In the human resources case, reverse engineering revealed that several things had to be true for the centralized model to be valid. The needs of the various regions had to be fairly similar. The central training unit had to have a good understanding of the needs of each region, despite the central unit’s physical and cultural distance from the regions. The central unit would also have to gain buy-in from the key actors in the region.

  For the distributed model, other conditions had to hold true. The individual training centers would have to be able to maintain reasonable cost-effectiveness without global scale. They’d have to maintain sufficient consistency in training and development across the whole firm. They’d have to be in close enough contact with corporate human resources to focus on the correct firmwide needs and priorities.

  The participants were then asked to marshal two sorts of data. One set would support each statement of what would have to be true. Another set would undermine each statement. Seeking out disconfirming or undermining data was crucial, because the point of the exercise was to avoid slipping into contented model defense.
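
  The structure of this exercise can be sketched in code. The following is my own illustration, not anything from the book or the workshop: the condition wording and the weakest_conditions helper are hypothetical, and the empty lists stand in for the supporting and undermining data the group would gather.

```python
# A hypothetical sketch of the exercise described above: for each model, list
# what would have to be true, then gather data that supports and data that
# undermines each condition. The names and structure are illustrative only.
from typing import Dict, List

centralized_model: Dict[str, Dict[str, List[str]]] = {
    "Regional training needs are fairly similar": {
        "supporting": [],   # data the group finds that backs this condition
        "undermining": [],  # data that calls the condition into question
    },
    "The central unit understands each region's needs": {
        "supporting": [],
        "undermining": [],
    },
    "The central unit can gain buy-in from key regional actors": {
        "supporting": [],
        "undermining": [],
    },
}

def weakest_conditions(model: Dict[str, Dict[str, List[str]]]) -> List[str]:
    """Return the conditions with more undermining than supporting evidence."""
    return [
        condition
        for condition, evidence in model.items()
        if len(evidence["undermining"]) > len(evidence["supporting"])
    ]
```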

  As I expected, the class found that neither model was perfect. Some data supported each pillar of what would have to be true; some data disconfirmed it. For example, it was clearly unlikely that the central unit could gain the respect of the key actors in the regions, who thought the central unit was inflexible and didn’t understand the local training needs. Similarly, it was pretty clear that the distributed model wouldn’t attain the consistency required for the company as a whole to maintain uniform standards. When the disconfirming data was laid out so clearly, defenders of each model had to concede the deficiencies of their favorite.

  At the end of this exercise, the group had a clearer understanding that neither of their initial models was perfect. At that point they were prepared to entertain the suggestion that a better model awaited their discovery. More important, the thorough reverse engineering of the logic underlying both models helped them see that both models were constructions of reality and not reality itself. When the group understood that point, it was easier for them to consider the possibility that they might discover or devise better models.

  The workshop participants came to understand that the global model was inflexible and couldn’t allow for regional customization. The regional model lacked consistency and cost-efficiency. But they began to believe it wasn’t necessary to trade off the advantages of each model against its disadvantages.

  Together, the class worked to design an integrative resolution to the training dilemma. After some struggle and trial and error, a better model emerged, one in which a centralized global function would create, to borrow a metaphor from IT, training content “platforms.” The regions could then efficiently build custom “applications” on top of those platforms. The global platforms would ensure consistency and capture economies of scale. The local applications would ensure that the training was right for the regions and give the regions a sense of ownership, which was crucial to the success of the effort.

  In hindsight, the solution appears simple, if not downright simplistic. Yet the participants in the class had not arrived at the solution before we undertook our exercise in reverse engineering. And it’s unlikely that they would have discovered the solution if the members of the human resources team had remained in contented model defense mode. They would have been looking only at data that supported whatever model they were rooting for and would have missed out on the advantages of the model they didn’t favor, as well as the shortcomings of the model they did favor. By shifting to an optimistic model-seeking approach, the class was able to analyze the opposing models dispassionately and gain the insight needed to devise the new model. And the discovery of the new and better model gave the participants confidence not only that there was a better model out there, but that they were fully capable of finding it.

  5. I Can Wade into and Get Through the Necessary Complexity

  To build the confidence and skill to wade into complexity and get through to the other side—as Victoria Hale did—we teach students in the integrative thinking course at Rotman a version of the reverse engineering exercise that the HR team engaged in. The students work backward from outcomes, to the actions that produced those outcomes, to the thinking behind the actions. We call this sequence

  Thinking → Actions → Outcomes,

  or TAO.

  To teach students how to follow this sequence, my colleagues and I ask them to play a standard business simulation game. The game consists of eight teams, or companies, each starting in the same position. They play four periods, which represent four years of operations for each company. Each team chooses the region (one of four) in which it will produce its theoretical product, how much it will produce, how much it will advertise, how much it will invest in research and development, and at what level it will set its prices. In short, each team must design a complicated sequence of actions in a complex environment and has limited time to do so.14 They submit new choices at each round, and a computer simulation runs the algorithms to determine the outcome.
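
  As a rough illustration only (the book does not specify the simulation’s interface), the choices each team submits in a period might be represented as follows; the field names, regions, and figures are hypothetical.

```python
# A hypothetical sketch of one team's choices for a single period in a game
# like the one described above. Nothing here reflects the actual simulation.
from dataclasses import dataclass

REGIONS = ("North America", "Europe", "Asia", "Latin America")  # assumed names for the four regions

@dataclass
class PeriodDecision:
    plant_region: str         # where the product is made (one of the four regions)
    units_produced: int       # how much to produce
    advertising_spend: float  # how much to advertise
    rnd_spend: float          # investment in research and development
    unit_price: float         # the price level

# Example submission for one period.
decision = PeriodDecision(
    plant_region=REGIONS[0],
    units_produced=10_000,
    advertising_spend=250_000.0,
    rnd_spend=100_000.0,
    unit_price=49.99,
)
```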

  Unlike many business school exercises, this is a complex and ambiguous game with no predetermined right answer. And as quickly becomes clear, the results of each team’s choices are dependent to some degree on the choices of the other teams.

  Things never turn out as the students expected. After they finish the game, we ask each team to pick one outcome they found particularly disappointing. What action or actions, we ask, led most directly to this unhappy outcome? What thinking led to the action or actions?

  Working backward through the action sequence enables students to see where they’ve missed salient data or overlooked a crucial causal sequence. One team was most disappointed by its third-period sales. In analyzing the action that produced the poor sales results, the team found that they had sought too large a profit margin. In consequence, they ended up pricing themselves out of the market.

  Was it simple greed that led to the pricing error, or did the team miss salient market information? In thinking backward along the causal chain, they realized that they had misinterpreted the pricing signals that emerged from the preceding period. They thought that every team would be content to walk prices upward in a straight line, but in fact, some teams opted to slash prices in the third period in order to gain volume and market share. By not considering the competitive environment in all its complexity, with all its many points of salience, this team invited disappointment.

  The participants in these classes learn three things quite quickly. First, they learn that they don’t think about their thinking much. Second, they learn that thinking about their thinking—reflecting, in other words—is hard. The teams struggle mightily to put together the

  Thinking → Actions → Outcomes

  chain. Even

  Actions → Outcomes

  is hard for them. Typically, outcomes just occur and those responsible for the actions that produced them quickly forget what they did, focusing instead on the next series of actions.

  If working backward from outcomes to actions is difficult, delving one level further back to the thinking that produced the action is harder still. Few students, whatever their age or level of attainment, have much experience reflecting on their own thinking.

  Third, they learn that systematically reflecting on how they think is a powerful way to change how they think. It’s very common to hear students exclaim, when they complete an exercise in reverse engineering, “What were we thinking?” Such moments of profound incredulity—and insight—come only from thinking about thinking.

  One powerful lesson in thinking about thinking emerged when we were running two games simultaneously (we had an especially large crop of students). A key choice that each team must make in the first period is where to build its plant. Each team can have only one plant, which can be expanded but not moved. Here again, the students learned the importance of taking into consideration the potential choices of the other teams. Absent such consideration, the obvious choice is to build the plant in North America, the biggest market with the most favorable average shipping costs to the other three regions.

 
