We often use machine learning systems because they can increase the accuracy of our predictions or classifications. But that doesn’t mean the models they construct for themselves are based on causal relations—a lack that Judea Pearl hopes will be rectified. There is, as far as we know, no causal relationship between having an open personality and liking Hello Kitty. There is no causal relationship between double-clicking on buttons, preferring cats, and overtipping. Those turn out to be signs of a tendency to overtip, but not causes. Tracks in the snow didn’t cause someone to walk up the hill, but those tracks can be reliable signs of the direction she’s going. A friend’s posture, microsecond hesitancy in talking, and choice of dessert are not causes of sadness, and may be only very indirectly caused by sadness, but may yet be reliable signs of sadness.
Likewise, the signs a machine learning system considers may spring from whatever causes overtipping as a trait. Maybe they are all expressions of a need to be liked, of a fear of embarrassment, or of a sense of compassion. If the correlations are statistically valid, there is presumably some reason why they are. But the causes may be a set of dependencies so manifold and subtle that it’s possible we may never discover them. Nor do we need to, so long as the machines are giving us accurate enough results and are not reinforcing societal biases.
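The point can be made concrete with a small simulation (a hypothetical sketch, not an example from the text): a hidden trait drives both the signs and the behavior, so a model that reads only the signs can predict well without ever touching the cause.

```python
import random

random.seed(0)

# Hypothetical setup: a hidden trait (say, a need to be liked) causes both
# the observable signs and the behavior we want to predict. The predictor
# never sees the trait itself.
def make_diner():
    trait = random.random()                              # hidden cause, unobserved
    double_clicks = trait > 0.5 - random.gauss(0, 0.1)   # noisy sign of the trait
    prefers_cats = trait > 0.5 - random.gauss(0, 0.1)    # another noisy sign
    overtips = trait > 0.5                               # the behavior of interest
    return (double_clicks, prefers_cats), overtips

data = [make_diner() for _ in range(10_000)]

# "Model": predict overtipping whenever both non-causal signs are present.
def predict(signs):
    return all(signs)

accuracy = sum(predict(s) == y for s, y in data) / len(data)
print(f"accuracy from purely correlational signs: {accuracy:.2f}")
```

Neither sign causes overtipping; both merely share a hidden cause with it. Yet the prediction is far better than chance, which is all the machine learning system is asked to deliver.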
Now, a machine learning model based on signs is clearly not the same as the ancient system of signs that was designed by God or that was an expression of the fundamental symmetry of the universe. Our new system of signs can be more chaotic; if all were orderly and beautiful, we wouldn’t need powerful computers to see the signs and to make inferences from them.
With machine learning, we have gained something much closer to the probabilistic truth, at the price of a universe simple enough for us to understand and beautiful enough to strike us dumb. But our new signs are far more reliable than a gutted bird and, taken together, are themselves a sign of an irreducible intricacy of connection that we may yet learn to love.
Chapter Five
Strategy and Possibility
As we become more comfortable accepting that much of what we thought were truths turns out to be shortcuts that let us deal with a world thoroughly beyond our understanding and control, the concepts we use to organize our behavior and ideas are being reframed. In this chapter we’ll explore one of the most basic terms in the vocabulary we use to talk about the future: possibility. It’s too fuzzy a term to approach head-on, so we’ll ask what the evolution of strategy making—business strategies in particular—reveals about our ideas about the nature of the possible.
Narrowing Possibilities
Even though Apple likes to keep its cards close to its chest, in 2014, as part of its patent lawsuit against Samsung, the company published a memo from Steve Jobs written a year before his death. Jobs had sent it to Apple’s top one hundred executives, apparently outlining a presentation he was planning on giving.1 It begins with a slide titled “2011 Strategy,” followed by slides such as these:
2011: Holy War with Google
and
2011: Year of the Cloud
Before ending this overview section by announcing that the company will be building a new campus, he writes,
tie all of our products together, so we further lock customers into our ecosystem.
Jobs then goes through Apple’s product lines, each with an opening statement of his strategy for it. For example, he writes,
2011 Strategy: ship iPad 2 with amazing hardware and software before our competitors even catch up with our current model.
And about Apple’s mobile operating system, he writes,
Strategy: catch up to Android where we are behind (notifications, tethering, speech, …) and leapfrog them (Siri, …).
These are strategies in different senses. Announcing the “Year of the Cloud” marks out a broad, cross-product area Jobs wanted Apple to focus on. “Holy War with Google” designates the enemy against which the company will compete to the death. Tying “all of our products together” is a tactic in furtherance of the strategy of trying to “further lock customers into our ecosystem.” (It is also a sterling example of using intraoperability to reduce users’ possibilities.)
Jobs may not have been entirely consistent in his use of the term strategy—as we’ll see, the term’s application to business is relatively new and not entirely settled—but his talk illustrates a shared, functional definition: a strategy is what leaders tell their lieutenants to focus on. When Scott McNealy, the CEO of Sun Microsystems, in 1990 said, “We’re getting all our wood behind one arrow,” the phrase caught on not just for Freudian reasons—“arrow,” “wood”—but because we think about strategy as a way of focusing all of our organization’s energy on a single goal.2 A strategy is a way to marshal limited resources by making a decision that says yes to one path and no to all the others.
So, in an age as chaotic, uncontrollable, and unpredictable as the one we have entered, you’d think strategic focus—making the hard decisions about how to best use limited resources—would be more important than ever. Without a doubt, that is often the case. But as we have already seen, many organizations are instead beginning to think about strategy differently (adopting lean, agile, disruptive strategies, etc.), in part because of the volatility of our environment; in part because the digital elements of a strategy can be altered so much more quickly and inexpensively than, say, retooling a manufacturing process; and in part because our recent experiences with unanticipation have made obvious the hidden costs of the old anticipate-and-prepare strategy of strategies.
We will undoubtedly continue to use strategies that focus our resources by reducing possibilities, even while we’re adopting elements of strategic interoperability that make more things possible. But just as how we predict reveals how we think the future happens, how we strategize reveals what sort of thing we think possibilities are.
The Invention of Strategy
In 1964, the publisher of management pioneer Peter Drucker’s new book insisted that he change the title from Business Strategies to something that would make more sense to the business audience; Drucker published it as Managing for Results.3 Strategy was still something that armies, not businesses, needed. In fact, even armies didn’t think about strategies the way we do today until a few hundred years ago. Strategies only make sense when we think the future comes about in particular ways.
That’s why in the eighth century BCE when Homer recounted the story of the Trojan War that had occurred four hundred to five hundred years earlier, King Menelaus did not present a slide deck that laid out the strategy for assaulting the walled city of Troy. Instead, his “strategy” consisted of showing up and doing some fighting.
It took another three hundred years for the distinction between strategy and tactics to emerge for the first time, at the hands of no less than Socrates.4 But Socrates meant something very different by strategy than we do. He tells us a strategist is an “inventor of tricks,” and compares strategic generals to musicians coming up with new songs—like Odysseus, who ended the Trojan War by stuffing warriors into a gift horse.5 So, at this founding moment of the distinction between strategy and tactics, strategy isn’t characterized by steadiness but by nimbleness, a term we today associate with agile development and companies that pivot.
That view was true to the Greek understanding of how things happen: moody Fates, self-absorbed gods, and unpredictable mortals set us on the dark, twisty path of our lives. For our modern idea of strategy to emerge, we needed a future that’s orderly and predictable enough for long-term planning to make sense.
We didn’t get that sort of future until around the time of Newton. Lawrence Freedman, in his sprawling work Strategy: A History, says this new definition of strategy reflected “an Enlightenment optimism that war—like all other spheres of human affairs—could benefit from the application of reason.” Warfare became subject, or so it seemed, to laws not entirely unlike Newton’s.6
The concept of martial strategy flowered in the first half of the nineteenth century, when books began to appear that explained the strategy of war the way a mathematician proves theorems in geometry or a clockmaker explains clocks.7 The most renowned theoretician of war, Carl von Clausewitz, in On War, published posthumously in 1832, certainly acknowledged the randomness and unpredictability of battle, but he also tried to find the rules that could generally be relied on.8 Clausewitz sometimes expressed these in a Newtonian vocabulary of force, movement, friction, mass, and inertia, with occasional geometrical-style deductions that prove, for example, that grand battles are preferable to many smaller skirmishes.9 Strategy thus became something subject to laws, although those laws interacted with battles so frenzied and desperate that only God or Laplace’s demon could fully comprehend them.
Today we cannot imagine a publisher asking an author to remove the word strategy from the title of a business book, or a CEO responding to the question, “What’s your company’s strategy?” with a firm “I don’t know.” But the concept of strategy has gone through considerable and rapid churn over the past thirty years. In part this was intentional: consultants discovered they could charge a premium for explaining why the existing strategies were pathetically outmoded compared to the new, shiny ones they just happened to have a slide deck about—reengineering’s call for senior management to “blow up the organization” being perhaps the poster child for this.10 Other approaches to strategy making were far more consequential.
These different approaches have had at least two things in common. First, to varying degrees and in different ways, they work by enabling an organization to narrow the possibilities down to the ones that the organization is going to go after. Second, if how we predict tells us how we think about the future, how we strategize tells us how we think about possibility.
The Possibility That Strategy Reveals
Into the category of “possibility,” we have traditionally crammed everything from wishes and fantasies to the choices of entrées on the menu we’re holding. That’s because in the West we have been so focused on the actual and the real that we treat possibilities as a broad category of things defined by their lack of reality. But possibility looks different in a universe characterized by interoperability.
Let’s look at five not quite miscellaneous examples of conceptions of strategy from recent decades, then come back to see how each reveals the nature of possibility.
1. Military. In 1941, before the Pearl Harbor attack precipitated the United States’ entry into World War II, the US and British militaries met in Washington, DC, and decided on a joint strategic goal they termed “Europe First”: they would focus on defeating Germany before turning to the Japanese.11 This strategy would entail making strategic decisions about which theaters to commit resources to and which to forgo, whether to launch a massive invasion of Europe, and the balance of land and sea forces. Such decisions were guided by intensely pragmatic considerations about logistics, terrain, weather, the state of the troops, and so on.
2. Cold War. Named for “research and development,” RAND was founded in 1946 by General Henry “Hap” Arnold, who believed America needed to collect its greatest minds to keep American science and technology ahead of everyone else’s, especially the Russians. The US Air Force became its main client, and the group grew so rapidly that it was soon advertising jobs for hundreds of researchers, including in ads that bragged that RAND’s president was a direct intellectual descendant of Isaac Newton.12 Herman Kahn became the most famous of the RAND crew because he wrote best-selling books, because he was an eccentric character, and because he was one of the inspirations behind the crazed scientist in Stanley Kubrick’s 1964 Dr. Strangelove, or: How I Learned to Stop Worrying and Love the Bomb. He developed nuclear war strategies by cold-bloodedly thinking through the various moves Russia and the United States might make in an extended exchange of nuclear missiles. This led him to calmly compare scenarios in which “only” five million people die with ones in which twenty million civilians are incinerated—rational discussions of the unthinkable.
3. Scenario Planning. Peter Schwartz credits Kahn with helping to inspire what seems like a very different approach to strategy making—scenario planning—invented in the 1960s at Royal Dutch Shell by Pierre Wack.13 (Schwartz carried on Wack’s work there.)14 Schwartz describes the process this way: “In a scenario process, managers invent and then consider, in depth, several varied stories of equally plausible futures. The stories are carefully researched, full of relevant detail, oriented toward real-life decisions, and designed (one hopes) to bring forward surprises and unexpected leaps of understanding. Together, the scenarios comprise a tool for ordering one’s perceptions.”15
As Wack himself wrote, the challenge was not to spin up imaginary scenes but rather to break managers out of the existing models that assumed the business environment would continue pretty much as it was at the moment. Wack feared that the grip of those models was too strong to let managers take seriously the process he was proposing. “[H]ow could our view be heard?” he wondered.16
The answer was by presenting rigorous, fact-based analyses that show that what happens next might be very, very different from the way things are now. Wack’s group’s voice was heard much more clearly after its analyses convinced Shell to prepare for the possibility of an oil crisis, a scenario that came to pass in 1973.17 By 2006, Bain & Company reported that 70 percent of companies it surveyed were using scenario planning.18
4. Transient Advantage. In Rita Gunther McGrath’s 2013 book, The End of Competitive Advantage, she distinguishes her ideas about the “strategy of continuous reconfiguration” from prior approaches, especially Michael Porter’s “sustainable competitive advantage.”19 McGrath argues that competitive advantage is no longer sustainable and “no longer relevant for more and more companies” because digitalization, globalization, and other factors have made the environment far too dynamic. Even Porter’s assumption that a strategy is devised for a single market can keep a company from creating strategies that address the entire “arena” in which the business operates, to use McGrath’s term. So companies must always be alert to changes anywhere in their environment and have in place the organizational structure and culture that enable them to disengage from the current strategy and to create a new one.
5. Flip-Chart Strategies. In a familiar ice-breaking exercise at management off-sites, designed to free up the imagination of the attendees, the participants are broken into small groups and asked to create a magazine cover story about their company ten or twenty years in the future. “Go wild!” they are instructed. The aim is distinctly not to point to the real possibilities looming but to imagine success beyond reason. Flip-chart strategies are designed not to be taken seriously.
* * *
Turned inside out, these five markers along the path that business strategy has followed become case studies in the perception of the nature of possibility. Let’s go back over them with that in mind.
1. During World War II, the possible actions worth considering were the ones that worked within the physical limitations of moving soldiers, supplies, and equipment, and that responded to the ever-changing, unpredictable situations on the ground. Possibilities were rooted in earth and mired in mud.
2. During the Cold War, possibilities arrived in the nose cones of unstoppable missiles that didn’t have to worry about fighting their way through enemy-held territory, about whether the rivers were swollen and the bridges intact, about how to get fresh water and canned rations to hungry soldiers. The missile launches in this model occurred in “turns”—retaliation for the enemy missiles that’d just been launched, followed by a counterretaliation—with nothing stopping the combatant nations except their willingness to press the button again. Never before had war seemed so much like playing chess.
Indeed, RAND’s approach removed the real world, with its mountains, rutted roads, and broken axles, from the “gameplay.” Thus, the possibilities were dictated by the logic of the game, not by the physical impediments. That logic consisted of the set of assumptions the combatants had about the rationality of their opponents, their willingness to sacrifice their populations, and the like.20 As you followed each possible branch of the tree of possibilities, you summed the costs and gains, the risks and rewards of this terrifying game of life, or more appropriately, game of Global Thermonuclear War.21
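One way to formalize this branch-by-branch summing of risks and rewards is an expected-value walk over a tree of moves. The payoffs and probabilities below are invented for illustration; they stand in for the assumptions the analysts would have argued over.

```python
# A minimal sketch with hypothetical numbers: a leaf is a payoff, an inner
# node is a list of (probability, subtree) branches. Evaluating a strategy
# means summing the probability-weighted payoffs down every branch.

def expected_value(node):
    """Recursively sum probability-weighted payoffs down the tree."""
    if isinstance(node, (int, float)):   # leaf: a raw payoff
        return node
    return sum(p * expected_value(child) for p, child in node)

# Toy escalation scenarios (payoffs in arbitrary negative "cost" units).
strategies = {
    "stand down":   -5,
    "first strike": [(0.8, -100), (0.2, -20)],   # retaliation likely
    "negotiate":    [(0.6, 0), (0.4, -10)],      # talks may collapse
}

for name, tree in strategies.items():
    print(name, expected_value(tree))
```

The logic of the game, not the terrain, decides everything here: change an assumed probability of retaliation and a different strategy comes out “best,” which is exactly why those assumptions carried so much weight.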
3. The scenario planning that began at Royal Dutch Shell treats possibilities as much more real. It does not assume that the movers are rational actors following relatively simple rules independent of what’s happening on the ground. It instead looks at the ground and sees complex, potentially disruptive forces at play that can only be ascertained by taking a wide view anchored in deep factual analysis. It asks how the world might surprise us with changes in every dimension, from the climate to disease to the rise and fall of despots. This is much closer to the World War II military’s on-the-ground understanding of possibilities than to the Cold War’s gamelike logical possibilities, except Shell’s scenario planning considers disruptions in the context itself and proceeds with less urgency, with more information, and from far more comfortable quarters.