MULLING
As rich as the history of computational decision-making may be—from Bentham to Google’s self-driving car—I think it’s fair to say that most of us end up making complex decisions without actually doing any math. This is probably not a bad thing. The most important work lies in the way we frame the decision, the strategies we use to overcome all the challenges of bounded rationality: exploring multiple perspectives, building scenario plans, identifying new options. If we’ve done a thorough job working through the mapping and predicting phases, the actual choice often becomes self-evident. This, too, is one of those places where the brain’s default network is so important. Your mind is amazingly gifted at mulling over a complicated decision, imagining how it might affect other people, imagining how you yourself might respond to different outcomes. We all make those intuitive scenario plans as a background process with extraordinary skill. The problem is that our visibility is often limited when we create those scenarios, so we miss crucial variables, or we get stuck in one assumption about how events will likely transpire, or we fail to see a third option that might actually reconcile conflicting objectives. So the mapping and predicting stages of a complex choice are really about giving the default network more material to process.
You can map all the variables, “red team” your assumptions, and build scenario plans for different options, but in the end, the final decision usually turns out to be closer to art than science. All the exercises of mapping and predicting—and the diversity of perspectives brought into the conversation—can open up new possible options that weren’t visible at the outset, or help you see why your first instincts were the wrong ones, the way the Obama team slowly came to see the possibility that the compound might indeed be housing their archenemy. If you’re lucky, investing the time and contemplation into the decision process takes you to a place where the choice becomes clear.
But sometimes the answer is murkier, and you have to make the tough call between a few remaining options, each of which promises a different mix of pain and pleasure to the individuals affected by the choice. In those situations, keeping score can sometimes be clarifying—as in the linear values approach. It can certainly help to think about the decision in computational terms if you are making group choices that involve different stakeholders with different objectives and values. But for choices with a small number of decision-makers, the best approach is often an old-fashioned one: give your mind the free time to mull it over. In a sense, the preparation for the choice should involve state-of-the-art strategies: premortems, scenario plans, expert roles, stakeholder charrettes. But once those exercises have widened your perspective and helped you escape your initial gut reactions, the next step is to let it all sink in and let the default network do its magic. Go for long walks, linger in the shower a little longer than usual, let your mind wander.
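For readers who want to see what “keeping score” looks like in practice, here is a minimal sketch of a linear values tally, written in Python. Everything in it is hypothetical: the options, objectives, weights, and ratings are invented for illustration, and the sketch captures only the general idea of weighting each objective by importance and summing the scores, not a procedure prescribed anywhere in this book.

```python
# A minimal sketch of a "linear values" score sheet: rate each option
# against each objective, weight the objectives by importance, and sum.
# All objectives, weights, options, and ratings here are hypothetical.

weights = {"cost": 0.2, "family impact": 0.5, "career growth": 0.3}

# Each option gets a 1-10 rating against every objective.
options = {
    "take the new job": {"cost": 4, "family impact": 5, "career growth": 9},
    "stay put": {"cost": 8, "family impact": 8, "career growth": 4},
}

def linear_value(ratings):
    """Weighted sum of one option's ratings across all objectives."""
    return sum(weights[obj] * score for obj, score in ratings.items())

# Print the options from highest score to lowest.
for name, ratings in sorted(options.items(), key=lambda kv: -linear_value(kv[1])):
    print(f"{name}: {linear_value(ratings):.1f}")
```

Even in this form, the arithmetic matters less than the discipline it imposes: writing down the weights forces you to articulate what you actually value before the default network goes to work.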
Hard choices demand that we train the mind to override the snap judgments of System 1 thinking, that we keep our minds open to new possibilities—starting with the possibility that our instinctive response to a situation is quite likely the wrong one. Almost every strategy described in this book ultimately pursues the same objective: helping you to see the current situation from new perspectives, to push against the limits of bounded rationality, to make a list of things that would never occur to you. These are not, strictly speaking, solutions to the problem you confront. They are prompts, hacks, nudges. They’re designed to get you outside your default assumptions, not to give you a fixed answer. But unlike the quack cures that Darwin dabbled in—along with the rest of the Victorians—many of these interventions have been supported and refined by controlled experiments. We don’t have an infallible algorithm for making wise choices, but we do have a meaningful body of techniques that can keep us from making stupid ones.
OBAMA’S CHOICE
When word first reached Washington about the strange compound on the edge of Abbottabad that al-Kuwaiti had been tracked to, almost everyone who heard the description of the residence had the same instinctive reaction: it didn’t seem like the kind of place Osama bin Laden would use as a hideout. But those gut feelings, as powerful as they must have seemed at the time, turned out to be wrong, just as the sense that Saddam Hussein must be working on some kind of WMD program turned out to be wrong as well. Yet because the intelligence community and the White House did not merely submit to their instincts about bin Laden, but instead probed and challenged them; because they took a full-spectrum approach to mapping both the question of whether bin Laden was living in the compound and the decision of how to attack it; because they built long-term predictions about the consequences of that attack and “red-teamed” those predictions—because, more than anything, they thought of the decision as a process that required time and collaboration and structured deliberation, they were able to see past the distortions of their initial instincts and make the right choice.
Neither Obama nor his direct reports appear to have done a mathematical analysis of the bin Laden decision, other than the many times they estimated the probability of bin Laden being in the compound. But in every other sense, they followed the patterns of decision-making that we have explored over the preceding chapters. In the end, Obama gathered his key team and asked each one to weigh in on the decision. Only Vice President Biden and Defense Secretary Gates voted against going in. Everyone else supported the raid, even though many—including Obama himself—thought the odds were fifty-fifty at best that they would find bin Laden on the premises. Gates would change his mind the next day; Biden remained opposed, and would later declare that Obama had “cojones of steel” for overruling his vice president and giving the green light to the raid. As is so often the case, exploring the decision, playing out all the future consequences, and letting the default network do its work had made it increasingly clear that one path led in the most promising direction. As the team investigated the four main options for attacking the compound, fatal flaws surfaced for three of them—bombing runs, a targeted drone strike, and collaboration with the Pakistanis—leaving McRaven’s plan for the raid as the last option standing.
There was no shortage of praise for the killing of the al Qaeda leader once the news was announced. It was, in the end, a rare thing in the world of espionage and counterterror operations: an unqualified success. The compound did, in fact, turn out to be the home of Osama bin Laden; in a brief firefight, bin Laden was killed and his body removed; the SEALs suffered only minor injuries. The only factor McRaven and his team had not properly mapped was the internal wind currents in the courtyard, which destabilized one of the Black Hawks as it attempted to land, causing it to crash. But even the possibility of losing one chopper was part of the scenario planning; they had ensured that the team could all reassemble on a single Black Hawk after the raid was completed. The loss of the chopper was a known unknown: you could plan for it, since it seemed within the realm of possibility that something would cause a Black Hawk to crash near the grounds. When that possible scenario became a reality, the SEALs followed the plan they had simulated in the months leading up to the raid: they blew up the downed chopper and moved on.
What do we typically celebrate when an operation like this goes well? We celebrate the courage of both the SEAL Team 6 unit and their commanders. We celebrate the decisiveness of our leaders and their intelligence in making the right choice. But these are all attributes, not actions. What made the bin Laden decision such a successful one, in the end, was the way it was approached as a problem. There was intelligence and courage and decisiveness, to be sure, but those attributes had also been on display in less successful military actions like the Battle of Brooklyn or the Bay of Pigs. There were brilliant, confident minds making the decisions during the Iran hostage rescue mission and the Iraq WMD investigation. What this team had was different: a deliberation process that forced them to investigate all the things they didn’t like about the evidence and imagine all the ways their course of action could go terribly wrong. That process mattered every bit as much as the actual execution of the raid. But that process tends to get lost in the public memory of the event, because the heroism and the spectacular violence of a moonlit raid naturally overwhelm the subtleties of the months and months spent probing the decision itself. We should want our leaders—in government, in civic life, in corporate boardrooms, on planning commissions—to show that same willingness to slow down the decision, approach it from multiple angles, and challenge their instincts. If we are going to learn from triumphs like the Abbottabad raid, the raid itself is less important than the decision process that made it possible.
When the Black Hawks landed in Jalalabad at two in the morning, carrying the body of Osama bin Laden, McRaven and the CIA station chief laid out the body to do a proper identification. They realized, after all their planning, that they had failed to secure a tape measure to confirm that the body was six foot four, the known height of bin Laden. (They ultimately found someone the same height, who lay down next to the body so they could get a rough measurement.) Several weeks later, President Obama presented McRaven with a plaque praising him for his acumen in planning the mission. The plaque featured a tape measure mounted to its surface, a reminder of one of the very few elements that the “McRaven option” had failed to anticipate. McRaven and the rest of the analysts had mapped the decision and all of its complexity with astonishing detail and foresight; they had measured the compound and its walls down to the inch. They just forgot to bring a device to measure bin Laden himself.
4
THE GLOBAL CHOICE
What happens fast is illusion, what happens slowly is reality. The job of the long view is to penetrate illusion.
• STEWART BRAND
In the early 1960s, during the craze for war games that shaped so much of Cold War military strategy, the Naval War College acquired a $10 million computer. Its purpose was not to calculate torpedo trajectories or to help plan shipbuilding budgets. It was, instead, a game machine known as the Naval Electronic War Simulator. By managing the simulations of war games, the computer could amplify the decision-making powers of the military commanders, since a computer could presumably model a much more complex set of relationships than a bunch of humans rolling dice and moving tokens around a game board. It is unclear whether the Naval Electronic War Simulator actually improved US military decision-making in the years that followed. Certainly, the ultimate path of the Vietnam War suggests that its intelligence amplification was limited at best.
The idea of a computer smart enough to assist with complex decisions may have been premature in the 1960s, but today it no longer seems like science fiction. Ensemble forecasts from meteorological supercomputers help us decide whether to evacuate a coastal area threatened by a hurricane. Cities use urban simulators to evaluate the traffic or economic impact of building new bridges, subways, or highways. The decisions that confounded some of the finest minds of the nineteenth century—the urban planners filling in Collect Pond, Darwin and his water cure—are increasingly being guided by algorithms and virtual worlds.
Supercomputers have started taking on the role that in ancient times belonged to the oracles: they allow us to peer into the future. As that foresight grows more powerful, we rely on these machines more and more to assist us in our hard choices, and perhaps even to make them for us. It’s easy enough to imagine computer simulations and forecasts helping to decide the future of Collect Pond: projecting population growth in downtown Manhattan, the ecosystem impact of destroying a freshwater resource, and the economic fortunes of the tanneries polluting that water.
Almost a hundred years ago, when Lewis Fry Richardson alluded in his “Weather Prediction by Numerical Process” essay to the “dream” of a machine that might someday be able to calculate weather forecasts, the mathematician had only imagined predictions that extended a few days into the future, far enough perhaps to bring ships into harbor before a hurricane or prepare a bustling city for a coming blizzard. Richardson would no doubt be amazed to see the state of “numerical processing” two decades into the twenty-first century: machines like the supercomputer “Cheyenne” housed in the Wyoming offices of the National Center for Atmospheric Research, which uses its vast computational power to simulate the behavior of Earth’s climate itself. Machines like Cheyenne allow us to simulate time scales that would have seemed preposterous to Richardson: decades, even centuries. The forecasts are fuzzier, of course: you can’t ask Cheyenne to tell you whether New Yorkers should dress for rain on July 13, 2087. Such machines can only tell us long-term trends—where new deserts may form, where floods may become more likely, where ice caps may melt—and even those are just probabilities. But that foresight, hazy as it may sometimes seem, is far more accurate than anything Richardson could have imagined just a century ago.
Digital technology is often blamed for the abbreviated attention spans of Snapchat and Twitter, but the fact is that computer simulations have been essential in forcing humans to confront what may be the most complex, long-term decision we have ever faced: what to do about climate change. The near-universal consensus among scientists that global warming poses a meaningful threat has emerged, in large part, thanks to the simulations of supercomputers like Cheyenne. Without the full-spectrum models that those machines are capable of building—tracking everything from planet-scale phenomena like the jet stream all the way down to the molecular properties of carbon dioxide—we would have far less confidence about the potential danger from climate change and the long-term importance of shifting to renewable energy sources. Those simulations now influence millions of decisions all across the planet, from individual choices to buy a hybrid automobile instead of a gas-powered one and community decisions to install solar panels to power public schools all the way up to decisions on the scale of signing the Paris climate accord, truly one of the most global agreements—both in its signatories and its objectives—ever reached in the history of our species.
The fact that we are capable of making these decisions should not be an excuse to rest on our laurels. I am writing these words in the fall of 2017, just a few months after the Trump administration announced that the United States would be withdrawing from the Paris Agreement. It is possible that we will look back at this period twenty or thirty years from now and see it as the beginning of a great unraveling, with more and more citizens dismissing climate change as “fake news,” paralysis deepening at the governmental level, and efforts to reduce the impact of global warming steadily undermined.
If you polled most Americans, I suspect a majority of them would say that we are getting worse at long-term decisions, that we live in a short-attention-span age that keeps us from the long view. A significant number would probably point to the damage we are doing as a species to the environment as the most conspicuous example of our shortsightedness.
It is true that the last few decades have witnessed a number of troubling trends that have materially compromised the way we make collective decisions, most of them revolving around that critical attribute of diversity. In the United States, gerrymandering reduces the ideological diversity behind the decision of who to elect to represent a district in the House of Representatives: members of Congress are increasingly elected by voting blocs that are overwhelmingly Republican or Democratic, far more homogeneous in their political worldviews than most congressional districts would have been at other points in our history. But that trend is not solely attributable to the schemes of politicians trying to ensure reelection. We are also experiencing a demographic “Big Sort,” in which our cities and inner-ring suburbs are increasingly populated by Democrats, while Republicans dominate the exurbs and the countryside. So when we come together to make any kind of local decision, we are—politically, at least—assembling teams of decision-makers that are more homogeneous and thus prone to all the failings that homogeneity brings to group decisions.
This is an often underappreciated point in the cultural debates about the importance of diversity. When we look at those images of a Trump cabinet meeting or the Republican House Caucus—all those middle-aged white men in their suits and ties—we tend to frame the lack of diversity in those groups as a problem for egalitarian or representational reasons. And that’s a perfectly valid framing. We want a cabinet that “looks like America” because that will get us closer to a world where talented people from all walks of life can find their way into the top echelons of government, and because those different walks of life will inevitably have different interests that will need to be reflected in the way we are governed. But there is another factor that we often ignore when we complain about the lack of diversity at the top of any organization in the private or public sector: Diverse groups make smarter decisions. Nowhere is the data on this clearer than in the research on gender and decision-making. If you were trying to assemble a kind of Springtime for Hitler anti–dream team, designed to fail at complex decisions, you would do well to recruit an all-male roster. So when we see a phalanx of guys signing a bill to block funding to Planned Parenthood, we should not just point out that a woman might have an understanding of Planned Parenthood’s value that a man might lack. We should also point out that a group of men is more likely to make the wrong choice about anything, not just “women’s issues.”
But despite those limitations and setbacks, we should remind ourselves that in many other realms, we are attempting to make decisions that involve time horizons and full-spectrum maps that would have been unthinkable to our great-grandparents. No one in 1960 made a decision that contemplated for even a second that decision’s impact on atmospheric carbon in 2060. Today, countless people around the globe make decisions that factor in those long-term impacts every single day, from politicians proposing new regulations that include the true cost of carbon in their cost-benefit analysis and corporate executives choosing to run their headquarters on renewable energy sources all the way down to ordinary consumers who choose to buy “green” products at the supermarket.