Thinking in Bets

by Annie Duke


  Other violations of the Mertonian norm of universalism: shooting the message because we don’t think much of the messenger. Watch for any sweeping term about someone, particularly when we equate our assessment of an idea with a sweeping personality or intellectual assessment of the person delivering it, such as “gun nut,” “bleeding heart,” “East Coast,” “Bible belter,” or “California values”; such labels crop up especially around political or social issues. Also be on guard for the reverse: accepting a message because of the messenger, or praising a source immediately after finding out it confirms your thinking.

  Signals that we have zoomed in on a moment, out of proportion with the scope of time: “worst day ever,” “the day from hell.”

  Expressions that explicitly signal motivated reasoning, accepting or rejecting information without much evidence, like “conventional wisdom” or “if you ask anybody” or “Can you prove that it’s not true?” Similarly, look for signs that you’re participating in an echo chamber, like “everyone agrees with me.”

  The word “wrong,” which deserves its own swear jar. The Mertonian norm of organized skepticism allows little place in exploratory discussion for the word “wrong.” “Wrong” is a conclusion, not a rationale. And it’s not a particularly accurate conclusion since, as we know, nearly nothing is 100% or 0%. Any words or thoughts denying the existence of uncertainty should be a signal that we are heading toward a poorly calibrated decision.

  Lack of self-compassion: “I have the worst judgment on relationships,” “I should have known,” “How could I be so stupid?” If we’re going to be self-critical, the focus should be on the lesson and how to calibrate future decisions.

  Signals we’re being overly generous editors when we share a story. Especially in our truthseeking group, are we straying from sharing the facts to emphasize our version? Even outside our group, unless we’re sharing a story purely for entertainment value, are we angling for our listener to agree with us? In general, are we violating the Mertonian norm of communism?

  Infecting our listeners with a conflict of interest, including our own conclusion or belief when asking for advice or informing the listener of the outcome before getting their input.

  Terms that discourage engagement of others and their opinions, including expressions of certainty and also initial phrasing inconsistent with that great lesson from improvisation—“yes, and . . .” That includes getting opinions or information from others and starting with “no” or “but . . .”

  This is by no means a complete list, but it provides a flavor of the kinds of statements and thinking that should trigger vigilance on our part.

  Once we recognize that we should watch out for particular words, phrases, and thoughts, saying or thinking them becomes a breach of a contract, a commitment to truthseeking. These terms are signals that we’re succumbing to bias. Catching ourselves saying or thinking these things can trigger a moment of reflection, interrupting us in the moment. Popping out of the moment like that can remind us of why we took the trouble to list terms that signal potential decision traps.

  The swear jar is a simple example of a Ulysses contract in action: we think ahead to a hazard in our decision-making future and devise a plan of action around that, or at least commit that we will take a moment to recognize we are veering away from truthseeking. Better precommitment contracts result from better anticipation of what the future might look like, what kinds of decisions we want to avoid, and which ones we want to promote. That takes thoughtful reconnaissance.

  Reconnaissance: mapping the future

  Operation Overlord, the Allied forces operation to retake German-occupied France starting in Normandy, was the largest seaborne invasion in military history. It involved planning and logistics on an unprecedented scale. What if the forces were delayed at the start by bad weather? What if the airborne landing force had trouble communicating by radio due to the terrain? What if significant numbers of paratroopers were blown off course? What if currents interfered with the beach landings? What if the forces on the different beaches remained separated? Countless things could go wrong, with tens of thousands of lives at stake and, potentially, the outcome of the war.

  All those things did go wrong, along with many other challenges encountered on D-Day and immediately thereafter. The Normandy landings still succeeded, though, because the Allies had prepared for as many potential scenarios as possible. Reconnaissance has been part of advance military planning for as long as horses have been used in battle. The modern military, of course, has evolved from sending scouts out on horseback to report back to the main force, to planes, drones, satellites, and other high-tech equipment gathering information about what to expect in battle.

  The Navy SEAL team that caught and killed Osama bin Laden wouldn’t have entered his compound without knowing what they would find beyond the walls. What buildings were there? What was their layout and purpose? What differences might it make if they conducted the raid in different kinds of weather or different times of day? What other people would be present and what risks would they pose? What would they do if bin Laden wasn’t there? What was the team trying to commit to, given what they knew about each of those things (and, of course, numerous others)? Just as they relied on reconnaissance, we shouldn’t plan our future without doing advance work on the range of futures that could result from any given decision and the probabilities of those futures occurring.

  For us to make better decisions, we need to perform reconnaissance on the future. If a decision is a bet on a particular future based on our beliefs, then before we place a bet we should consider in detail what those possible futures might look like. Any decision can result in a set of possible outcomes.

  Thinking about what futures are contained in that set (which we do by putting memories together in a novel way to imagine how things might turn out) helps us figure out which decisions to make.

  Figure out the possibilities, then take a stab at the probabilities. To start, we imagine the range of potential futures. This is also known as scenario planning. Nate Silver, who compiles and interprets data with an eye toward its best strategic use, frequently takes a scenario-planning approach. Instead of using data to reach a particular conclusion, he sometimes discusses all the scenarios the data could support. In early February 2017, he described the merits of scenario planning: “When faced with highly uncertain conditions, military units and major corporations sometimes use an exercise called scenario planning. The idea is to consider a broad range of possibilities for how the future might unfold to help guide long-term planning and preparation.”

  After identifying as many of the possible outcomes as we can, we want to make our best guess at the probability of each of those futures occurring. When I consult with enterprises on building decision trees and determining probabilities of different futures, people frequently resist guessing at the probability of future events, mainly because they feel they can’t be certain of any scenario’s likelihood. But that’s the point.

  We do reconnaissance because we are uncertain. We don’t (and likely can’t) know with exact precision how often things will turn out a certain way. It’s not about approaching our predictions of the future from a point of perfection. It’s about acknowledging that we’re already making a prediction about the future every time we make a decision, so we’re better off if we make that prediction explicit. If we’re worried about guessing, we’re already guessing: we are guessing that the decision we execute will give us the highest likelihood of a good outcome among the options available to us. By at least trying to assign probabilities, we naturally move away from the default of 0% or 100%, away from being sure it will turn out one way and not another. Anything that moves us off those extremes is a more reasonable assessment than not trying at all. Even if our assessment results in a wide range, like the chances of a particular scenario occurring being between 20% and 80%, that is still better than not guessing at all.

  This kind of reconnaissance of the future is something that experienced poker players are very familiar with. Before making a bet, poker players consider each of their opponents’ possible responses (fold, call, raise), and the likelihood and desirability of each. They also think about what they will do in response (if some or all of the opponents don’t fold). Even if you don’t know much about poker, it should make sense that a player is better off considering these things before they bet. The more expert the player, the further into the future they plan. Before making that decision to bet, the expert player is anticipating what they’ll do following each response, as well as how the action they take now affects their future decisions in the hand. The best players think beyond the current hand into subsequent hands: how do the actions of this hand affect how they and their opponents make decisions on future hands? Poker players really live in this probabilistic world of, “What are the possible futures? What are the probabilities of those possible futures?” And they get very comfortable with the fact that they don’t know exactly, because they can’t see their opponents’ cards.
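As a toy illustration of weighing an opponent's possible responses, here is a minimal sketch of the probability-weighted reasoning described above. Every probability and payoff below is invented purely for illustration; none of these numbers come from the book or from actual poker strategy:

```python
# Hypothetical expected-value sketch for a poker bet.
# An opponent can fold, call, or raise; we assign each response a guessed
# probability and a rough average payoff (in chips) for us.
# All numbers are made up for illustration.
responses = {
    "fold":  {"prob": 0.50, "payoff": 100},   # we win the pot outright
    "call":  {"prob": 0.35, "payoff": 40},    # rough average result when called
    "raise": {"prob": 0.15, "payoff": -80},   # rough average result when raised
}

# Probabilities over the set of possible futures must sum to 1.
assert abs(sum(r["prob"] for r in responses.values()) - 1.0) < 1e-9

# Expected value of the bet: the probability-weighted average payoff.
ev = sum(r["prob"] * r["payoff"] for r in responses.values())
print(f"Expected value of betting: {ev:+.1f} chips")
```

The point is not the particular numbers but the habit: enumerate the possible futures, guess their probabilities, and let the weighted average guide the decision.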

  This is true of most strategic thinking. Whether it involves sales strategies, business strategies, or courtroom strategies, the best strategists are considering a fuller range of possible scenarios, anticipating and considering the strategic responses to each, and so on deep into the decision tree.

  This kind of scenario planning is a form of mental time travel we can do on our own. It works even better when we do it as part of a scenario-planning group, particularly one that is open-minded to dissent and diverse points of view. Diverse viewpoints allow for the identification of a wider variety of scenarios deeper into the tree, and for better estimates of their probability. In fact, if two people in the group are really far off on an estimate of the likelihood of an outcome, that is a great time to have them switch sides and argue the other’s position. Generally, the answer is somewhere in the middle and both people will end up moderating their positions. But sometimes one person has thought of a key influencing factor the other hasn’t and that is revealed only because the dissent was tolerated.

  In addition to increasing decision quality, scouting various futures has numerous additional benefits. First, scenario planning reminds us that the future is inherently uncertain. By making that explicit in our decision-making process, we have a more realistic view of the world. Second, we are better prepared for how we are going to respond to different outcomes that might result from our initial decision. We can anticipate positive or negative developments and plan our strategy, rather than being reactive. Being able to respond to the changing future is a good thing; being surprised by the changing future is not. Scenario planning makes us nimbler because we’ve considered and are prepared for a wider variety of possible futures. And if our reconnaissance has identified situations where we are susceptible to irrationality, we can try to bind our hands with a Ulysses contract. Third, anticipating the range of outcomes also keeps us from unproductive regret (or undeserved euphoria) when a particular future happens. Finally, by mapping out the potential futures and probabilities, we are less likely to fall prey to resulting or hindsight bias, in which we gloss over the futures that did not occur and behave as if the one that did occur must have been inevitable, because we have memorialized all the possible futures that could have happened.

  Scenario planning in practice

  A few years ago, I consulted with a national nonprofit organization, After-School All-Stars (ASAS), to work with them on incorporating scenario planning into their budgeting.* ASAS, founded in 1992 by Arnold Schwarzenegger, provides three hours of structured after-school programming for over 70,000 underserved youth in eighteen cities across the United States. They depend heavily on grants for funding and were struggling with budget planning, given the uncertainty in the grant award process. To help them with planning, I asked for a list of all their grant applications and how much each grant was worth. They provided me with a list of all their outstanding grant applications and the award amounts applied for. I told them that I didn’t see how much each grant was worth in the information they provided. They pointed to the column of the award amounts sought. At that point, I realized we were working from different ideas about how to determine worth. The misunderstanding came from the disconnect between the expected value of each grant and the amount they would be awarded if they got the grant.*

  Coming up with the expected value of each grant involves a simple form of scenario planning: imagining the two possible futures that could result from the application (awarded or declined) and the likelihood of each future. For example, if they applied for a $100,000 grant that they would win 25% of the time, that grant would have an expected value of $25,000 ($100,000 × .25). If they expected to get the grant a quarter of the time, then it wasn’t worth $100,000; it was worth a quarter of $100,000. A $200,000 application with a 10% chance of success would have an expected value of $20,000. A $50,000 grant with a 70% chance of success would be worth $35,000. Without thinking probabilistically in this way, determining a grant’s worth isn’t possible—it leads to the mistaken belief that the $200,000 grant is worth the most when, in fact, the $50,000 grant is. ASAS recognized that uncertainty was causing them problems (to the point where they felt enslaved by it in their budgeting), but they hadn’t wrapped uncertainty into their planning or resource-allocation process. They were flying by the seat of their pants.
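The expected-value arithmetic in this example is simple enough to write out directly. Here is a minimal sketch using the three grants above, with the award amounts and win probabilities taken from the example:

```python
# Expected value of each grant application: award amount x probability of
# winning. The figures are the ones used in the example above.
grants = [
    {"name": "$100,000 at 25%", "amount": 100_000, "p_win": 0.25},
    {"name": "$200,000 at 10%", "amount": 200_000, "p_win": 0.10},
    {"name": "$50,000 at 70%",  "amount":  50_000, "p_win": 0.70},
]

for g in grants:
    g["ev"] = g["amount"] * g["p_win"]
    print(f'{g["name"]}: expected value ${g["ev"]:,.0f}')

# Ranked by expected value, the $50,000 grant comes out on top,
# even though its potential award is the smallest.
best = max(grants, key=lambda g: g["ev"])
```

Sorting the application stack by `ev` rather than by `amount` is exactly the reprioritization ASAS adopted.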

  After I worked with the national office, ASAS made estimating the likelihood of getting each grant part of their planning. The benefits they got from scenario planning were immediate and substantial:

  They created a more efficient and productive work stack. Before doing this exercise, they naturally put a higher priority on applications seeking larger dollar amounts, executing those first, putting more senior staff on them, and being more likely to hire outside grant writers to get those applications completed. By shifting to thinking about the probability of getting each grant, they could prioritize by how much each grant was actually worth to the organization. Thereafter, the higher-value grants went to the top of the stack rather than just the grants with the larger potential awards.

  They could budget more realistically. They had greater confidence in advance estimates of the amount of money they could expect to receive.

  Because coming up with an expected value required estimating the likelihood of getting each grant, they increasingly focused on improving the accuracy of their estimates. This prompted them to close the loop by going back to the grantors. Previously, they had followed up with grantors only after rejections; because they were now checking and calibrating their probabilities, they expanded this follow-up to the grants they won as well. Overall, their post-outcome reviews focused on understanding what worked, what didn’t work, what was luck, and how to do better, improving both their probability estimates and the quality of their grant applications.

  They could think about ways to increase the probability of getting grants and commit to those actions.

  They were less likely to fall prey to hindsight bias because they had considered in advance the probability of getting or not getting the grant.

  They were less likely to fall prey to resulting because they had evaluated the decision process in advance of getting or not getting the grant.

  Finally, because of all the benefits ASAS received from incorporating scenario planning into budgeting and grant applications, they expanded the implementation of this type of scenario planning across departments, making it part of their decision-making culture.

  Grant prospecting is similar to sales prospecting, and this process can be implemented for any sales team. Assign probabilities for closing or not closing sales, and the company can do better at establishing sales priorities, planning budgets and allocating resources, evaluating and fine-tuning the accuracy of its predictions, and protecting itself against resulting and hindsight bias.

  A more complex version of scenario planning occurs when the number of possible futures multiplies and/or we go deeper into the tree, considering what we will do next in response to how things turn out and what the set of outcomes of that next decision is and so on.

  Consider the scenario planning involved in Seahawks coach Pete Carroll’s much-criticized Super Bowl decision, trailing by four, twenty seconds remaining, one time-out, second-and-goal at the Patriots’ one-yard line. Carroll has two general choices, run or pass, and they in turn lead to multiple scenarios.

  If Carroll calls for a run, these are the possible futures: (a) touchdown (immediate win); (b) turnover-fumble (immediate loss); (c) tackle short of the goal line; (d) offensive penalty; and (e) defensive penalty. Futures (c)–(e) branch into additional scenarios. The most likely failure scenario, by far, is that the runner is tackled before reaching the end zone. Seattle could stop the clock with its final time-out, but if they run the ball again and do not score, time will expire.

  If Carroll calls for a pass, the possible futures are (a) touchdown (immediate success); (b) turnover-interception (immediate failure); (c) incomplete pass; (d) sack; (e) offensive penalty; and (f) defensive penalty. Again, the first two futures essentially end the game and the others branch off into additional play calling and additional outcomes.

  The main difference between passing and running is that calling a pass likely gives Seattle a total of three plays to score, instead of two if Carroll calls a running play. An unsuccessful run would require that Seattle use their final time-out to stop the clock so they could run a second play. An incomplete pass would stop the clock and leave Seattle with a time-out and the chance to call those same two running plays. An interception, which negates the possibility of a second or third offensive play, is only a 2%–3% probability, a small price to pay for three chances to score rather than two. (A turnover caused by a fumble on a running play is 1%–2%.)*
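A rough way to see the three-plays-versus-two logic is to model the two play sequences. In the sketch below, the turnover rates (2.5% interception, 1.5% fumble) come from the ranges in the text, but the per-play touchdown probability is a made-up placeholder, and the model ignores penalties, sacks, and clock details entirely:

```python
# Simplified model of Carroll's decision: each play either scores a
# touchdown, ends the game on a turnover, or leaves the next play in
# the sequence. P_TD is a hypothetical per-play scoring rate; the
# turnover rates use the interception (2%-3%) and fumble (1%-2%)
# figures from the text.
P_TD = 0.35          # hypothetical chance a single goal-line play scores
P_TURNOVER = {"pass": 0.025, "run": 0.015}

def p_score(plays):
    """Chance of eventually scoring given the remaining play sequence."""
    if not plays:
        return 0.0
    first, rest = plays[0], plays[1:]
    p_continue = 1.0 - P_TD - P_TURNOVER[first]
    return P_TD + p_continue * p_score(rest)

run_first  = p_score(["run", "run"])          # two plays (timeout needed)
pass_first = p_score(["pass", "run", "run"])  # incompletion preserves a third play

print(f"Run-first scoring chance:  {run_first:.3f}")
print(f"Pass-first scoring chance: {pass_first:.3f}")
```

Under these assumed numbers the pass-first sequence scores more often, because the extra play outweighs the small additional interception risk; with different assumed values the comparison could shift, which is exactly why mapping the futures and their probabilities matters.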

 
