
Thinking in Bets


by Annie Duke


  Hall of Fame football coach John Madden, in a documentary about Vince Lombardi, told a story about how, as a young assistant coach, he attended a coaching clinic where Lombardi spoke about one play: the power sweep, a running play that he made famous with the Green Bay Packers in the 1960s. Lombardi held the audience spellbound as he described that one play for eight hours. Madden said, “I went in there cocky, thinking I knew everything there was to know about football, and he spent eight hours talking about this one play. . . . I realized then that I actually knew nothing about football.”

  We are naturally reluctant to share information that could encourage others to find fault in our decision-making. My group made this easier by making me feel good about committing myself to improvement. When I shared details that cast me in what I perceived to be a bad light, I got a positive self-image update from the approval of players I respected. In my consulting, I’ve encouraged companies to make sure they don’t define “winning” solely by results or providing a self-enhancing narrative. If part of corporate success consists of providing the most accurate, objective, and detailed evaluation of what’s going on, employees will compete to win on those terms. That will reward better habits of mind.

  Agree to be a data sharer and reward others in your decision group for telling more of the story.

  Universalism: don’t shoot the message

  The well-known advice “don’t shoot the messenger” is actually good shorthand for the reasons why we want to protect and encourage dissenting ideas. Plutarch’s Life of Lucullus provided an early, literal example: the king of Armenia got advance notice that Lucullus’s troops were approaching. He killed the messenger for delivering that message and, henceforth, messengers stopped reporting such intelligence. Obviously, if you don’t like the message, you shouldn’t take it out on the messenger.

  The Mertonian norm of universalism is the converse. “Truth-claims, whatever their source, are to be subjected to preestablished impersonal criteria.” It means acceptance or rejection of an idea must not “depend on the personal or social attributes of their protagonist.” “Don’t shoot the message,” for some reason, hasn’t gotten the same historical or literary attention, but it addresses an equally important decision-making issue: don’t disparage or ignore an idea just because you don’t like who or where it came from.

  When we have a negative opinion about the person delivering the message, we close our minds to what they are saying and miss a lot of learning opportunities because of it. Likewise, when we have a positive opinion of the messenger, we tend to accept the message without much vetting. Both are bad.

  Whether the situation involves facts, ideas, beliefs, opinions, or predictions, the substance of the information has merit (or lack of merit) separate from where it came from. If you’re deciding the truth of whether the earth is round, it doesn’t matter if the idea came from your best friend or George Washington or Benito Mussolini. The accuracy of the statement should be evaluated independent of its source.

  I learned an early lesson in my poker career about universalism. I started playing poker using that list of hands my brother Howard wrote on a napkin. I treated this initial advice like a holy document. Therefore, when I saw someone playing hands off-list, I immediately labeled them as a bad player. When I saw such a player subsequently execute a strategy I didn’t have as part of my game, I dismissed it. Doing that across the board (especially when I was labeling these players as “bad” based on one view of a sound beginner’s strategy) was an expensive lesson in universalism. For so many things going on at the table in the first years of my poker career, I shot the message.

  I was guilty of the same thing David Letterman admitted in his explanation to Lauren Conrad. He spent a long time assuming people around him were idiots before considering the alternative hypothesis, “Maybe I’m the idiot.” In poker, I was the idiot.

  As I learned that Howard’s list was just a safe way to get me started and not the Magna Carta scrawled in crayon on a napkin, I developed an exercise to practice and reinforce universalism. When I had the impulse to dismiss someone as a bad player, I made myself find something that they did well. It was an exercise I could do for myself, and I could get help from my group in analyzing the strategies I thought those players might be executing well. That commitment led to many benefits.

  Of course, I learned some new and profitable strategies and tactics. I also developed a more complete picture of other players’ strategies. Even when I determined that the strategic choices of that player weren’t, in the end, profitable, I had a deeper understanding of my opponent’s game, which helped me devise counter-strategies. I had started thinking more deeply about the way my opponents thought. And in some instances, I recognized that I had underestimated the skills of certain players who I initially thought were profitable for me to play against. That led me to make more objective decisions about game selection. And my poker group benefited from this exercise as well because, in workshopping the strategies with each other, we multiplied the number of playing techniques we could observe and discuss. Admitting that the people I played against had things to teach me was hard, and my group helped me feel proud of myself when I resisted the urge to just complain about how lucky my opponents were.

  Nearly any group can create an exercise to develop and reinforce the open-mindedness universalism requires. As an example, with politics so polarized, we forget the obvious truth that no one has only good ideas or only bad ideas. Liberals would do well to take some time to read and watch more conservative news sources, and conservatives would do well to take some time to read and watch more liberal news sources—not with the goal of confirming that the other side is a collection of idiots who have nothing of value to say but to specifically and purposely find things they agree with. When we do this, we learn things we wouldn’t otherwise have learned. Our views will likely become moderated in the same way judges from opposite sides of the political aisle moderate each other. Even if, in the end, we don’t find much to agree with, we will understand the opposing position better—and ours as well. We’ll be practicing what John Stuart Mill preached.

  Another way to disentangle the message from the messenger is to imagine the message coming from a source we value much more or much less. If we hear an account from someone we like, imagine if someone we didn’t like told us the same story, and vice versa. This can be incorporated into an exploratory group’s work, asking each other, “How would we feel about this if we heard it from a much different source?” We can take this process of vetting information in the group further, initially and intentionally omitting where or whom we heard the idea from. Leading off our story by identifying the messenger could interfere with the group’s commitment to universalism, biasing them to agree with or discredit the message depending on their opinion of the messenger. So leave the source out to start, giving the group the maximum opportunity to form an impression without shooting (or celebrating) the message based on their opinion of the messenger (separate from the expertise and credibility of the messenger).

  John Stuart Mill made it clear that the only way to gain knowledge and approach truth is by examining every variety of opinion. We learn things we didn’t know. We calibrate better. Even when the result of that examination confirms our initial position, we understand that position better if we open ourselves to every side of the issue. That requires open-mindedness to the messages that come from places we don’t like.

  Disinterestedness: we all have a conflict of interest, and it’s contagious

  Back in the 1960s, the scientific community was at odds about whether sugar or fat was the culprit in the increasing rates of heart disease. In 1967, three Harvard scientists conducted a comprehensive review of the research to date, published in the New England Journal of Medicine, that firmly pointed the finger at fat. The paper was, not surprisingly, influential in the debate on diet and heart disease. After all, the NEJM is and was a prestigious publication, and the researchers were, all three, from Harvard. Blaming fat and exonerating sugar shaped the diets of hundreds of millions of people for decades, and the resulting shift in eating habits has been linked to the massive increase in rates of obesity and diabetes.

  The influence of this paper and its negative effects on America’s eating habits and health provides a stunning demonstration of the imperative of disinterestedness. It was recently discovered that a trade group representing the sugar industry had paid the three Harvard scientists to write the paper, according to an article published in JAMA Internal Medicine in September 2016. Not surprisingly, consistent with the agenda of the sugar industry that had paid them, the researchers attacked the methodology of studies finding a link between sugar and heart disease and defended studies finding no link. The scientists’ attacks on and defenses of the methodology of studies on fat and heart disease followed the same pro-sugar pattern.

  The scientists involved are all dead, so we can’t ask them, but it’s possible they never consciously knew they were being influenced. Given human nature, they likely would have defended the truth of what they wrote and denied that the sugar industry dictated or influenced their thinking on the subject. Regardless, had the conflict of interest been disclosed, the scientific community would have viewed their conclusions with much more skepticism, taking into account the possibility of bias due to the researchers’ financial interest. At the time, the NEJM did not require such disclosures. (That policy changed in 1984.) The omission prevented an accurate assessment of the findings, resulting in serious harm to the health of the nation.

  We tend to think about conflicts of interest in the financial sense, like the researchers getting paid by the sugar industry. But conflicts of interest come in many flavors. Our brains have built-in conflicts of interest, interpreting the world around us to confirm our beliefs, to avoid having to admit ignorance or error, to take credit for good results following our decisions, to find reasons bad results following our decisions were due to factors outside our control, to compare well with our peers, and to live in a world where the way things turn out makes sense. We are not naturally disinterested. We don’t process information independent of the way we wish the world to be.

  Remember the thought experiment I suggested at the beginning of the book about what the headlines would have looked like if Pete Carroll’s pass call had won the 2015 Super Bowl? Those headlines would have been about his brilliance. People would have analyzed Carroll’s decision differently. Knowing how something turned out creates a conflict of interest that expresses itself as resulting.

  Richard Feynman recognized that in physics—a branch of science that most of us consider as objective as 2 + 2 = 4—there is still demonstrable outcome bias. He found that if those analyzing data knew, or could even just intuit, the hypothesis being tested, the analysis would be more likely to support it. The measurements might be objective, but the slicing and dicing of the data is vulnerable to bias, even an unconscious one. According to Robert MacCoun and physics Nobel laureate Saul Perlmutter in a 2015 Nature article, outcome-blind analysis has spread to several areas of particle physics and cosmology, where it “is often considered the only way to trust many results.” Because the idea—introducing a random variable so that those analyzing the data could not surmise the outcome the researcher might be hoping for—is hardly known in the biological, psychological, and social sciences, the authors concluded these methods “might improve trust and integrity in many sciences, including those with high-stakes analyses that are easily plagued by bias.” Outcome blindness enforces disinterestedness.
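
  To make the mechanics of outcome blinding concrete, here is a minimal sketch in Python, assuming the simplest version of the technique MacCoun and Perlmutter describe: a secret random offset is added to the data before analysis and removed only after every analytic choice is frozen. The function names, seed handling, and offset range are illustrative assumptions, not the procedure of any actual physics collaboration.

```python
import random

def blind(measurements, seed):
    """Shift every value by a secret offset so analysts can't read off the result."""
    rng = random.Random(seed)          # the seed is held by someone outside the analysis
    offset = rng.uniform(-10.0, 10.0)  # hidden shift; this range is an arbitrary choice
    return [m + offset for m in measurements], offset

def unblind(blinded_values, offset):
    """Remove the offset only after all cuts, fits, and methods are locked in."""
    return [b - offset for b in blinded_values]

# Analysts tune their methods on the blinded data, commit to them,
# and only then unblind to see the actual result.
raw_measurements = [4.9, 5.1, 5.0, 5.2]
blinded, secret_offset = blind(raw_measurements, seed=1234)
final = unblind(blinded, secret_offset)
```

  The offset does no statistical work; its value is purely psychological. Because no one can tell how close the blinded numbers are to the hoped-for answer, analysts can’t steer their choices, consciously or not, toward a preferred outcome.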

  We can apply this idea of outcome blindness to the way we communicate information as we workshop decisions about our far more ambiguous pursuits—like describing a poker hand, or a family argument, or the results of a market test for a new product. If the group is going to help us make and evaluate decisions in an unbiased way, we don’t want to infect them in the way the data analysts were infected if they could surmise the hypothesis being tested. Telling someone how a story ends encourages them to be resulters, to interpret the details to fit that outcome. If I won a hand, it was more likely my group would assess my strategy as good. If I lost, the reverse would be true. Win a case at trial, the strategy is brilliant. Lose, and mistakes were made. We treat outcomes as good signals for decision quality, as if we were playing chess. If the outcome is known, it will bias the assessment of the decision quality to align with the outcome quality.

  If the group is blind to the outcome, it produces a higher-fidelity evaluation of decision quality. The best way to do this is to deconstruct decisions before an outcome is known. Attorneys can evaluate trial strategy before the verdict comes in. Sales teams can evaluate strategy before learning whether they’ve closed the sale. Traders can vet process before positions are established or options expire. After the outcome, make it a habit when seeking advice to give the details without revealing the outcome. In poker, it isn’t practical to analyze hands before knowing how they turn out, since the results come within seconds of the decisions. To address this, expert poker players often omit the outcome when seeking advice about their play.

  This became such a natural habit that I didn’t realize, until I started conducting poker seminars for players newer to the game, that this was not the norm for everyone. When I used hands I had played as illustrations, I would describe the hand up to the decision point I was discussing and no further, leaving off how the hand ended. This was, after all, how I had been trained by my poker group. When we finished the discussion, it was jarring to watch a roomful of people look at me like I had left them teetering on the edge of a cliff.

  “Wait! How did the hand turn out?”

  I gave them the red pill: “It doesn’t matter.”

  Of course, we don’t have to be describing a poker hand to use this strategy to promote disinterestedness. Anyone can provide the narrative only up to the point of the decision under consideration, leaving off the outcome so as not to infect their listeners with bias. And outcomes aren’t the only problem. Beliefs are also contagious. If our listeners know what we believe to be true, they will likely work pretty hard to justify our beliefs, often without even knowing they are doing it. They will have an ideological conflict of interest, created by our telling them what we believe. So when trying to vet some piece of information, some fact or opinion, we would do well to shield our listeners from our own opinion while we seek the group’s.

  Simply put, the group is less likely to succumb to ideological conflicts of interest when they don’t know what the interest is. That’s MacCoun and Perlmutter’s point.

  Another way a group can de-bias members is to reward them for skill in debating opposing points of view and finding merit in opposing positions. When members of the group disagree, a debate may be of only marginal value because the people debating are biased toward confirming their position, often creating stalemate. If two people disagree, a referee can get them to each argue the other’s position with the goal of being the best debater. This acts to shift the interest to open-mindedness to the opposing opinion rather than confirmation of their original position. They can’t win the debate if they can’t forcefully and credibly argue the other side. The key is for the group to have a charter that rewards objective consideration of alternative hypotheses so that winning the debate feels better than supporting the pre-existing position. The group’s reinforcement ought to discourage us from creating a straw-man argument when we’re arguing against our beliefs, and encourage us to feel good about winning the debate. This is one of the reasons it’s good for a group to have at least three members, two to disagree and one to referee.

  What I’ve generally found is that two people whose positions on an issue are far apart will move toward the middle after a debate or skilled explanation of the opposing position. Engaging in this type of exchange creates an understanding of and appreciation for other points of view much deeper and more powerful than just listening to the other perspective. Ultimately, it gives us a deeper understanding of our own position. Once again, we are reminded of John Stuart Mill’s assertion that this kind of open-mindedness is the only way to learn.

  Organized skepticism: real skeptics make arguments and friends

  Skepticism gets a bum rap because it tends to be associated with negative character traits. Someone who disagrees could be considered “disagreeable.” Someone who dissents may be creating “dissension.” Maybe part of it is that “skeptical” sounds like “cynical.” Yet true skepticism is consistent with good manners, civil discourse, and friendly communications.

  Skepticism is about approaching the world by asking why things might not be true rather than why they are true. It’s a recognition that, while there is an objective truth, not everything we believe about the world is true. Thinking in bets embodies skepticism by encouraging us to examine what we do and don’t know and what our level of confidence is in our beliefs and predictions. This moves us closer to what is objectively true.

 
