Thinking in Bets

by Annie Duke


  In poker, this belief-formation process can cost players a lot of money. One of the first things players learn in Texas Hold’em is a list of two-card starting hands to play or fold, based on their table position and the actions of the players before them.* When Texas Hold’em first developed in the sixties, some expert players innovated deceptive plays with middle cards consecutive in rank and of the same suit (like the six and seven of diamonds). In poker shorthand, such cards are called “suited connectors.”

  Suited connectors have the attraction of making a powerful, camouflaged straight or a flush. Expert players might choose to play these types of hands in a very limited set of circumstances, namely where they feel they could fold the hand at a small loss; successfully bluff if it doesn’t develop; or extract maximum value in later betting rounds by trapping a player with conventionally stronger starting cards when the hand does develop favorably.

  Unfortunately, the mantra of “win big or lose small with suited connectors” filtered down over the years without the subtlety of the expertise needed to play them well or the narrow circumstances needed to make those hands profitable. When I taught poker seminars, most of my students strongly believed suited connectors were profitable starting cards under pretty much any circumstances. When I asked why, I would hear “everyone knows that” or “I see players cleaning up with suited connectors all the time on TV.” But no one I asked had kept a P&L on their experience with suited connectors. “Do that,” I’d say, “and report back what you find.” Lo and behold, players who came back to me discovered they were net losers with suited connectors.

  The same belief-formation process led hundreds of millions of people to bet the quality and length of their lives on their belief about the merits of a low-fat diet. Led by advice drawn, in part, from research secretly funded by the sugar industry, Americans in one generation cut a quarter of caloric intake from fat, replacing it with carbohydrates. The U.S. government revised the food pyramid to include six to eleven servings of carbohydrates and advised that the public consume fats sparingly. It encouraged the food industry (which enthusiastically followed) to substitute starch and sugar to produce “reduced-fat” foods. David Ludwig, a Harvard Medical School professor and doctor at Boston Children’s Hospital, summarized the cost of substituting carbs for fats in the Journal of the American Medical Association: “Contrary to prediction, total calorie intake increased substantially, the prevalence of obesity tripled, the incidence of type 2 diabetes increased many-fold, and the decades-long decrease in cardiovascular disease plateaued and may reverse, despite greater use of preventive drugs and surgical procedures.”

  Low-fat diets became the suited connectors of our eating habits.

  Even though our default is “true,” if we were good at updating our beliefs based on new information, our haphazard belief-formation process might cause relatively few problems. Sadly, this is not the way it works. We form beliefs without vetting most of them, and maintain them even after receiving clear, corrective information. In 1994, Hollyn Johnson and Colleen Seifert reported in the Journal of Experimental Psychology the results of a series of experiments in which subjects read messages about a warehouse fire. For subjects reading messages mentioning that the fire started near a closet containing paint cans and pressurized gas cylinders, that information (predictably) encouraged them to infer a connection. When, five messages later, subjects received a correction that the closet was empty, they still answered questions about the fire by blaming burning paint for toxic fumes and citing negligence for keeping flammable objects nearby. (This shouldn’t be a surprise to anyone recognizing the futility of issuing a retraction after reporting a news story with a factual error.)

  Truthseeking, the desire to know the truth regardless of whether the truth aligns with the beliefs we currently hold, is not naturally supported by the way we process information. We might think of ourselves as open-minded and capable of updating our beliefs based on new information, but the research conclusively shows otherwise. Instead of altering our beliefs to fit new information, we do the opposite, altering our interpretation of that information to fit our beliefs.

  “They saw a game”

  As a college football season is about to close, all eyes are fixed on a fierce rivalry. The favorite, playing at home, has a twenty-two-game winning streak and is on the verge of completing a second consecutive undefeated season. The most emotional reception will be for Dick Kazmaier, the offensive star. One of the school’s all-time athletic heroes, he made the cover of Time and is in contention for All-American and other postseason honors. The visitors, however, have no intention of going down to defeat quietly. Although their record this season has been only average, they have a reputation for playing hard. Pulling off a stunning upset would be an unexpected treat.

  Welcome to Princeton’s Palmer Stadium, November 23, 1951. The Dartmouth-Princeton football game became famous: part of a historic rivalry, the end of an epoch in Ivy League sports, and the subject of a groundbreaking scientific experiment.

  First, the game. Princeton won, 13–0. The outcome was not in much doubt, but it was nevertheless a dirty, violent, penalty-laden game. Dartmouth received seventy yards in penalties, Princeton twenty-five. A fallen Princeton player got kicked in the ribs. One Dartmouth player broke a leg, and a second also suffered a leg injury. Kazmaier exited the game in the second quarter with a concussion and a broken nose. (He returned for the final play, earning a victory lap on his teammates’ shoulders. A few months later he became the last player from the Ivy League to win the Heisman Trophy.)

  Surprised by the ferocity of the editorials in both schools’ newspapers after the game, a pair of psychology professors saw the occasion as an opportunity to study how beliefs can radically alter the way we process a common experience. Albert Hastorf of Dartmouth and Hadley Cantril of Princeton collected the newspaper stories, obtained a copy of the game film, showed it to groups of students from their schools, and had them complete questionnaires counting and characterizing the infractions on both sides. Their 1954 paper, “They Saw a Game,” could have been called “They Saw Two Games” because students from each school, based on their questionnaires and accounts, seemed to be watching different games.

  Hastorf and Cantril collected anecdotal evidence of this in the lively accounts and editorials of the Dartmouth-Princeton game in local newspapers. The Daily Princetonian said, “Both teams were guilty but the blame must be laid primarily on Dartmouth’s doorstep.” The Princeton Alumni Weekly called out Dartmouth for a late hit on the play that ended Kazmaier’s college career and for kicking a prone Princeton player in the ribs. Meanwhile, an editorial in the Dartmouth placed heavy blame on Princeton coach Charley Caldwell. After the injury to the “Princeton idol,” “Caldwell instilled the old see-what-they-did-go-get-them attitude into his players. His talk got results,” the editorial asserted, referring to the pair of Dartmouth players suffering leg injuries in the third quarter. In the next issue of the Dartmouth, the paper listed star players from the opposing team that Princeton had stopped by a similar “concentrated effort.”

  When the researchers showed groups of students the film of the game and asked them to fill out the questionnaires, the same difference of opinion about what they had seen appeared. Princeton students saw Dartmouth commit twice as many flagrant penalties and three times as many mild penalties as Princeton. Dartmouth students saw each team commit an equal number of infractions.

  Hastorf and Cantril concluded, “We do not simply ‘react to’ a happening. . . . We behave according to what we bring to the occasion.” Our beliefs affect how we process all new things, “whether the ‘thing’ is a football game, a presidential candidate, Communism, or spinach.”

  A study in the 2012 Stanford Law Review called “They Saw a Protest” (the title is a homage to the original Hastorf and Cantril experiment) by Yale professor of law and psychology Dan Kahan, a leading researcher and analyst of biased reasoning, and four colleagues reinforces this notion that our beliefs drive the way we process information.

  In the study, two groups of subjects watched a video of police action halting a political demonstration. One group was told the protest occurred outside an abortion clinic, aimed at protesting legalized abortion. Another group was told it occurred at a college career-placement facility, where the military was conducting interviews and protestors were demonstrating against the then-existing ban on openly gay and lesbian soldiers. It was the same video, carefully edited to blur or avoid giving away the subject of the actual protest. Researchers, after gathering information about the worldviews of the subjects, asked them about facts and conclusions from what they saw.

  The results mirrored those found by Hastorf and Cantril nearly sixty years before: “Our subjects all viewed the same video. But what they saw—earnest voicing of dissent intended only to persuade, or physical intimidation calculated to interfere with the freedom of others—depended on the congruence of the protestors’ positions with the subjects’ own cultural values.” Whether it is a football game, a protest, or just about anything else, our pre-existing beliefs influence the way we experience the world. That those beliefs aren’t formed in a particularly orderly way leads to all sorts of mischief in our decision-making.

  The stubbornness of beliefs

  Flaws in forming and updating beliefs have the potential to snowball. Once a belief is lodged, it becomes difficult to dislodge. It takes on a life of its own, leading us to notice and seek out evidence confirming our belief, rarely challenge the validity of confirming evidence, and ignore or work hard to actively discredit information contradicting the belief. This irrational, circular information-processing pattern is called motivated reasoning. The way we process new information is driven by the beliefs we hold, strengthening them. Those strengthened beliefs then drive how we process further information, and so on.

  During a break in a poker tournament, a player approached me for my opinion about how he played one of those suited-connector hands. I didn’t witness the hand, and he gave me a very abbreviated description of how he stealthily played the six and seven of diamonds to make a flush on the second-to-last card but “had the worst luck” when the other player made a full house on the very last card.

  We had only a minute or two left in the break, so I asked what I thought to be the most relevant question: “Why were you playing six-seven of diamonds in the first place?” (Even a brief explanation, I expected, would fill in details on many of the areas that determine how to play a hand like that and whether it was a profitable choice, such as table position, pot size, chip stack sizes, his opponent’s style of play, how the table perceived his style, etc.)

  His exasperated response was, “That’s not the point of the story!” Motivated reasoning tells us it’s not really the point of anyone’s story.

  It doesn’t take much for any of us to believe something. And once we believe it, protecting that belief guides how we treat further information relevant to the belief. This is perhaps no more evident than in the rise in prominence of “fake news” and disinformation. The concept of “fake news,” an intentionally false story planted for financial or political gain, is hundreds of years old. It has included such legendary practitioners as Orson Welles, Joseph Pulitzer, and William Randolph Hearst. Disinformation differs from fake news in that the story has some true elements, embellished to spin a particular narrative. Fake news works because people who already hold beliefs consistent with the story generally won’t question the evidence. Disinformation is even more powerful because the confirmable facts in the story make it feel like the information has been vetted, adding to the power of the narrative being pushed.

  Fake news isn’t meant to change minds. As we know, beliefs are hard to change. The potency of fake news is that it entrenches beliefs its intended audience already has, and then amplifies them. The Internet is a playground for motivated reasoning. It provides the promise of access to a greater diversity of information sources and opinions than we’ve ever had available, yet we gravitate toward sources that confirm our beliefs, that agree with us. Every flavor is out there, but we tend to stick with our favorite.

  Making matters worse, many social media sites tailor our Internet experience to show us more of what we already like. Author Eli Pariser developed the term “filter bubble” in his 2011 book of the same name to describe the process of how companies like Google and Facebook use algorithms to keep pushing us in the directions we’re already headed. By collecting our search and browsing data, along with similar data from our friends and correspondents, they serve us headlines and links that cater to what they’ve divined as our preferences. The Internet, which gives us access to a diversity of viewpoints with unimaginable ease, in fact speeds our retreat into a confirmatory bubble. No matter our political orientation, none of us is immune.

  The most popular websites have been doing our motivated reasoning for us.*

  Even when directly confronted with facts that disconfirm our beliefs, we don’t let facts get in the way. As Daniel Kahneman pointed out, we just want to think well of ourselves and feel that the narrative of our life story is a positive one. Being wrong doesn’t fit into that narrative. If we think of beliefs as only 100% right or 100% wrong, when confronting new information that might contradict our belief, we have only two options: (a) make the massive shift in our opinion of ourselves from 100% right to 100% wrong, or (b) ignore or discredit the new information. It feels bad to be wrong, so we choose (b). Information that disagrees with us is an assault on our self-narrative. We’ll work hard to swat that threat away. On the flip side, when additional information agrees with us, we effortlessly embrace it.

  How we form beliefs, and our inflexibility about changing our beliefs, has serious consequences because we bet on those beliefs. Every bet we make in our lives depends on our beliefs: who we believe will make the best president, if we think we will like Des Moines, if we believe a low-fat diet will make us healthier, or even if we believe turkeys can fly.

  Being smart makes it worse

  The popular wisdom is that the smarter you are, the less susceptible you are to fake news or disinformation. After all, smart people are more likely to analyze and effectively evaluate where information is coming from, right? Part of being “smart” is being good at processing information, parsing the quality of an argument and the credibility of the source. So, intuitively, it feels like smart people should have the ability to spot motivated reasoning coming and should have more intellectual resources to fight it.

  Surprisingly, being smart can actually make bias worse. Let me give you a different intuitive frame: the smarter you are, the better you are at constructing a narrative that supports your beliefs, rationalizing and framing the data to fit your argument or point of view. After all, people in the “spin room” in a political setting are generally pretty smart for a reason.

  In 2012, psychologists Richard West, Russell Meserve, and Keith Stanovich tested the blind-spot bias—an irrationality where people are better at recognizing biased reasoning in others but are blind to bias in themselves. Overall, their work supported, across a variety of cognitive biases, that, yes, we all have a blind spot about recognizing our biases. The surprise is that blind-spot bias is greater the smarter you are. The researchers tested subjects for seven cognitive biases and found that cognitive ability did not attenuate the blind spot. “Furthermore, people who were aware of their own biases were not better able to overcome them.” In fact, in six of the seven biases tested, “more cognitively sophisticated participants showed larger bias blind spots.” (Emphasis added.) They have since replicated this result.

  Dan Kahan’s work on motivated reasoning also indicates that smart people are not better equipped to combat bias—and may even be more susceptible. He and several colleagues looked at whether conclusions from objective data were driven by subjective pre-existing beliefs on a topic. When subjects were asked to analyze complex data on an experimental skin treatment (a “neutral” topic), their ability to interpret the data and reach a conclusion depended, as expected, on their numeracy (mathematical aptitude) rather than their opinions on skin cream (since they really had no opinions on the topic). More numerate subjects did a better job at figuring out whether the data showed that the skin treatment increased or decreased the incidence of rashes. (The data were made up, and for half the subjects, the results were reversed, so the correct or incorrect answer depended on using the data, not the actual effectiveness of a particular skin treatment.)

  When the researchers kept the data the same but substituted “concealed-weapons bans” for “skin treatment” and “crime” for “rashes,” now the subjects’ opinions on those topics drove how subjects analyzed the exact same data. Subjects who identified as “Democrat” or “liberal” interpreted the data in a way supporting their political belief (gun control reduces crime). The “Republican” or “conservative” subjects interpreted the same data to support their opposing belief (gun control increases crime).

  That generally fits what we understand about motivated reasoning. The surprise, though, was Kahan’s finding about subjects with differing math skills and the same political beliefs. He discovered that the more numerate people (whether pro- or anti-gun) made more mistakes interpreting the data on the emotionally charged topic than the less numerate subjects sharing those same beliefs. “This pattern of polarization . . . does not abate among high-Numeracy subjects. Indeed, it increases.” (Emphasis in original.)

  It turns out the better you are with numbers, the better you are at spinning those numbers to conform to and support your beliefs.

 
