The Confidence Game

by Maria Konnikova


  All the same, the teachers were soon seeing evidence of great intellectual promise from their “special” students. They were more curious, caught on to things faster, made fewer mistakes. In what Rosenthal and Jacobson named the Pygmalion effect, a form of self-fulfilling prophecy, the spurters had, by the end of the year, indeed spurted past the other children. They were expected to do better, the teachers put more energy into teaching them—and, miracle of miracles, they did do better.

  While Rosenthal’s results are most often cited in the literature on self-fulfilling prophecies rather than confirmation bias, they illustrate one of the reasons the bias persists. First, it was selective information processing—classic confirmation—that caused the teachers to see students who were actually no different from their peers as somehow exceptional. It was all too easy for them to gather confirming instances of that superiority, and promptly forget the disconfirming ones. But then, the confirmation bias actually changed reality. The teachers managed to first reduce dissonance—between the students’ actual performance and their purported talent—by selectively analyzing information, and then to do what dissonance reduction almost never can: change the resulting actuality. In this case, their altered behavior was enough for things to shift in their expected direction: the world accommodated their false expectancy and made it true. Children at a malleable point in their intellectual development respond to the most minute nuance in their environment in seemingly outsized ways. Pay more attention to one, and she thrives. Ignore the advances of another, and she wilts. Teachers expected spurters to be special; they singled those spurters out, to the detriment of the rest of the class; and so belief changed reality. Because, in some instances, how we act does indeed end up affecting how we do, the confirmation bias persists despite its tremendous destructive potential. After all, thinking can indeed sometimes make it so.

  Was it so crazy for Norfleet to think he could make back his investment? He had just been so successful, and Stetson was so good with the markets. It was all but a sure thing. That momentary loss was swiftly forgotten. The piles of ready cash loomed large.

  In the case of first and second graders, it can be relatively simple to read into behavior: it’s ambiguous enough, and they are still quite malleable. Tests aside, it can indeed be a matter of judgment as to who has higher potential—highly subjective judgment, but judgment all the same. Besides, for a teacher, being accurate in her evaluation of students isn’t inherently at a high premium. It’s not like she has money at stake. (The children, of course, are a different story. What was harmless for teachers was pernicious for them. One has to wonder how the non-spurting children turned out.) But what about more difficult cases, where the evidence is both clearer and more personally important? Do people then really do the same thing—selectively evaluate evidence and declare themselves confident in their own accuracy despite contravening evidence? How is upping the ante even possible—if you make a mark lose, don’t you lose him for good? The breakdown seems doomed to fail by its very intent. As the saying goes, fool me once, shame on you; fool me twice . . . How, then, does it manage to succeed so spectacularly?

  In 1994, a group of psychologists at Columbia University decided to test a case where the accurate reading of evidence was the whole point: the reasoning of juries. Here’s what we hope happens as a jury decides a case. Jurors come in with a completely open mind and no prior knowledge of the case. They listen to the evidence, piece by piece, making notes on each separate fact they hear. Then they look at all the facts together and see what story—the defendant’s or the prosecutor’s—seems to have the most support. But even then they’re not done. Next, they focus in on the supported story, review every piece of information that doesn’t support it, and make sure none of those facts are game changers: the exclamation points in favor of the verdict still outnumber the possible question marks. Only then do they reach a decision.

  In reality, Deanna Kuhn and her colleagues found, events unfurl in a quite different way. First, members of a mock jury listened to an audio reenactment of opening and closing statements, witness and defendant cross-examination, and judge’s instructions to the jury for the case of Commonwealth of Massachusetts v. Johnson. Frank Johnson stood accused of first-degree murder. One afternoon, he had quarreled in a bar with Alan Caldwell. Things got heated. Caldwell took a razor from his pocket and made a threat: Johnson had better watch himself. Later that day, as afternoon wore into evening, the two men found themselves back at the bar. They decided to take things outside. No one is quite sure what exactly went on out there, but the outcome was clear: Johnson knifed Caldwell, and Caldwell was dead. Had Caldwell again pulled out his razor? Did Johnson actively seek to stab him, or had he merely pulled out a knife to show he, too, was armed? Had Johnson gone home in the interim with the explicit intention of getting his knife—and why had he decided to return to the bar? Why did the two men go out together in the first place, after an earlier fight? The questions loomed large.

  What verdict, Kuhn and her colleagues then asked, did the supposed jurors favor? What factors had gone into the choice? Was there any particularly influential evidence? How sure were they of their decision? And was there any evidence to suggest this verdict might not be the right one after all?

  The reasoning process, Kuhn found, was often the exact opposite of the ideal. Almost immediately, each juror had constructed a plausible story out of the events, spontaneously filling in uncertain holes to fit into the resulting narrative. Their “facts,” it turned out, diverged quite substantially. “Caldwell first hit him in the face and he [Johnson] fell to the floor and then Caldwell took out his razor,” wrote one juror. “So he [Johnson] thought he [Caldwell] would stab him, so he had to take out his fishing knife to defend himself.” Or another: “Because Caldwell was threatening him before and later during the day and attacked him in the evening. So what he was trying to do was to defend himself from that. He just walked with the knife like he was going fishing or something like that. So, since he drew out the razor from his pocket and started to . . . you know, he was trying to defend himself so he takes a knife to defend himself.” They had supplied a lot of those “facts” themselves, where actual factual evidence was scarce. But in their minds, their story was the story.

  Fewer than 40 percent of the mock jurors had even generated any spontaneous counterargument to their position—and the counterarguments, both spontaneous and prompted, were, in the majority of cases, not even real counterarguments. Two thirds simply presented evidence for another verdict, rather than evidence against this particular one. In other words, truly disconfirming evidence wasn’t even considered in the vast majority of cases.

  What’s more, even though there was no clear consensus as to the proper verdict—that is, the data were ambiguous enough that multiple decisions were possible—most jurors were highly confident that they’d chosen the “right” one. Support for the verdicts was just about evenly split, with 50 percent of jurors opting for either first-degree murder or self-defense, and 48 percent for manslaughter or second-degree murder. Yet confidence ran high: just about two thirds of the jurors reported either high or very high certainty in their choice.

  Kuhn’s subjects had included a wide range of ages, educational levels, backgrounds, communities, and professions. Yet for all of them, the confirmation bias loomed large: a plausible story, followed by selective weighing of evidence, with pieces that fit the bill carefully inserted into their proper place, and those that didn’t promptly discarded. In a jury, the motivation to be accurate couldn’t be higher: lives are being made or broken by your actions. But the person who wins the case need not have the best evidence, simply the best story, the story that most vividly catches a juror’s fancy. A good enough story—or one that successfully shows why the other guy’s yarn just doesn’t hold up—can trump any evidence that might later follow. That’s why the well-executed breakdown, instead of ending the game then and there, actually takes it to the next level. We’ve already heard the tale, and so, hot off the convincer, our confirmation bias is going strong: the evidence seems off, but our confirmatory tendency dismisses it and we commit ourselves to the story even further. We are simply too far along to perform an objective evaluation.

  Moe Levine was a legendary trial lawyer. Throughout the 1960s, right up until his death in 1974, he represented dozens of clients in injury lawsuits, using an approach he termed the “whole man.” You cannot injure a part of a man, the logic goes; you only injure the whole man. A life is simply never the same after a serious injury. That philosophy colored his entire approach to trials—and earned him a reputation as one of the best speakers of his time. In a famous double-amputation case in which he was trying to win compensation for his client, he ended his closing argument with the following thought:

  As you know, about an hour ago we broke for lunch. I saw the bailiff come and take you all as a group to have lunch in the jury room. Then I saw the defense attorney, Mr. Horowitz. He and his client decided to go to lunch together. The judge and court clerk went to lunch. So, I turned to my client, Harold, and said, “Why don’t you and I go to lunch together?” We went across the street to that little restaurant and had lunch. (Significant pause.) Ladies and gentlemen, I just had lunch with my client. He has no arms. He has to eat like a dog. Thank you very much.

  According to reports at the time, he won one of the largest settlements in New York history.

  And that is how the breakdown is possible. It isn’t about the objective evidence in front of you, whether you’re deciding if a financial loss is evidence of a scam or if an injury qualifies for compensation. Moe Levine could have rebutted many a fact on the emotional strength of this story, just as Stetson and Spencer could explain away any loss through a compelling narrative. Confidence men are master storytellers, so by the time things appear to be getting dicey, they are perfectly placed to make us believe ever more strongly in their fiction rather than walk away, as, by any sane estimation, we should. They don’t just tell the original tale; they know how to make even the most dire-seeming evidence against them look more like evidence in favor of their essential trustworthiness and their chosen scheme’s essential brilliance.

  In the case of Norfleet, what he already knew about Stetson and Spencer—that they were honest, had helped him in the past, had made him money, and had offered to buy his land—affected how he would see the first of what could, only in retrospect, be called red flags: the moment when Spencer not only lost Stetson’s ticket, but then made an elementary error in writing a new one. Norfleet had already formed a very specific expectancy: Stetson is a financial wizard and has fail-safe ways to earn me money—and he has asked for nothing in return. And Spencer is a man very much like himself, who has charmed his wife, told his boy he was buying the farm, and put in a good-faith show with cash of his own. So was this likely to be a ploy or an honest error, one that Stetson would now honestly try to fix? The story seemed solid enough, unlikely to change in midstride. After all, it was so persuasively told, with plenty of evidence to back it up from the start.

  “The human understanding when it has once adopted an opinion (either as being the received opinion or as being agreeable to itself) draws all things else to support and agree with it,” wrote Bacon. “And though there be a greater number and weight of instances to be found on the other side, yet these it either neglects and despises, or else by some distinction sets aside and rejects; in order that by this great and pernicious predetermination the authority of its former conclusions may remain inviolate.” As the scientific evidence has come in, it has only made his point all the stronger.

  * * *

  Back in Fort Worth, the men reconvened. They had collectively raised $70,000, still $10,000 short of their $80,000 guarantee. Not to worry. Stetson would just run this over to the exchange against their debt.

  But here, Norfleet paused. He was no sucker. He was not parting with his money until it was all there and he knew for a fact where it was going. Stetson reassured him, tucking the money under his arm and walking out the door.

  Not so fast. Now Norfleet drew a gun. A double-action Smith & Wesson. This was real cash. He was not about to watch it walk out the door in circumstances he didn’t fully grasp.

  This was not how Stetson conducted business. Disgust painting his face, he threw the money back on the bed. “Take the money and go to hell with it,” he spit out, “if you can’t stand by the agreement we made.”

  Norfleet was never one to bail on a deal. His word was good. Theirs, however, he was beginning to doubt. “You’re partners,” he told them. “And crooks of the first class.”

  Spencer began to sob. Stetson, meanwhile, looked Norfleet straight in the face and made a gesture: the grand hailing sign of distress of a Master Mason. Not a sign to be used lightly. Norfleet replaced his gun.

  “Brother,” Stetson addressed him, a smile on his face. “You know I have trusted you with $60,000 and $70,000 in your room overnight, and not once did I question your honesty.” He continued, “When I started away with this money I only thought I was doing what had been agreed on.”

  Spirits calmer, the three men once more sat down. Spencer would raise the $10,000 balance, they agreed. He’d wire the amount to Norfleet. And together Stetson and Norfleet would go to the exchange and collect the $160,000. That settled, Spencer departed for Austin, where he would sell some Liberty Bonds to raise the missing cash, and Stetson, $70,000 in tow, departed for Dallas, to give him time to confirm the bid at the Dallas exchange. He was to meet Norfleet at ten sharp the following morning, at the Cadillac Hotel.

  Norfleet arrived at half past nine; he didn’t want to miss Stetson. Ten o’clock came and went. Eleven. Norfleet grew anxious. Leaving a note with the clerk, he walked from hotel to hotel in his search for Stetson. Maybe he had somehow ended up in the wrong place? He returned to the Cadillac. No, sir; no one answering to Stetson had called in the meantime. It was then that Norfleet realized that he’d lost not only his life savings but also the buyer of his land. He wasn’t just $45,000 poorer; he was also in debt to the tune of $90,000. How would he pay Slaughter for the ranch? He’d been swindled not once, but twice—even when his gut had told him something might be off. How could that happen to a man like him? How could someone so famed for his business acumen have become a laughingstock—one the press would soon dub the “Boomerang Sucker”? It was the breakdown at its finest.

  * * *

  When reality pulls a one-eighty from expectancy, being selective in our perceptions isn’t the only strategy open to us. As Festinger argued, we can also change our prior beliefs. We can, in essence, revise history.

  Hindsight is always twenty-twenty, as the saying goes. And even though we often utter those words with a wry smile to justify a silly-seeming error, we don’t tend to realize that, just as often, we revise our own memories of what came before so that it’s not just hindsight that’s twenty-twenty; it’s as if we’d expected events to unfurl in a certain way all along. I knew she was up to no good. I knew he was pulling my leg. I knew he was going to call that shot. I knew, I knew, I knew. Yet if we had actually known, wouldn’t we have acted quite differently? “Within an hour of the market closing every day, experts can be heard on the radio explaining with high confidence why the market acted as it did,” says Kahneman. “A listener could well draw the incorrect inference that the behavior of the market was so reasonable that it could have been predicted earlier in the day.”

  It was early 1972, and President Nixon was in the final stages of preparation for his trip to China. It would be a historic moment, all knew, but no one was sure precisely how. The media was filled with various predictions. Would the visit be a success? What would be accomplished? What would be discussed? For Baruch Fischhoff and his Hebrew University colleague Ruth Beyth, this was just the opportunity they had been waiting for. For several years, they had been studying the nature of our judgments before and after the fact. They’d found something they called creeping determinism—a determinism that crept backward from knowledge to prior belief. Never before, though, had they had a chance to conduct such a precise test of their theory, where not only could predictions be made in real time, but they could then be verified, and memory retested.

  One afternoon, the two psychologists asked students in their classes to make a few predictions. President Nixon was about to go to China, they explained. Here were a few possible directions the visit might take: the United States will establish a permanent diplomatic mission in Beijing, but not grant diplomatic recognition; President Nixon will meet Mao at least once; and so on. How likely did they think each of them was to happen, from zero (no chance) to one hundred (certain to happen)? Two weeks later, Nixon’s trip now concluded, they again handed out some questionnaires. Now, though, they asked the students to do something slightly different: reconstruct their earlier answers—that is, pick the same likelihood for each event that they had picked two weeks prior. They also asked them how closely they had been following the news and, for each event, whether they knew the actual outcome.

 
