15. Seventy-one percent of men and 66 percent of women believe they have above-average intelligence (M. Campbell, “100% Canadian,” The Globe and Mail, December 30, 2000). Evidence that drivers think they are better than average is from O. Svenson, “Are We All Less Risky and More Skillful Than Our Fellow Drivers?” Acta Psychologica 47 (1981): 143–148. This study also included a group of American students, who were slightly more confident in their abilities than their Swedish counterparts: 93 percent thought they were more skillful than 50 percent of their peers, and 88 percent thought they were safer. Evidence about self-judged attractiveness comes from a study of college students in which men judged themselves to be about 15 percent more attractive than they actually were. Women viewed themselves as slightly less attractive than they actually were, although both men and women viewed themselves as above average in attractiveness (the women in the study were judged to be somewhat further above average in attractiveness than the men). See M. T. Gabriel, J. W. Critelli, and J. S. Ee, “Narcissistic Illusions in Self-Evaluations of Intelligence and Attractiveness,” Journal of Personality 62 (1994): 143–155. Interestingly, a meta-analysis of a number of studies that measured the relationship between self-rated attractiveness and actual attractiveness (as rated by others) showed only a small relationship. In other words, how attractive you judge yourself to be is only slightly related to how attractive others think you are. See A. Feingold, “Good-Looking People Are Not What We Think,” Psychological Bulletin 111 (1992): 304–311.
16. This belief in one’s own incompetence, despite all external evidence to the contrary, is sometimes known as the “Impostor Syndrome.” See M. E. Silverman, Unleash Your Dreams: Tame Your Hidden Fears and Live the Life You Were Meant to Live (New York: Wiley, 2007), 73–75; M. F. R. Kets de Vries, “The Dangers of Feeling Like a Fake,” Harvard Business Review (2005).
17. In the Kruger and Dunning study, the top 25 percent of subjects in sense of humor were, on average, funnier than 87.5 percent of the study participants (because the subjects occupied the 75–100th percentiles of the sense of humor distribution, and the midpoint of that range is 87.5). However, these subjects estimated, on average, that they were funnier than just 70 percent of their peers, indicating an average underconfidence of 17.5 percent.
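To make the arithmetic in this note explicit, using only the numbers already reported above (the true mean percentile of the top quartile, and the group’s average self-estimate):
\[
\frac{75 + 100}{2} = 87.5, \qquad 87.5 - 70 = 17.5 .
\]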
18. D. Baird, A Thousand Paths to Confidence (London: Octopus, 2007), 10.
19. R. M. Kanter, Confidence: How Winning Streaks and Losing Streaks Begin and End (New York: Crown Business, 2004), 6.
20. A. Tugend, “Secrets of Confident Kids,” Parents, May 2008, pp. 118–122.
21. A transcript and video recording of the so-called malaise speech can be found at the Miller Center of Public Affairs website (millercenter.org/scripps/archive/speeches/detail/3402).
22. The story of Carter’s speech, its political context, and the response to it is told in K. Mattson, “What the Heck Are You Thinking, Mr. President?” Jimmy Carter, America’s “Malaise,” and the Speech That Should Have Changed the Country (New York: Bloomsbury, 2009).
23. J. B. Stewart, Den of Thieves (New York: Simon & Schuster, 1991), 117, 206; J. Kornbluth, Highly Confident: The Crime and Punishment of Michael Milken (New York: Morrow, 1992).
24. The conversation between Tenet and Bush was reported in B. Woodward, Plan of Attack (New York: Simon & Schuster, 2004), 249. Fleischer’s quote is from a White House press conference, April 10, 2003, www.whitehouse.gov/news/releases/2003/04/20030410–6.html (accessed July 2006). Evidence about the absence of WMDs comes from Comprehensive Report of the Special Advisor to the DCI on Iraq’s WMD (also known as the “Duelfer Report”) (https://www.cia.gov/library/reports/general-reports-l/iraq_wmd_2004/index.html).
25. This is not as unusual a decision process as you might think. The U.S. Supreme Court uses it during the conferences that follow oral arguments in its cases: The Chief Justice states his views on the case, followed by the other justices, from the most to least senior. An advantage of this process is that it ensures that everyone gets to speak, and in the case of tough-minded federal judges who are appointed for life, it probably does more good than harm. When some group members are clearly subordinate to others, though, it is a recipe for bad outcomes. The Supreme Court’s decision-making process is described in W. H. Rehnquist, The Supreme Court: How It Was, How It Is (New York: William Morrow, 1987).
26. In his book The Wisdom of Crowds (New York: Doubleday, 2004), James Surowiecki reviews over a century of work, dating back to Sir Francis Galton, showing that the average of independent guesses comes closer to the actual total than the vast majority of the individual estimates that make up the average.
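As a rough illustration of the statistical effect Surowiecki describes (this is not a simulation from his book; the true value, number of guessers, and noise level below are invented for the example, and the effect assumes guesses that are independent and not systematically biased):

```python
import random

random.seed(1)  # reproducible illustration

true_value = 1000  # hypothetical quantity being estimated (e.g., beans in a jar)
# 100 independent, noisy guesses centered on the true value
guesses = [true_value + random.gauss(0, 300) for _ in range(100)]

crowd_average = sum(guesses) / len(guesses)
crowd_error = abs(crowd_average - true_value)

# How many individuals came closer to the truth than the crowd's average guess?
closer_individuals = sum(1 for g in guesses if abs(g - true_value) < crowd_error)

print(f"error of the average guess: {crowd_error:.1f}")
print(f"individuals closer than the average: {closer_individuals} of {len(guesses)}")
```

Under these assumptions, only a small minority of individual guesses typically beat the group average, which is the pattern Galton and his successors observed.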
27. From a discussion Chris had with Richard Hackman on April 27, 2009.
28. C. Anderson and G. J. Kilduff, “Why Do Dominant Personalities Attain Influence in Face-to-Face Groups? The Competence-Signaling Effects of Trait Dominance,” Journal of Personality and Social Psychology 96 (2009): 491–503. In a second experiment, similar results were obtained with a more realistic, open-ended group task that involved simulated business decision-making.
29. Information on William Thompson comes from Wikipedia, en.wikipedia.org/wiki/William_Thompson_(confidence_man) (accessed May 2, 2009); and from the article “Arrest of the Confidence Man,” New-York Herald, July 8, 1849, chnm.gmu.edu/lostmuseum/lm/328/ (accessed May 2, 2009).
30. The story of Frank Abagnale is based on Wikipedia, en.wikipedia.org/wiki/Frank_Abagnale (accessed May 2, 2009); and on his memoir: F. W. Abagnale and S. Redding, Catch Me If You Can (New York: Grosset & Dunlap, 1980).
31. The experiments described here are reported in C. F. Chabris, J. Schuldt, and A. W. Woolley, “Individual Differences in Confidence Affect Judgments Made Collectively by Groups” (poster presented at the annual convention of the Association for Psychological Science, New York, May 25–28, 2006).
32. In an experiment with 61 subjects, confidence levels between the two test versions were correlated (r = .80) but accuracy was not (r = -.05). In another experiment with 72 subjects, confidence correlated only r = .12 with scores on a twelve-item version of Raven’s Advanced Progressive Matrices, a nonverbal “gold standard” measure of general cognitive ability. Earlier research by others indicated that confidence is a domain-general trait: G. Schraw, “The Effect of Generalized Metacognitive Knowledge on Test Performance and Confidence Judgments,” Journal of Experimental Education 65 (1997): 135–146; A.-R. Blais, M. M. Thompson, and J. V. Baranski, “Individual Differences in Decision Processing and Confidence Judgments in Comparative Judgment Tasks: The Role of Cognitive Styles,” Personality and Individual Differences 38 (2005): 1707–1713.
33. Cesarini and colleagues found that genetic differences explain 16–34 percent of the differences among individuals in overconfidence. They studied 460 pairs of twins from the Swedish Twin Registry and asked them to estimate their cognitive abilities relative to the other subjects in the study. The difference between their estimated ranks and their actual ranks on a cognitive test was taken as a measure of overconfidence. D. Cesarini, M. Johannesson, P. Lichtenstein, and B. Wallace, “Heritability of Overconfidence,” Journal of the European Economic Association 7 (2009): 617–627.
34. Quotes and information in this section are from H. Cooper, C. J. Chivers, and C. J. Levy, “U.S. Watched as a Squabble Turned into a Showdown,” The New York Times, August 17, 2008, p. A1 (www.nytimes.com/2008/08/18/washington/18diplo.html). A detailed summary of the Russia–Georgia War is available on Wikipedia (en.wikipedia.org/wiki/2008_South_Ossetia_war).
35. D. D. P. Johnson, Overconfidence and War: The Havoc and Glory of Positive Illusions (Cambridge, MA: Harvard University Press, 2004).
36. A similar collective overconfidence might have contributed to the decision to invade Iraq in 2003. Richard Perle, then chairman of the Defense Policy Board, when interviewed later on the PBS series Wide Angle, noted the strong consensus within the Bush administration on the need to overthrow Saddam Hussein: “It’s not quite the case that the president has the only vote that counts, but his thumb on the scale is not insignificant. And I don’t think he’s meeting a lot of resistance, frankly. I think the other senior officials of the administration have come to the same conclusion he has.”
37. The average confidence of the individual subjects was 70 percent, and the average confidence of the groups was 74 percent, a small but statistically significant increase; 36 groups of two people each participated in this experiment, 12 in each of the three conditions (Chabris et al., “Individual Differences in Confidence”).
38. See “The Case of the Missing Evidence” (www.blog.sethroberts.net/2008/09/13/the-case-of-the-missing-evidence/).
39. C. G. Johnson, J. C. Levenkron, A. L. Sackman, and R. Manchester, “Does Physician Uncertainty Affect Patient Satisfaction?” Journal of General Internal Medicine 3 (1988): 144–149.
40. B. McKinstry and J. Wang, “Putting on the Style: What Patients Think of the Way Their Doctor Dresses,” British Journal of General Practice 41 (1991): 275–278; S. U. Rehman, P. J. Nietert, D. W. Cope, and A. O. Kilpatrick, “What to Wear Today? Effect of Doctor’s Attire on the Trust and Confidence of Patients,” The American Journal of Medicine 118 (2005): 1279–1286; and A. Cha, B. R. Hecht, K. Nelson, and M. P. Hopkins, “Resident Physician Attire: Does It Make a Difference to Our Patients?” American Journal of Obstetrics and Gynecology 190 (2004): 1484–1488. White lab coats also appear to be a source of infection: A. Treakle, K. Thom, J. Furuno, S. Strauss, A. Harris, and E. Perencevich, “Bacterial Contamination of Health Care Workers’ White Coats,” American Journal of Infection Control 37 (2009): 101–105.
41. Information on the Jennifer Thompson rape case is based primarily on judicial opinions in the case and on the following sources: J. M. Doyle, True Witness: Cops, Courts, Science, and the Battle Against Misidentification (New York: Palgrave Macmillan, 2005); an episode of the PBS series Frontline, “What Jennifer Saw,” broadcast February 25, 1997; a joint memoir, J. Thompson-Cannino, R. Cotton, and E. Torneo, Picking Cotton: Our Memoir of Injustice and Redemption (New York: St. Martin’s Press, 2009); and an article by Jennifer Thompson, “I Was Certain, But I Was Dead Wrong,” Houston Chronicle, June 20, 2000, www.commondreams.org/views/062500–103.htm (accessed May 3, 2009). Direct quotes are also drawn from these sources.
42. Neil v. Biggers, 409 U.S. 188 (1972).
43. Kassin and colleagues surveyed 63 expert-witness psychologists and found that 46 said the evidence for this statement was either “very” or “generally” reliable: S. M. Kassin, P. C. Ellsworth, and V. L. Smith, “The ‘General Acceptance’ of Psychological Research on Eyewitness Testimony: A Survey of the Experts,” American Psychologist 44 (1989): 1089–1098.
44. Innocence Project website, www.innocenceproject.org/understand/Eyewitness-Misidentification.php (accessed February 21, 2009).
45. R. C. L. Lindsay, G. L. Wells, and C. M. Rumpel, “Can People Detect Eyewitness-Identification Accuracy Within and Across Situations?” Journal of Applied Psychology 66 (1981): 79–89.
46. S. Sporer, S. Penrod, D. Read, and B. L. Cutler, “Choosing, Confidence, and Accuracy: A Meta-analysis of the Confidence-Accuracy Relation in Eyewitness Identification Studies,” Psychological Bulletin 118 (1995): 315–327. They report an average correlation across studies of r = .41 between witness confidence and accuracy in simulated lineup tasks (when the “witness” chooses someone from the lineup, which Jennifer Thompson did in the Ronald Cotton investigation, as opposed to choosing no one; i.e., claiming the perpetrator is not in the lineup).
47. G. L. Wells, E. A. Olson, and S. D. Charman, “The Confidence of Eyewitnesses in Their Identifications from Lineups,” Current Directions in Psychological Science 11 (2002): 151–154.
48. We are not claiming that physical evidence is always infallible. It can be relied on only to the extent that it is produced by honest, careful technicians applying valid science. That said, the forensic science behind such common techniques as hair and fiber analysis and fingerprint matching is surprisingly primitive (e.g., see National Research Council, Strengthening Forensic Science in the United States: A Path Forward [Washington, DC: National Academies Press, 2009]). Circumstantial evidence, which is often derided as being lower in value than direct evidence from eyewitnesses, can in fact be more reliable than any other kind of evidence—even a sworn confession—because it does not stand or fall based on a single disputable fact (e.g., whether a witness has good memory, or whether a confession was coerced). A good circumstantial case can be compelling because it involves a large number of circumstances that would be unlikely to all occur together by chance.
Chapter 4: Should You Be More Like a Weather Forecaster or a Hedge Fund Manager?
1. Basic facts about the Human Genome Project, which involved researchers in several countries, can be found at the U.S. Department of Energy (DOE) website devoted to the project (www.ornl.gov/sci/techresources/Human_Genome/home.shtml). The DOE was involved in biomedical research because of the recognition that radiation from nuclear weapons and other sources could affect human genes. The majority of the project’s funding, however, came from the budget of the National Institutes of Health (NIH).
2. The story of the gene count betting pool is based on a series of articles in Science magazine: E. Pennisi, “And the Gene Number Is …?” Science 288 (2000): 1146–1147; E. Pennisi, “A Low Number Wins the GeneSweep Pool,” Science 300 (2003): 1484; and E. Pennisi, “Working the (Gene Count) Numbers: Finally, a Firm Answer?” Science 316 (2007): 1113. Other sources include an Associated Press article from October 20, 2004 (reprinted at www.thescienceforum.com/Scientists-slash-estimated-number-of-human-genes-5t.php), and an article by Cold Spring Harbor Laboratory’s David Stewart, who maintained the official handwritten ledger in which all bets were recorded, www.cshl.edu/public/HT/ss03-sweep.pdf (accessed August 27, 2009). The pool’s defunct website has been archived at web.archive.org/web/20030424100755/www.ensembl.org/Genesweep/ (accessed August 27, 2009).
3. The prediction was made in a talk given by Herbert Simon on behalf of himself and Allen Newell at the National Meeting of the Operations Research Society of America on November 14, 1957: H. A. Simon and A. Newell, “Heuristic Problem Solving: The Next Advance in Operations Research,” Operations Research 6 (1958): 1–10. They also predicted that within ten years, computers would be proving important mathematical theorems and composing high-quality original music, and that most theories in psychology would be expressed in the form of computer programs designed to simulate the human mind. None of these things fully came to pass, though some progress was made on each of them.
4. Nowadays even laptop computers are the equal of the world’s top players. The history of the bets is described by D. Levy and M. Newborn, How Computers Play Chess (New York: Computer Science Press, 1991). The match between Kasparov and Deep Blue is recounted in the following works: M. Newborn, Deep Blue: An Artificial Intelligence Milestone (New York: Springer, 2003); F-H. Hsu, Behind Deep Blue: Building the Computer That Defeated the World Chess Champion (Princeton, NJ: Princeton University Press, 2002); and D. Goodman and R. Keene, Man Versus Machine: Kasparov Versus Deep Blue (Cambridge, MA: H3 Publications, 1997).
5. P. Ehrlich, The Population Bomb (New York: Ballantine, 1968).
6. Quoted by J. Tierney, “Science Adviser’s Unsustainable Bet (and Mine),” TierneyLab blog, December 23, 2008 (tierneylab.blogs.nytimes.com/2008/12/23/science-advisors-unsustainable-bet-and-mine/). Other information on the Ehrlich-Simon wager is drawn from the following sources: J. Tierney, “Betting on the Planet,” The New York Times, December 2, 1990; J. Tierney, “Flawed Science Advisor for Obama?” TierneyLab blog, December 19, 2008 (tierneylab.blogs.nytimes.com/2008/12/19/flawed-science-advice-for-obama/); and E. Regis, “The Doomslayer,” Wired, February 1997.
7. J. L. Simon, “Resources, Population, Environment: An Oversupply of False Bad News,” Science 208 (1980): 1431–1437.
8. We could have gone on and on with examples of scientific overconfidence; for example, even physicists have been found to be overconfident when historical data was examined to see how accurately they had measured well-known physical constants, like the speed of light: M. Henrion and B. Fischhoff, “Assessing Uncertainty in Physical Constants,” American Journal of Physics 54 (1986): 791–797.
9. R. Lawson, “The Science of Cycology: Failures to Understand How Everyday Objects Work,” Memory and Cognition 34 (2006): 1667–1675.
10. L. G. Rozenblit, “Systematic Bias in Knowledge Assessment: An Illusion of Explanatory Depth,” PhD dissertation, Yale University, 2003.
11. From an interview Dan conducted with Leon Rozenblit on August 14, 2008.
12. B. Worthen, “Keeping It Simple Pays Off for Winning Programmer,” The Wall Street Journal, May 20, 2008, p. B6 (online.wsj.com/article/SB121124841362205967.html).
13. Information on the Big Dig drawn primarily from the project’s official website (masspike.com/bigdig/index.html).
14. Information on the Brooklyn Bridge and Sydney Opera House is from B. Flyvbjerg, “Design by Deception: The Politics of Megaproject Approval,” Harvard Design Magazine, Spring/Summer 2005, pp. 50–59. Information on the Sagrada Familia is from R. Zerbst, Gaudi: The Complete Buildings (Hong Kong: Taschen, 2005) and from Wikipedia (en.wikipedia.org/wiki/Sagrada_Família). The entire history of public architecture can be seen as one of cost overruns and delays. Bent Flyvbjerg, an expert on urban planning at the University of Aalborg in Denmark, has coauthored a study of three hundred such projects in twenty countries. He argues persuasively that all parties involved have learned to deliberately lowball the estimates, because if legislators and their constituents appreciated the true costs and uncertainties involved in these projects, they would never support them. In other words, those who do understand the complex systems—or at least understand the limits of their own knowledge—are exploiting the very lack of that understanding among the general public. See B. Flyvbjerg, N. Bruzelius, and W. Rothengatter, Megaprojects and Risk: An Anatomy of Ambition (Cambridge: Cambridge University Press, 2003).