What Intelligence Tests Miss


by Keith E Stanovich


  10. Much research on the so-called bias blind spot is quite recent (Ehrlinger, Gilovich, and Ross, 2005; Pronin, 2006; Pronin, Lin, and Ross, 2002).

  11. The illusion of control is described in Langer (1975). The study of the traders is reported by Fenton-O’Creevy, Nicholson, Soane, and Willman (2003).

  12. The e-mail communication studies were conducted by Kruger, Epley, Parker, and Ng (2005).

  13. On feature creep and feature fatigue, see Rae-Dupree (2007) and Surowiecki (2007).

  14. But the designers are not solely to blame here. As in many areas of human affairs (see Gilbert, 2006), at the time they are choosing an electronic device, people do not know what will make them happy when they use it. Surowiecki (2007) discusses research indicating that people often think that more features will make them happier and thus prefer feature-laden products only to get the product home and find out that what they really wanted was simplicity. That many people really do want simplicity at the time of use is indicated by a study in which it was found that individuals returning an electronic device because it was too complicated spent just twenty minutes with it before giving up.

  15. Several sources review aspects of the myside processing literature (Baron, 1995, 2000; Kunda, 1990, 1999; Mele, 2001; Molden and Higgins, 2005; Perkins et al., 1991; Thagard, 2006).

  16. To summarize the individual differences research, intelligence differences in myside bias in the Ford Explorer–type problem are virtually nonexistent (Stanovich and West, 2007, 2008a). In the argument generation paradigms, they are also nonexistent (Macpherson and Stanovich, 2007; Toplak and Stanovich, 2003). Very low correlations between intelligence and myside bias are obtained in experiment evaluation paradigms (Klaczynski and Lavallee, 2005; Klaczynski and Robinson, 2000; Macpherson and Stanovich, 2007). Certain aspects of myside processing in the Kuhnian interview paradigm show modest relations with intelligence, but many others do not (Sá et al., 2005). Moderate (negative) correlations have been found between overconfidence effects and intelligence (Bruine de Bruin et al., 2007; Pallier, Wilkinson, Danthiir, Kleitman, Knezevic, Stankov, and Roberts, 2002; Parker and Fischhoff, 2005; Stanovich and West, 1998c).

  9 A Different Pitfall of the Cognitive Miser

  1. See Gladwell (2000).

  2. On multiple-minds views of cognition and the concept of cognitive override, see Chapter 3, Evans (2003, 2007), and Stanovich (2004).

  3. There are discussions of the trolley problem and its philosophical and psychological implications in Foot (1967); Hauser (2006); Mikhail (2007); Petrinovich et al. (1993); Thomson (1976, 1985, 1990); Unger (1996); and Waldmann and Dietrich (2007). Greene’s work is described in several sources (Greene, 2005; Greene, Nystrom, Engell, Darley, and Cohen, 2004; Greene, Sommerville, Nystrom, Darley, and Cohen, 2001).

  4. The confabulatory tendencies of the conscious mind, as well as its tendency toward egocentric attribution, are discussed in, e.g., Calvin (1990); Dennett (1991, 1996); Evans and Wason (1976); Gazzaniga (1998); Johnson (1991); Moscovitch (1989); Nisbett and Ross (1980); Wegner (2002); T. Wilson (2002); Wolford, Miller, and Gazzaniga (2000); and Zajonc (2001); Zajonc and Markus (1982).

  5. The use of the term hot cognition for affect-laden cognition was the idea of psychologist Robert Abelson (Abelson, 1963; Roseman and Read, 2007). When the term cold cognition is used to label a task, it does not mean that emotion is totally absent, only that affect is much less involved than it is in situations characterized as involving hot cognition.

  6. Epstein has conducted several studies using the task (Denes-Raj and Epstein, 1994; Kirkpatrick and Epstein, 1992; Pacini and Epstein, 1999). For information on children’s responses to the task, see Kokis et al. (2002).

  7. There has been a substantial amount of work on syllogisms where the validity of the syllogism conflicts with the believability of the conclusion (see, e.g., De Neys, 2006; Dias, Roazzi, and Harris, 2005; Evans, 2002b, 2007; Evans, Barston, and Pollard, 1983; Evans and Curtis-Holmes, 2005; Evans and Feeney, 2004; Goel and Dolan, 2003; Markovits and Nantel, 1989; Sá et al., 1999; Simoneau and Markovits, 2003; Stanovich and West, 1998c).

  8. Several studies on individual differences in conflict-type syllogisms have been conducted in my laboratory (Kokis et al., 2002; Sá et al., 1999; Macpherson and Stanovich, 2007; Stanovich and West, 1998c, 2008a).

  9. See Ainslie (2001, 2005), Baumeister and Vohs (2003, 2007), Loewenstein, Read, and Baumeister (2003), Rachlin (2000), and Stanovich (2004).

  10. Delayed-reward paradigms have been much investigated in psychology (Ainslie, 2001; Green and Myerson, 2004; Kirby and Herrnstein, 1995; Kirby, Winston, and Santiesteban, 2005; Loewenstein et al., 2003; McClure, Laibson, Loewenstein, and Cohen, 2004; Rachlin, 1995, 2000). The example is from Herrnstein (1990). There is a large literature on so-called akrasia (weakness of the will) in philosophy (Charlton, 1988; Davidson, 1980; Stroud and Tappolet, 2003) and an equally large literature on problems of self-control in psychology, economics, and neurophysiology (Ainslie, 1992, 2001; Baumeister and Vohs, 2003, 2007; Berridge, 2003; Elster, 1979; Loewenstein et al., 2003; Mischel, Shoda, and Rodriguez, 1989; O’Donoghue and Rabin, 2000; Rachlin, 1995, 2000). Problems of behavioral regulation that characterize various clinical syndromes are also the subject of intense investigation (Barkley, 1998; Castellanos, Sonuga-Barke, Milham, and Tannock, 2006; Tannock, 1998).

  11. There are many versions of the bundling idea in the literature (Ainslie, 2001; Loewenstein and Prelec, 1991; Prelec and Bodner, 2003; Read, Loewenstein, and Rabin, 1999; Rachlin, 2000; but see Khan and Dhar, 2007).

  10 Mindware Gaps

  1. There is a substantial literature on the history of facilitated communication (Dillon, 1993; Gardner, 2001; Jacobson, Mulick, and Schwartz, 1995; Spitz, 1997; Twachtman-Cullen, 1997) and, by now, a number of studies showing it to be a pseudoscientific therapy (Burgess, Kirsch, Shane, Niederauer, Graham, and Bacon, 1998; Cummins and Prior, 1992; Hudson, Melita, and Arnold, 1993; Jacobson, Foxx, and Mulick, 2004; Mostert, 2001; Wegner, Fuller, and Sparrow, 2003). On autism, see Baron-Cohen (2005) and Frith (2003).

  2. My account of these two cases is taken from The Economist (January 24, 2004, p. 49), The Daily Telegraph (London) (June 12, 2003), The Times (London) (June 12, 2003), and Watkins (2000). On sudden infant death syndrome, see Hunt (2001) and Lipsitt (2003).

  3. The literature on heuristics and biases contains many such examples (e.g., Baron, 2000; Evans, 2007; Gilovich et al., 2002; Johnson-Laird, 2006; Kahneman and Tversky, 2000; Koehler and Harvey, 2004; Nickerson, 2004; Shafir, 2003; Sunstein, 2002; Tversky and Kahneman, 1974, 1983).

  4. On Thomas Bayes, see Stigler (1983, 1986). On the Bayesian formulas as commonly used in psychology, see Fischhoff and Beyth-Marom (1983).

  5. It is important to emphasize here a point that will become clear in later chapters. It is that the problems in probabilistic reasoning discussed in this chapter are not merely confined to the laboratory or to story problems of the type I will be presenting. They are not just errors in a parlor game. We will see in other examples throughout this book that the errors crop up in such important domains as financial planning, medical decision making, career decisions, family planning, resource allocation, tax policy, and insurance purchases. The extensive literature on the practical importance of these reasoning errors is discussed in a variety of sources (Åstebro, Jeffrey, and Adomdza, 2007; Baron, 1998, 2000; Belsky and Gilovich, 1999; Camerer, 2000; Chapman and Elstein, 2000; Dawes, 2001; Fridson, 1993; Gilovich, 1991; Groopman, 2007; Hastie and Dawes, 2001; Hilton, 2003; Holyoak and Morrison, 2005; Kahneman and Tversky, 2000; Koehler and Harvey, 2004; Lichtenstein and Slovic, 2006; Margolis, 1996; Myers, 2002; Prentice, 2003; Schneider and Shanteau, 2003; Sunstein, 2002, 2005; Taleb, 2001, 2007; Ubel, 2000).

  6. This probability is calculated using an alternative form of the Bayesian formula:

  P(H/D) = P(H)P(D/H)/[P(H)P(D/H) + P(~H)P(D/~H)]

  P(H/D) = (.5)(.99)/[(.5)(.99) + (.5)(.90)] = .5238
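The arithmetic above can be verified with a short script. This is just a sketch of the Bayesian formula from the text, with H the hypothesis, D the data, and the prior and likelihoods set to the values used in the example:

```python
# Posterior probability via Bayes' rule for a binary hypothesis:
# P(H|D) = P(H)P(D|H) / [P(H)P(D|H) + P(~H)P(D|~H)]

def posterior(p_h, p_d_given_h, p_d_given_not_h):
    """Return P(H|D), where P(~H) = 1 - P(H)."""
    numerator = p_h * p_d_given_h
    denominator = numerator + (1 - p_h) * p_d_given_not_h
    return numerator / denominator

# Values from the example: P(H) = .5, P(D|H) = .99, P(D|~H) = .90
print(round(posterior(0.5, 0.99, 0.90), 4))  # 0.5238
```

Note that even though the datum is far more likely under H than not (.99 vs. .90 in raw terms would suggest near certainty to many people), the posterior barely moves from the .5 prior, because the likelihood ratio .99/.90 is close to 1.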

  7. Doherty and Mynatt (1990).

  8. The covariation detection paradigm is described in a number of publications (e.g., Levin et al., 1993; Shanks, 1995; Stanovich and West, 1998d; Wasserman, Dorner, and Kao, 1990). Such errors have been found among medical personnel (Chapman and Elstein, 2000; Groopman, 2007; Kern and Doherty, 1982; Wolf, Gruppen, and Billi, 1985).

  9. The literature on the four-card selection task (Wason, 1966, 1968) has been reviewed in several sources (e.g., Evans, Newstead, and Byrne, 1993; Evans and Over, 2004; Manktelow, 1999; Newstead and Evans, 1995; Stanovich, 1999). There have been many theories proposed to explain why subjects respond to it as they do (Evans, 1972, 1996, 1998, 2006b, 2007; Hardman, 1998; Johnson-Laird, 1999, 2006; Klauer, Stahl, and Erdfelder, 2007; Liberman and Klar, 1996; Margolis, 1987; Oaksford and Chater, 1994, 2007; Sperber, Cara, and Girotto, 1995; Stenning and van Lambalgen, 2004). On confirmation bias in general, see Nickerson (1998).

  10. The task was originally presented in Wason (1960). As with the four-card selection task, there are alternative theories about why subjects perform poorly in the 2-4-6 task (Evans, 1989, 2007; Evans and Over, 1996; Gale and Ball, 2006; Klayman and Ha, 1987; Poletiek, 2001). As with the four-card selection task, though, regardless of which of these descriptive theories explains the poor performance on the task, it is clear from research that a concern for falsifiability would facilitate performance. The DAX/MED experiment is reported by Tweney, Doherty, Warner, and Pliske (1980).

  11. Versions of the problem are investigated in Casscells, Schoenberger, and Graboys (1978); Cosmides and Tooby (1996); Sloman, Over, Slovak, and Stibel (2003); and Stanovich and West (1999).

  12. Dawkins (1976) emphasizes the point I am stressing here: “Just as we may use a slide rule without appreciating that we are, in effect, using logarithms, so an animal may be pre-programmed in such a way that it behaves as if it had made a complicated calculation. . . . When a man throws a ball high in the air and catches it again, he behaves as if he had solved a set of differential equations in predicting the trajectory of the ball. He may neither know nor care what a differential equation is, but this does not affect his skill with the ball. At some subconscious level, something functionally equivalent to the mathematical calculations is going on” (p. 96).

  13. The Linda problem was first investigated by Tversky and Kahneman (1983). As with most of the tasks discussed in this book, the literature on it is voluminous (e.g., Dulany and Hilton, 1991; Girotto, 2004; Mellers, Hertwig, and Kahneman, 2001; Politzer and Macchi, 2000; Politzer and Noveck, 1991; Slugoski and Wilson, 1998). On inverting conditional probabilities, see Dawes (1988).

  14. On need for cognition, see Cacioppo et al. (1996). Our belief identification scale is described in Sá et al. (1999). The Matching Familiar Figures Test was developed by Kagan, Rosman, Day, Albert, and Phillips (1964).

  15. See the growing literature on the small but significant correlations between rational thinking mindware and intelligence (Bruine de Bruin et al., 2007; Kokis et al., 2002; Parker and Fischhoff, 2005; Sá et al., 1999; Stanovich and West, 1997, 1998c, 1998d, 1999, 2000, 2008b; Toplak et al., 2007; Toplak and Stanovich, 2002; West and Stanovich, 2003).

  16. In many situations, high-IQ people actually do not learn faster—or at least not uniformly so. Often, a better predictor of learning is what people already know in the relevant domain rather than how intelligent they are (Ceci, 1996; Hambrick, 2003).

  11 Contaminated Mindware

  1. My description of Ponzi schemes and the crisis in Albania is drawn from Bezemer (2001), Jarvis (2000), and Valentine (1998).

  2. Of course, such situations occur for a variety of reasons—many of them going beyond factors of individual cognition. Bezemer (2001) discusses many of the macroeconomic factors that contributed to the situation in Albania. To illustrate my point in this chapter, it is only necessary to acknowledge that irrational economic beliefs were one contributing factor in the Albania crisis.

  3. For my account of the recovered memory phenomenon, multiple personality disorder, and satanic ritual abuse, I have drawn on many sources (Brainerd and Reyna, 2005; Clancy, 2005; Hacking, 1995; Lilienfeld, 2007; Loftus and Guyer, 2002; Loftus and Ketcham, 1994; McNally, 2003; Nathan and Snedeker, 1995; Piper, 1998; Showalter, 1997). Multiple personality disorder is now termed dissociative identity disorder.

  4. The study is reported in Consumer Fraud Research Group (2006).

  5. These examples come from a variety of sources (e.g., Bensley, 2006; Brandon, 1983; Bulgatz, 1992; Dawes, 1988; Farias, 1989; Lehman, 1991; Lipstadt, 1994; Moore, 1977; Muller, 1991; Randi, 1980; Shermer, 1997; Stenger, 1990; Torrey, 1984).

  6. On the Nazi war criminals, see Lagerfeld (2004). On the doctoral degrees, see Gardner (1999, p. 205). On Holocaust deniers, see Lipstadt (1994).

  7. Stanovich (1999) used the term knowledge projection to classify an argument that recurs throughout many different areas of cognitive science (e.g., Dawes, 1989; Edwards and Smith, 1996; Koehler, 1993; Kornblith, 1993; Krueger and Zeiger, 1993; Mitchell, Robinson, Isaacs, and Nye, 1996). Evans, Over, and Manktelow (1993) use this argument to explain the presence of the belief bias effect in syllogistic reasoning. On knowledge assimilation, see Hambrick (2003).

  8. Rationalization tendencies have been discussed by many researchers (see Evans, 1996; Evans and Wason, 1976; Margolis, 1987; Nickerson, 1998; Nisbett and Wilson, 1977; Wason, 1969).

  9. A number of reasons why evolution does not guarantee human rationality have been discussed in the literature (Kitcher, 1993; Nozick, 1993; Over, 2002, 2004; Skyrms, 1996; Stanovich, 1999, 2004; Stein, 1996; Stich, 1990). Stich (1990), for example, discusses why epistemic rationality is not guaranteed. Regarding practical rationality, Skyrms (1996) devotes an entire book on evolutionary game theory to showing that the idea that “natural selection will weed out irrationality” (p. x) in the instrumental sense is false.

  10. I can only begin to cite this enormous literature (Ainslie, 2001; Baron, 2000; Brocas and Carrillo, 2003; Camerer, 1995, 2000; Camerer, Loewenstein, and Rabin, 2004; Dawes, 1998, 2001; Evans, 1989, 2007; Evans and Over, 1996, 2004; Gilovich, Griffin, and Kahneman, 2002; Johnson-Laird, 1999, 2006; Kahneman, 2003a, 2003b; Kahneman and Tversky, 1984, 2000; Koehler and Harvey, 2004; Lichtenstein and Slovic, 2006; Loewenstein et al., 2003; McFadden, 1999; Pohl, 2004; Shafir, 2003; Shafir and LeBoeuf, 2002; Stanovich, 1999, 2004; Tversky and Kahneman, 1983, 1986).

  11. The contributors in a volume edited by Aunger (2000) discuss these and other related definitions (see also Blackmore, 1999; Dennett, 1991, 1995, 2006; Distin, 2005; Gil-White, 2005; Hull, 2000; Laland and Brown, 2002; Lynch, 1996; Mesoudi, Whiten, and Laland, 2006). I prefer to view a meme as a brain control (or informational) state that can potentially cause fundamentally new behaviors and/or thoughts when replicated in another brain. Meme replication has taken place when control states that are causally similar to the source are replicated in the brain host of the copy. Although my definition of the meme follows from Aunger’s (2002) discussion, precision of definition is not necessary for my purposes here. A meme can simply be used to refer to an idea unit or a unit of cultural information.

  There are numerous other controversial issues surrounding memetic theory, for example: the falsifiability of the meme concept in particular applications, the extent of the meme/gene analogy, and how the meme concept differs from concepts of culture already extant in the social sciences. These debates in the science of memes are interesting, but they are tangential to the role that the meme concept plays in my argument. That role is simply and only to force on us one central insight: that some ideas spread because of properties of the ideas themselves. It is uncontroversial that this central insight has a different emphasis from the traditional default position in the social and behavioral sciences. In those sciences, it is usually assumed that to understand the beliefs held by particular individuals one should inquire into the psychological makeup of the individuals involved. It should also be noted that the term meme, for some scholars, carries with it connotations that are much stronger than my use of the term here. For example, Sperber (2000) uses the term meme not as a synonym for a cultural replicator in general, but as a cultural replicator “standing to be selected not because they benefit their human carriers, but because they benefit themselves” (p. 163). That is, he reserves the term for category 4 discussed later in this chapter. In contrast, my use of the term is more generic (as a synonym for cultural replicator) and encompasses all four categories listed below.

  12. On proximity and belief, see Snow, Zurcher, and Ekland-Olson (1980).

  13. In the literature, there are many discussions of evolutionary psychology (see Atran, 1998; Sperber, 1996; Tooby and Cosmides, 1992) and gene/culture coevolution (Cavalli-Sforza and Feldman, 1981; Durham, 1991; Gintis, 2007; Lumsden and Wilson, 1981; Richerson and Boyd, 2005).

  14. See Blackmore (1999) and Lynch (1996).

  15. On the implications of Universal Darwinism, see Aunger (2002), Dennett (1995), Hamilton (1996), and Stanovich (2004).

  16. Numerous sources have documented the education of the terrorists (Benjamin and Simon, 2005; Caryl, 2005; Dingfalter, 2004; Krueger, 2007; Laqueur, 2004; McDermott, 2005).

  17. The argument is not, of course, that the memeplex supporting this particular terrorist act falls solely in category 4 discussed above. Most memeplexes combine properties of several categories. The point is only that there are some strong self-propagating properties in this memeplex, and that this fact forces us to look to the history and logic of these self-propagating properties rather than to a rational calculus based on the assumption that it serves only the interests of the host. The issue here is one that I have previously termed “leveling the epistemic playing field” (Stanovich, 2004). It is a matter of establishing that the assumption that this memeplex is solely self-propagating is no more extreme than the assumption that it must be serving the interests of the host. Many memeplexes combine the two, and I am simply suggesting that the properties of this memeplex suggest that it is balanced in favor of the former.
