
What Intelligence Tests Miss


by Keith E. Stanovich


  As the example that ended the last section shows, the normal calculus of behavioral cause and effect does not apply when contaminated mindware is involved. The default assumption that people always act in their own interests (or in the interests of those they care about) does not apply in the case of contaminated mindware, which acts in its own interests—replication. This insight, an outgrowth of modern Universal Darwinism, has only recently been fully absorbed by society.15 Its parallel, the insight that genes do not necessarily serve the interests of their human hosts, was not brought to general public attention until Richard Dawkins synthesized a set of evolutionary insights in his famous 1976 book. The insight that cultural replicators (mindware) could likewise fail to serve the interests of the individual is even more recent, and it remains counterintuitive for some people.

  The counterintuitive nature of the insight is reflected in the difficulty people have in dropping the default assumption of rationality in their attempts to explain behavior. One of the most salient events of the twenty-first century provides a glaring example. In the near aftermath of the destruction of the World Trade Center on September 11, 2001, the First Lady of the United States, Laura Bush, was asked to comment on the event and, in the course of her answer, she mentioned the importance of education in preventing such tragedies. Interestingly, in an interview around the same time, the wife of the prime minister of Great Britain, Cherie Blair, also mentioned education as a preventative for events like those of September 11. However, commentators at the time, and the more comprehensive 9/11 Report three years later, pointed out the disturbing fact that the hijackers of the airplanes on September 11 were by no means uneducated.16 For example, Mohammed Atta, who piloted American Airlines Flight 11 after the hijacking and incinerated scores of people when he slammed the plane into the North Tower of the World Trade Center, had a degree in city engineering and planning.

  People have a hard time accepting such behavior from fully educated and intelligent people. Because people are rational, the thinking goes, there must be some critical thing that they didn’t know—some educational or informational gap that led to this behavior.17 The concept of contaminated mindware opens up for us another possibility—perhaps the terrorists had not too little mindware but, instead, too much. Specifically, a variety of pernicious parasite memes had infected the terrorists—the martyrdom meme and the meme for extravagant rewards in the afterlife, for example. The destruction of the World Trade Center has, sadly, helped many people understand this horrific logic of the virus meme that will replicate itself at any cost to human life. It has spawned a more explicit discussion of the danger of memes that become weapons because they commandeer their hosts so completely.

  Memeplexes like that exemplified in the Harper’s excerpt that ended the last section are not serving any rational human ends. Instead, they might be called deal breaker memes—memes that brook no compromise with their replication strategies. The reason such a property facilitates idea propagation follows from the principles of Universal Darwinism. A replicator increases in frequency along with increases in its fecundity, longevity, and copying fidelity. A cultural replicator has much lower copying fidelity than a gene. Segments of cultural replicators are constantly being mixed and matched as they jump from brain to brain. By refusing to enter the bouillabaisse that is human culture, deal breaker memes assure themselves a clean replication into the future. On a frequency-dependent basis, there is probably a niche for deal breaker memes. The important point for the discussion here is that such mindware will not display the flexibility that is necessary to serve human interests in a changing world. Deal breaker memes thus become prime candidates for contaminated mindware.

  Strategies for Avoiding Contaminated Mindware

  The previous discussion suggests that we need strategies for avoiding contaminated mindware. The following are some rules for avoiding such mindware:

  1. Avoid installing mindware that could be physically harmful to you, the host.

  2. Regarding mindware that affects your goals, make sure the mindware does not preclude a wide choice of future goals.

  3. Regarding mindware that relates to beliefs and models of the world, seek to install only mindware that is true—that is, that reflects the way the world actually is.

  4. Avoid mindware that resists evaluation.

  Rules 1 and 2 are similar in that they both seek to preserve flexibility for the person if his or her goals should change. We should avoid mindware harmful to the host because the host’s ability to pursue any future goal will be impaired if the host is injured or dead. Likewise, mindware that precludes future goals that may be good for a person to acquire is problematic. For example, there is in fact some justification for our sense of distress when we see a young person adopt mindware that threatens to cut off the fulfillment of many future goal states (early pregnancy comes to mind, as do the cases of young people joining cults that short-circuit their educational progress and that require severing ties with friends and family).

  Rule 3 serves as a mindware check in another way. The reason is that beliefs that are true are good for us because accurately tracking the world helps us achieve our goals. Almost regardless of what a person’s future goals may be, these goals will be better served if accompanied by beliefs about the world which happen to be true. Obviously there are situations where not tracking truth may (often only temporarily) serve a particular goal. Nevertheless, other things being equal, the presence of the desire to have true beliefs will have the long-term effect of facilitating the achievement of many goals.

  Parasitic mindware, rather than helping the host, finds tricks that will tend to increase its longevity.18 Subverting evaluation attempts by the host is one of the most common ways that parasitic mindware gets installed in our cognitive architectures. Hence rule 4—avoid mindware that resists evaluation. Here we have a direct link to the discussion of falsifiability in the last chapter. In science, a theory must go out on a limb, so to speak. In telling us what should happen, the theory must also imply that certain things will not happen. If these latter things do happen, then we have a clear signal that something is wrong with the theory. An unfalsifiable theory, in contrast, precludes change by not specifying which observations should be interpreted as refutations. We might say that such unfalsifiable theories are evaluation disabling. By admitting no evaluation, they prevent us from replacing them, but at the cost of scientific progress.

  It is likewise with all mindware. We need to be wary of all mindware that has evaluation-disabling properties. Instead, we should be asking what empirical and logical tests it has passed. The reason is that passing a logical or empirical test provides at least some assurance that the mindware is logically consistent or that the meme accurately maps the world and is thus good for us (rule 3 above). Untestable mindware that avoids such critical evaluation provides us with no such assurance.

  Of course, the classic example of unfalsifiable mindware is mindware that relies on blind faith.19 The whole notion of blind faith is meant to disarm the hosts in which it resides from ever evaluating it. To have faith in mindware means that you do not constantly and reflectively question its origins and worth. The whole logic of faith-based mindware is to disable critique. For example, one of the tricks that faith-based mindware uses to avoid evaluation is to foster the notion that mystery itself is a virtue (a strategy meant to short-circuit the search for evidence that mindware evaluation entails). In the case of faith-based mindware, many of the adversative properties mentioned earlier come into play. Throughout history, many religions have encouraged their adherents to attack nonbelievers or at least to frighten nonbelievers into silence.

  It is of course not necessarily the case that all faith-based memes are bad. Some may be good for the host; but a very stiff burden of proof is called for in such cases. One really should ask of any faith-based mindware why it is necessary to disable the very tools in our cognitive arsenal (logic, rationality, science) that have served us so well in other spheres. However, evaluation-disabling strategies are common components of parasitic memeplexes.

  Another ground (in addition to falsifiability) for suspicion about our mindware occurs when the deck of costs and benefits seems stacked against the possibility of disposing of the mindware. Such situations have been called “belief traps.”20 For example, Gerry Mackie cites the following case:

  Women who practice infibulation [a form of female genital mutilation] are caught in a belief trap. The Bambara of Mali believe that the clitoris will kill a man if it comes in contact with the penis during intercourse. In Nigeria, some groups believe that if a baby’s head touches the clitoris during delivery, the baby will die. I call these self-reinforcing beliefs: a belief that cannot be revised, because the believed costs of testing the belief are too high. (1996, p. 1009)

  The case here is a little different from that of falsifiability. In principle, this belief could be tested. It is in principle falsifiable. But the actual costs of engaging in the test are just too high. Note that on an expected value basis, if you thought that there was only a .01 probability of the belief being true, you still would not test it because the risks are too high. Once installed as mindware, it will be difficult to dislodge.
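  To see the structure of that expected value argument concretely, consider a minimal sketch in code. The utility numbers below are purely illustrative assumptions of my own (the text supplies only the .01 probability), chosen to show why even a tiny chance of a catastrophic outcome can make the test look too risky:

```python
# Illustrative expected-value sketch of a "belief trap."
# The utility figures are hypothetical assumptions, not values from the text.

p_true = 0.01              # subjective probability that the feared belief is true
cost_if_true = -1_000_000  # catastrophic loss if the belief turns out to be true
gain_if_false = 10         # modest benefit of abandoning the practice if it is false

# Expected value of testing the belief (abandoning the practice once)
ev_test = p_true * cost_if_true + (1 - p_true) * gain_if_false

# Expected value of leaving the belief untested (status quo)
ev_status_quo = 0

print(f"EV of testing the belief: {ev_test:,.1f}")   # about -9,990
print(f"EV of leaving it untested: {ev_status_quo}")
```

  Because the catastrophic downside dominates the calculation, the seemingly rational move is never to run the test, and the belief stays locked in place once installed.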

  In addition to falsifiability and excessive costs, another ground for suspicion about mindware occurs when it contains adversative properties. If an idea or strategy is true or good or helpful to the host, why should it need to fight off other mindware? Should not helpful mindware welcome comparative tests against other (presumably less useful) memes? So the presence of adversative properties (in addition to evaluation-disabling strategies) is another cue to the possible presence of contaminated mindware.

  Dysrationalia Due to Contaminated Mindware

  Smart people are uniquely capable of producing noxious ideas.

  —Steven Lagerfeld, The Wilson Quarterly, 2004

  Intelligence is no inoculation against irrational behavior generated by contaminated mindware. Pseudosciences provide many examples of contaminated mindware—and many pseudosciences are invented by and believed in by people of high intelligence. Additionally, participation in pseudoscientific belief systems is so widespread that it is a statistical certainty that many people participating are of high intelligence and thus displaying dysrationalia. For example, there are 20 times more astrologers in the United States than there are astronomers. A subcommittee of the U.S. Congress has estimated that $10 billion is spent annually on medical quackery, an amount that dwarfs the sum that is spent on legitimate medical research. The list of pseudosciences in which the participants number in the tens of thousands seems never-ending: astrological prediction, subliminal weight loss, biorhythms, the administration of laetrile, psychic surgery, pyramid schemes, Ponzi schemes, out-of-body experiences, firewalking.

  The remarkable prevalence of pseudoscientific beliefs indicates that a considerable amount of inadequate belief formation is taking place—too much to blame solely on the members of our society with low intelligence. Purely on a quantitative basis, it must be the case that some people with fairly high IQs are thinking quite poorly. The 22 percent of our population who believe in Big Foot, the 25 percent who believe in astrology, the 16 percent who believe in the Loch Ness monster, the 46 percent who believe in faith healing, the 49 percent who believe in demonic possession, the 37 percent who believe in haunted houses, the 32 percent who believe in ghosts, the 26 percent who believe in clairvoyance, the 14 percent who have consulted a fortune-teller, and the 10 percent who feel that they have spoken with the Devil are not all individuals with intellectual disability. A large number of them, however, may be dysrationalic.

  Actually, we do not have to speculate about the proportion of high-IQ people with these beliefs. Several years ago, a survey of paranormal beliefs was given to members of a Mensa club in Canada, and the results were instructive. Mensa is a club restricted to high-IQ individuals, and one must pass IQ-type tests to be admitted. Yet 44 percent of the members of this club believed in astrology, 51 percent believed in biorhythms, and 56 percent believed in the existence of extraterrestrial visitors—all beliefs for which there is not a shred of evidence.21

  In this chapter, I have established that high-IQ individuals can easily be plagued by contaminated mindware. In the previous chapter, I discussed how high-IQ individuals are not immune from the mindware gaps in the domains of probabilistic thinking and scientific thinking that can lead to irrational beliefs and action. In Chapters 6 through 9 we saw that the tendency to display the characteristics of the cognitive miser (egocentric processing, framing, attribute substitution tendencies) is largely unassessed on intelligence tests.

  It is beginning to become clear, I hope, why we should not be so surprised when we witness dysrationalia—smart people acting foolishly. But perhaps it is also beginning to seem puzzling that so much in the cognitive domain is missing from intelligence tests. A common criticism of intelligence tests is the argument that they do not tap important aspects of social and emotional functioning. But that has not been my argument here. I do not intend to cede the cognitive domain to the concept of intelligence, but instead wish to press the point that intelligence is a limited concept even within the cognitive domain. This chapter and the last illustrated that tests of intelligence do not assess for the presence of mindware critical to rational thought, or for disruptive mindware that impedes rational thought. Earlier chapters established that thinking dispositions relevant to rational thought also go unassessed. Many of these are related to the tendency to use (or avoid) strategies that trump Type 1 miserly processing with Type 2 cognition. In short, there are many more ways that thinking can go wrong than are assessed on intelligence tests. The next chapter presents a taxonomy of these thinking errors.

  TWELVE

  How Many Ways Can Thinking Go Wrong? A Taxonomy of Irrational Thinking Tendencies and Their Relation to Intelligence

  Behavioral economics extends the paternalistically protected category of “idiots” to include most people, at predictable times. The challenge is figuring out what sorts of “idiotic” behaviors are likely to arise routinely and how to prevent them.

  —Colin Camerer and colleagues, University of Pennsylvania Law Review, 2003

  For decades now, researchers have been searching for the small set of mental attributes that underlie intelligence. Over one hundred years ago, Charles Spearman proposed that a single underlying mental quality, so-called psychometric g, was the factor that accounted for the tendency of mental tests to correlate with each other.1 Few now think that this is the best model of intelligence. Proponents of the Cattell/Horn/Carroll theory of intelligence, Gf/Gc theory, posit that tests of mental ability tap a small number of broad factors, of which two are dominant. Some theorists like to emphasize the two broad factors, fluid intelligence (Gf) and crystallized intelligence (Gc), because they reflect a long history of considering two aspects of intelligence (intelligence-as-process and intelligence-as-knowledge) and because we are beginning to understand the key mental operation—cognitive decoupling—that underlies Gf. Other theorists give more weight to several other group factors beyond Gf and Gc that can be identified.

  Regardless of how these scientific debates are resolved, it is clear that a relatively small number of scientifically manageable cognitive features underlie intelligence, and they will eventually be understood. Rational thinking, in contrast, seems to be a much more unwieldy beast. Many different sources of irrational thinking and many different tasks on which subjects make fundamental thinking errors have been identified. I have detailed many of these in Chapters 6 through 11, but I have not covered them exhaustively. There are in fact many more than I have room here to discuss.2 Recall my earlier argument that rational thinking errors are multifarious because there are many ways that people can fail to maximize their goal achievement (instrumental rationality) and many ways that beliefs can fail to reflect reality (epistemic rationality).

  Rational thinking errors appear to arise from a variety of sources—it is unlikely that anyone will propose a psychometric g of rationality. Irrational thinking does not arise from a single cognitive problem, but the research literature does allow us to classify thinking into smaller sets of similar problems. Our discussion so far has set the stage for such a classification system, or taxonomy. First, though, I need to introduce one additional feature in the generic model of the mind outlined in Chapter 3.

  Serial Associative Cognition with a Focal Bias

  Figure 12.1 updates the preliminary model of the mind outlined in Chapter 3 with the addition of one new idea. Previous dual-process theories have emphasized the importance of the override function—the ability of Type 2 processing to take early response tendencies triggered by Type 1 processing offline and to substitute better responses. This override capacity is a property of the algorithmic mind, and it is indicated by the arrow labeled A in Figure 12.1. The higher-level cognitive function that initiates override is a dispositional property of the reflective mind that is related to rationality. In the model in Figure 12.1, it is shown by arrow B, which represents, in machine intelligence terms, the call to the algorithmic mind to override the Type 1 response by taking it offline. This is a different mental function from the override function itself (arrow A), and the two functions are indexed by different types of individual differences—the ability to sustain the inhibition of the Type 1 response is indexed by measures of fluid intelligence, and the tendency to initiate override operations is indexed by thinking dispositions such as reflectiveness and need for cognition.

  Figure 12.1. A More Complete Model of the Tripartite Framework

  The simulation process that computes the alternative response that makes the override worthwhile is represented in Figure 12.1 as well as the fact that the call to initiate simulation originates in the reflective mind. Specifically, the decoupling operation (indicated by arrow C) is carried out by the algorithmic mind and the call to initiate simulation (indicated by arrow D) by the reflective mind. Again, two different types of individual differences are associated with the initiation call and the decoupling operator—specifically, rational thinking dispositions with the former and fluid intelligence with the latter.
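  Readers who find it easier to follow the four arrows as a process may find the following toy sketch useful. The class and function names are illustrative inventions of my own, not part of the formal framework; the sketch simply shows which mind issues each call and which mind carries it out:

```python
# Toy sketch of the tripartite framework in Figure 12.1.
# Names are illustrative; arrows A through D follow the text.

class AlgorithmicMind:
    def override(self, default_response):
        # Arrow A: take the Type 1 response offline (sustained inhibition,
        # the capacity indexed by fluid intelligence)
        return None

    def decouple_and_simulate(self, stimulus):
        # Arrow C: run a decoupled simulation to compute an alternative response
        return f"considered answer to {stimulus}"


class ReflectiveMind:
    def __init__(self, reflectiveness):
        # Thinking disposition (e.g., reflectiveness, need for cognition)
        self.reflectiveness = reflectiveness

    def should_override(self):
        # Arrow B: the call to the algorithmic mind to initiate override
        return self.reflectiveness > 0.5

    def request_simulation(self):
        # Arrow D: the call to initiate simulation
        return True


def respond(stimulus, reflective, algorithmic):
    default = f"heuristic answer to {stimulus}"   # Type 1 (autonomous) response
    if reflective.should_override():              # arrow B
        algorithmic.override(default)             # arrow A
        if reflective.request_simulation():       # arrow D
            return algorithmic.decouple_and_simulate(stimulus)  # arrow C
    return default


print(respond("bat-and-ball problem", ReflectiveMind(0.9), AlgorithmicMind()))
```

  In this sketch the dispositional properties of the reflective mind determine whether override and simulation are initiated at all, while the algorithmic mind supplies the computational power to carry them out, mirroring the two kinds of individual differences described above.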

 
