What Intelligence Tests Miss

by Keith E. Stanovich


  Contaminated mindware is often acquired because it is wrapped in an enticing narrative, one that often has some complexity to it. This complexity is probably not the best “sell” to those of lower intelligence. Instead, complex mindware probably sounds most enticing to those of moderate to high intelligence. Search the Internet for examples of conspiracy theories, tax evasion schemes, get-rich-quick schemes, schemes for “beating” the stock market, and procedures for winning the lottery. You will quickly see that many of them are characterized by enticing complexity. For example, many get-rich-quick schemes involve real-estate transactions that interact in a complex manner with the tax system. Many win-the-lottery books contain explanations (wrong ones!) employing mathematics and probabilities. “Beat the market” stock investment advice often involves the mathematics and graphics of so-called technical analysis.

  The intuition that those taken in by fraudulent investment schemes are probably not of low intelligence is confirmed by the results of a study commissioned by the National Association of Securities Dealers.4 The study examined the beliefs and demographic characteristics of 165 people who had lost over $1,000 in a fraudulent investment scheme and compared them with those of a group of individuals who had not been victims of financial fraud. The study found that the investment fraud victims had significantly more education than the comparison group—68.6 percent of the investment fraud victim group had at least a BA degree, compared to just 37.2 percent in the control group. The proportion of the victim group with incomes over $30,000 was 74.1 percent, compared with 56.4 percent in the other group. We can infer from the education and income statistics that the victims of investment fraud are not more likely to be of low intelligence. This type of contaminated mindware may, if anything, be more enticing to those of higher intelligence.
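  To make the size of these differences concrete, the reported percentages can be recast as odds ratios. The short Python sketch below does just that; the percentages are the ones quoted above from the NASD study, but the odds-ratio framing (and the code itself) is an illustration added here, not part of the study’s report.

```python
# Back-of-the-envelope odds ratios computed from the NASD fraud-study
# percentages quoted in the text. Illustrative only.

def odds_ratio(p_group: float, p_control: float) -> float:
    """Odds ratio for a trait between two groups, given its
    proportion in each group."""
    return (p_group / (1 - p_group)) / (p_control / (1 - p_control))

# At least a BA degree: 68.6% of fraud victims vs. 37.2% of controls.
print(f"BA or higher:  OR = {odds_ratio(0.686, 0.372):.2f}")  # ~3.7

# Income over $30,000: 74.1% of victims vs. 56.4% of controls.
print(f"Income > $30k: OR = {odds_ratio(0.741, 0.564):.2f}")  # ~2.2
```

  On these figures, the odds of holding at least a BA are nearly four times higher among the fraud victims than among the comparison group—the opposite of what one would expect if victims were disproportionately of low intelligence.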

  Much truly mischief-making mindware that supports the irrational behavior we observe in society is concocted by and infects those of moderate to high intelligence. As a result, there are numerous examples of famous individuals, noted for their intelligence, who displayed persistently irrational behavior. Philosopher Martin Heidegger, a conceptual thinker of world renown, was a Nazi apologist and used the most specious of arguments to justify his beliefs. He organized paramilitary camps for his students and often signed correspondence “Heil Hitler.” Famed scientist William Crookes, discoverer of the element thallium and a Fellow of the Royal Society, was repeatedly duped by spiritualist “mediums,” but never gave up his belief in spiritualism. Arthur Conan Doyle, creator of Sherlock Holmes, was likewise a notorious dupe for “mediums.” Poet Ezra Pound (who was surely no slouch in the verbal domain) spent most of World War II ranting Fascist propaganda on Italian radio broadcasts. These examples could be extended almost indefinitely.5

  Many truly evil ideas have been promulgated by people of considerable intelligence. Several of the Nazi war criminals tried at Nuremberg were given IQ tests and scored above 125, and eight of the fourteen men who planned the Final Solution had doctoral degrees. Studies of leading Holocaust deniers have revealed that their ranks contain the holder of a master’s degree in European history from Indiana University, the author of several well-known biographies of World War II figures, a professor of literature at the University of Lyon, an author of textbooks used in Ivy League universities, a professor of English at the University of Scranton, a professor at Northwestern University, and the list goes on.6 Of course, the ranks of creationist advocates include many with university degrees as well.

  Cognitive scientists have uncovered some of the reasons why intelligent people can come to have beliefs that are seriously out of kilter with reality. One explanation is in terms of so-called knowledge projection tendencies. The idea here is that in a natural ecology where most of our prior beliefs are true, processing new data through the filter of our current beliefs will lead to faster accumulation of knowledge.7 This argument has been used to explain the presence of the belief bias effect in syllogistic reasoning. Cognitive scientist Jonathan Evans and colleagues argued that because belief revision has interactive effects on much of the brain’s belief network, it may be computationally costly. Thus, they posit that a cognitive miser might be prone to accept conclusions that are believable without engaging in logical reasoning at all. Only when faced with unbelievable conclusions do subjects engage in logical reasoning about the premises. They argue that this could be an efficacious strategy when we are in a domain where our beliefs are largely true.

  But the assumption here—that we are in a domain where our beliefs are largely true—is critical. We use current knowledge structures to help us assimilate new information more rapidly. To the extent that current beliefs are true, then, we will assimilate further true information more rapidly. However, when the subset of beliefs that the individual is drawing on contains substantial amounts of false information, knowledge projection will delay the assimilation of the correct information. And herein lies the key to understanding the creationist or Holocaust denier. The knowledge projection tendency, efficacious in the aggregate, may have the effect of isolating certain individuals on “islands of false beliefs” from which they are unable to escape. In short, there may be a type of knowledge isolation effect when projection is used in particularly ill-suited circumstances. Thus, knowledge projection, which in the aggregate might lead to more rapid induction of new true beliefs, may become a trap when people, in effect, keep reaching into a bag of beliefs that are largely false, use those beliefs to structure their evaluation of evidence, and hence more quickly assimilate additional incorrect beliefs for use in further projection.
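  The logic of this trap is easy to see in a toy simulation. The sketch below is a deliberately crude model devised for illustration, not a formal model from the research literature: an agent accepts an incoming claim only if a belief sampled from its current store agrees with it (the “projection” filter), and beliefs are reduced to simple true/false markers. All the numbers are illustrative assumptions.

```python
import random

def simulate(prior_true_frac, n_prior=50, n_claims=500, seed=0):
    """Toy model of knowledge projection. Beliefs are booleans:
    True = accurate, False = inaccurate. An incoming claim is accepted
    only when a randomly sampled existing belief matches its truth
    value -- i.e., when projection makes the claim feel believable."""
    rng = random.Random(seed)
    beliefs = [rng.random() < prior_true_frac for _ in range(n_prior)]
    for _ in range(n_claims):
        claim_is_true = rng.random() < 0.7   # most incoming claims are true
        filter_belief = rng.choice(beliefs)  # project an existing belief
        if filter_belief == claim_is_true:   # believable -> accepted uncritically
            beliefs.append(claim_is_true)
    return sum(beliefs) / len(beliefs)

# Mostly true priors: projection speeds the accumulation of truths.
print(f"90% true priors -> {simulate(0.9):.0%} of final beliefs are true")
# An 'island of false beliefs': the same mostly true stream of claims
# leaves the agent assimilating a disproportionate share of falsehoods.
print(f"10% true priors -> {simulate(0.1):.0%} of final beliefs are true")
```

  Even though 70 percent of the incoming claims are true in both runs, the agent that starts on an island of false beliefs keeps sampling false beliefs as its filter, accepts a disproportionate share of the false claims, and falls far behind a filter-free learner—exactly the knowledge isolation effect described above.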

  Knowledge projection from an island of false beliefs might explain the phenomenon of otherwise intelligent people who get caught in a domain-specific web of falsity and, because of projection tendencies, cannot escape. Such individuals often use their considerable computational power to rationalize their beliefs and to ward off the arguments of skeptics.8 When knowledge projection occurs from an island of false belief, it merely produces a belief network even more divergent from those of individuals who are not engaged in such projection or who have less computational power. This may be the reason why some of the most pernicious contaminated mindware has been invented by, and acquired by, some of the most intelligent individuals (“if that man had two brains he’d be twice as stupid!”). Such people had, in effect, twice the brain and ended up twice as stupid.

  Skepticism about Contaminated Mindware

  But isn’t there something inherently wrong with the idea of contaminated mindware? Why would people believe something that is bad for them? Don’t all beliefs serve some positive purpose?

  These are all reasonable questions, and they reflect a commonsense reaction to the idea of contaminated mindware. This worry is sometimes echoed in the scholarly literature as well. For example, some philosophers have argued that human irrationality is a conceptual impossibility, and other theorists have argued that evolution guarantees human rationality.

  The latter argument is now widely recognized to be flawed.9 Evolution guarantees that humans are genetic fitness optimizers in their local environments, not that they are truth or utility maximizers as rationality requires. Beliefs need not always track the world with maximum accuracy in order for fitness to increase. Thus, evolution does not guarantee perfect epistemic rationality. Neither does evolution ensure that optimum standards of instrumental rationality will be attained. Finally, the conceptual arguments of philosophers questioning the possibility of human irrationality are in some sense beside the point because literally hundreds of studies conducted by decision scientists, cognitive scientists, and behavioral economists over the last four decades have demonstrated that human action and belief acquisition violate even quite liberal rational strictures.10

  Why is it so difficult for people to accept that humans can sometimes be systematically irrational—that they can believe things in the absence of evidence and behave in ways that thwart their interests? I suggest that it is because most of us share a folk theory of mindware acquisition that is faulty in one critical respect. The key to the error is suggested in the title of a paper written some years ago by psychologist Robert Abelson: “Beliefs Are Like Possessions.” This phrase suggests why people find it difficult to understand why they (or anyone else) might hold beliefs (or other mindware) that do not serve their own interests. Current critiques of overconsumption aside, most of us feel that we have acquired our material possessions for reasons, and that among those reasons is the fact that our possessions serve our ends in some way. We feel the same about our beliefs. We feel that beliefs are something that we choose to acquire, just like the rest of our possessions.

  In short, we tend to assume: (1) that we exercised agency in acquiring our mindware, and (2) that it serves our interests. The idea of contaminated mindware runs counter to both of these assumptions. If we consider the first assumption to be false—that sometimes we do not exercise agency when we acquire mindware—then the second becomes less likely, and the idea of contaminated mindware more plausible. This is precisely what an important theoretical position in the cognitive science of belief acquisition has asserted. A prominent set of thinkers has recently been exploring the implications of asking a startling question: What if you don’t own your beliefs, but instead they own you?

  Why Are People Infected by Contaminated Mindware?

  Surely almost all of us feel that our beliefs must be serving some positive purpose. But what if that purpose isn’t one of our purposes? Cultural replicator theory and the science of memetics have helped us come to terms with this possibility. The term cultural replicator refers to an element of a culture that may be passed on by non-genetic means. An alternative term for a cultural replicator—meme—was introduced by Richard Dawkins in his famous 1976 book The Selfish Gene.11 The term meme is also sometimes used generically to refer to a so-called memeplex—a set of co-adapted memes that are copied together as a set of interlocking ideas (so, for example, the notion of democracy is a complex interconnected set of memes—a memeplex).

  It is legitimate to ask why one should use this new term for a unit of culture when disciplines that already deal with the diffusion of culture, such as cultural anthropology, exist. One reason I think the term meme is useful is that the new and unfamiliar terminology serves a decentering function that makes understanding the concept of contaminated mindware easier. It can help somewhat to dislodge the “beliefs as possessions” metaphor that we see implicit in phrases such as “my belief” and “my idea.” Because the usage “my meme” is less familiar, it does not signal ownership via an act of agency in the same way. The second reason the term is useful is that it suggests (by its analogy to the term gene) using the insights of Universal Darwinism to understand belief acquisition and change. Specifically, Universal Darwinism emphasizes that organisms are built to advance the interests of the genes (replication) rather than any interests of the organism itself. This insight prompts, by analogy, the thought that memes may occasionally replicate at the expense of the interests of their hosts.

  Thus, the fundamental insight triggered by the meme concept is that a belief may spread without necessarily being true or helping the human being who holds the belief in any way. Memetic theorists often use the example of a chain letter: “If you do not pass this message on to five people you will experience misfortune.” This is an example of a meme—an idea unit. It is the instruction for a behavior that can be copied and stored in brains. It has been a reasonably successful meme. Yet there are two remarkable things about this meme. First, it is not true. The reader who does not pass on the message will not as a result experience misfortune. Second, the person who stores the meme and passes it on will receive no benefit—the person will be no richer or healthier or wiser for having passed it on. Yet the meme survives. It survives because of its own self-replicating properties (the essential logic of this meme is that basically it does nothing more than say “copy me—or else”). In short, memes do not necessarily exist in order to help the person in whom they are lodged. They exist because, through memetic evolution, they have displayed the best fecundity, longevity, and copying fidelity—the defining characteristics of successful replicators.
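  The replicator logic of the chain letter can be made concrete with a few lines of arithmetic. The sketch below is an illustrative toy model, not a formal result from memetics: each generation, every current host passes the meme to a fixed number of contacts (fecundity), each copy survives transmission intact with some probability (copying fidelity), and hosts forget the meme after carrying it for a fixed number of generations (longevity). All parameter values are invented for illustration.

```python
def spread(fecundity, fidelity, longevity, generations=12):
    """Expected number of hosts carrying a meme, generation by
    generation, in a toy replicator model."""
    # ages[k] = expected hosts who have carried the meme for k generations
    ages = [1.0] + [0.0] * (longevity - 1)
    history = []
    for _ in range(generations):
        new_hosts = sum(ages) * fecundity * fidelity  # faithful copies made
        ages = [new_hosts] + ages[:-1]  # everyone ages; the oldest forget
        history.append(round(sum(ages), 1))
    return history

# "Pass this on to five people or else": five copies per host per generation.
# A host transmits in every generation it carries the meme, so the expected
# total number of copies per host = fecundity * fidelity * longevity.
print(spread(5, 0.30, 2))  # 5 * 0.30 * 2 = 3.0 > 1: the meme takes off
print(spread(5, 0.08, 2))  # 5 * 0.08 * 2 = 0.8 < 1: the meme dies out
```

  Nothing in the model asks whether the meme is true or whether its hosts benefit; whenever the expected number of surviving copies per host exceeds one, the meme spreads, and that is the whole of the chain letter’s “success.”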

  Memetic theory has profound effects on our reasoning about ideas because it inverts the way we think about beliefs. Social psychologists traditionally tend to ask what it is about particular individuals that leads them to have certain beliefs. The causal model is one where the person determines what beliefs to have. Memetic theory asks instead what it is about certain memes that leads them to collect many “hosts” for themselves. The question is not how people acquire beliefs (the tradition in social and cognitive psychology) but how beliefs acquire people!

  If this inversion of our traditional way of thinking at first seems odd, consider that participation in political movements has been found to be better predicted by proximity to others who hold the same beliefs than by any psychological factors yet identified.12 Likewise, religious affiliations are predicted better by geographic proximity than by specific psychological characteristics.

  Our commonsense view of why beliefs spread is the notion that “belief X spreads because it is true.” This notion, however, has trouble accounting for ideas that are true but not popular, and ideas that are popular but not true. Memetic theory tells us to look to a third principle in such cases. Idea X spreads among people because it is a good replicator—it is good at acquiring hosts. Memetic theory focuses us on the properties of ideas as replicators rather than the qualities of people acquiring the ideas. This is the single distinctive function served by the meme concept and it is a critical one.

  With this central insight from memetic theory in mind, we can now discuss a fuller classification of reasons why mindware survives and spreads. The first three classes of reasons are reflected in traditional assumptions in the behavioral and biological sciences. The last reflects the new perspective of memetic theory:

  1. Mindware survives and spreads because it is helpful to the people that store it.

  2. Certain mindware proliferates because it is a good fit to pre-existing genetic predispositions or domain-specific evolutionary modules.

  3. Certain mindware spreads because it facilitates the replication of genes that make vehicles that are good hosts for that particular mindware (religious beliefs that urge people to have more children would be in this category).

  4. Mindware survives and spreads because of the self-perpetuating properties of the mindware itself.

  Categories 1, 2, and 3 are relatively uncontroversial. The first is standard fare in the discipline of cultural anthropology, which tends to stress the functionality of belief. Category 2 is emphasized by evolutionary psychologists. Category 3 is meant to capture the type of effects emphasized by theorists stressing gene/culture coevolution.13 It is category 4 that introduces new ways of thinking about beliefs as symbolic instructions that are more or less good at colonizing brains. Of course, mindware may reside in more than one category. Mindware may spread because it is useful to its host and because it fits genetic predispositions and because of its self-perpetuating properties. Category 4 does, however, raise the possibility of truly contaminated mindware—mindware that is not good for the host because it supports irrational behavior.

  Various theorists have discussed some of the types of mindware (defined by their self-replicative strategies) that are in category 4.14 For example, there is parasitic mindware that mimics the structure of helpful ideas and deceives the host into thinking that the host will derive benefit from it. Advertisers are of course expert at constructing parasites—beliefs that ride on the backs of other beliefs and images. Creating unanalyzed conditional beliefs such as “if I buy this car I will get this beautiful model” is what advertisers try to do by the judicious juxtaposition of ideas and images. Other self-preservative memetic strategies involve changing the cognitive environment. Many religions, for example, prime the fear of death in order to make their promise of the afterlife more enticing.

  More sinister are so-called adversative strategies that alter the cultural environment in ways that make it more hostile for competing memes or that influence their hosts to attack the hosts of alternate mindware. Many moderate residents of fundamentalist religious communities refrain from criticizing the extremist members of their communities because of fear that their neighbors are harboring mindware like that illustrated in the following excerpt:

  From an April 5 interview with Omar Bakri Muhammad, head of Al Muhajiroun, a radical Islamic group based in London, conducted by Paulo Moura of Publico, a Portuguese daily newspaper:

  Q: What could justify the deliberate killing of thousands of innocent civilians?

  A: We don’t make a distinction between civilians and non-civilians, innocents and non-innocents. Only between Muslims and nonbelievers. And the life of a nonbeliever has no value. There’s no sanctity in it.

  Q: But there were Muslims among the victims.

  A: According to Islam, Muslims who die in attacks will be accepted immediately into paradise as martyrs. As for the others, it is their problem. (Harper’s Magazine, July 2004, pp. 22–25)

  Deal Breaker Memes

  How can any person presume to know that this is the way the universe works? Because it says so in our holy books. How do we know that our holy books are free from error? Because the books themselves say so. Epistemological black holes of this sort are fast draining the light from our world.

  —Sam Harris, The End of Faith, 2004

 
