
What Intelligence Tests Miss


by Keith E. Stanovich


  Regarding the sixth category in Figure 12.3—contaminated mindware—we would of course expect more intelligent individuals to acquire more mindware of all types based on their superior learning abilities. This would result in their acquiring more mindware that fosters rational thought. However, this superior learning ability would not preclude more intelligent individuals from acquiring contaminated mindware—that is, mindware that literally causes irrationality. Many parasitic belief systems are conceptually somewhat complex. Examples of complex parasitic mindware would be Holocaust denial and many financial get-rich-quick schemes as well as bogus tax evasion schemes. Such complex mindware might even require a certain level of intelligence in order to be enticing to the host. This conjecture is supported by research on the characteristics of financial fraud victims.20 Pseudoscientific beliefs are also prevalent even among those of high intelligence.

  Conclusion: Dysrationalia Will Be Ubiquitous

  As the discussion in the last section indicated, intelligence is no inoculation against any of the sources of irrational thought presented in Figure 12.3. As we have gone through the categories looking at the intelligence/rationality correlations, even categories where there does appear to be a significant correlation leave enough room for a substantial number of dissociations. There is thus no reason to expect dysrationalia to be rare. We should not be surprised at smart people acting foolishly.

  Not one of the six categories of cognitive error we have just considered is prevented (very much) by having a high IQ, and that should not be surprising. Rationality is a multifarious concept—not a single mental quality. It requires various thinking dispositions that act to trump a variety of miserly information-processing tendencies. It depends on the presence of various knowledge bases related to probabilistic thinking and scientific thinking. It depends on avoiding contaminated mindware that fosters irrational thought and behavior for its own ends. None of these factors are assessed on popular intelligence tests (or their proxies like the SAT). Intelligence tests do not assess the propensity to override responses primed by the autonomous mind or to engage in full cognitive simulation. The crystallized abilities assessed on intelligence tests do not probe for the presence of the specific mindware that is critical for rational thought. And finally, there are no probes on intelligence tests for the presence of contaminated mindware. Thus, we should not be surprised when smart people act foolishly. That we in fact are sometimes surprised indicates that we are overvaluing and overconceptualizing the term intelligence—we are attributing to it qualities that intelligence tests do not measure.

  THIRTEEN

  The Social Benefits of Increasing Human Rationality—and Meliorating Irrationality

  In saying that a person is irrational, we are not accusing him of any irremediable flaw, but, rather, just urging him and people who think like him to reform.

  —Jonathan Baron, Rationality and Intelligence, 1985

  The tendency for our society to focus on intelligence and undervalue rational thinking is ironic and exasperating to a cognitive scientist like myself. Throughout this book, I have illustrated how several different rational thinking strategies and knowledge bases affect people’s lives. Yet we fail to teach these tools in schools and refuse to focus our attention on them as a society. Instead, we keep using intelligence proxies as selection devices in a range of educational institutions from exclusive preschools to graduate schools. Corporations and the military are likewise excessively focused on IQ measures.1

  Consider the example of Ivy League universities in the United States. These institutions are selecting society’s future elite. What societal goals are served by the selection mechanisms (for example, SAT tests) that they use? Social critics have argued that the tests serve only to maintain an economic elite. But the social critics seem to have missed a golden opportunity to critique current selection mechanisms by failing to ask the question “Why select for intelligence only and ignore rationality completely?”

  In short, we have been valuing only the algorithmic mind and not the reflective mind. This is in part the result of historical accident. We had measures of algorithmic-level processing efficiency long before we had measures of rational thought and the operation of the reflective mind. The dominance and ubiquitousness of early IQ tests served to divert attention from any aspect of cognition except algorithmic-level efficiency. And then, because of this historical accident, we have been trying to back out of this mistake (overvaluing the algorithmic part of the mind) ever since.

  In order to illustrate the oddly dysfunctional ways that rationality is devalued in comparison to intelligence, I would like to embellish on a thought experiment first imagined by cognitive psychologist Jonathan Baron in a 1985 book. Baron asks us to imagine what would happen if we were able to give everyone an otherwise harmless drug that increased their algorithmic-level cognitive capacities (for example, discrimination speed, working memory capacity, decoupling ability)—in short, that increased their intelligence as I have defined it in this book. Imagine that everyone in North America took the pill before retiring and then woke up the next morning with more memory capacity and processing speed. Both Baron and I believe that there is little likelihood that much would change the next day in terms of human happiness. It is very unlikely that people would be better able to fulfill their wishes and desires the day after taking the pill. In fact, it is quite likely that people would simply go about their usual business—only more efficiently. If given more memory capacity and processing speed, people would, I believe: carry on using the same ineffective medical treatments because of failure to think of alternative causes (Chapter 10); keep making the same poor financial decisions because of overconfidence (Chapter 8); keep misjudging environmental risks because of vividness (Chapter 6); play host to the contaminated mindware of Ponzi and pyramid schemes (Chapter 11); be wrongly influenced in their jury decisions by incorrect testimony about probabilities (Chapter 10); and continue making many other of the suboptimal decisions described in earlier chapters. The only difference would be that they would be able to do all of these things much more quickly!

  Of course, I use this thought experiment as an intuition pump to provoke thought and discussion about what society loses by the particular way we value cognitive attributes. The thought experiment has obvious caveats. More cognitive capacity would help to increase rational responding in the cases discussed in Chapter 9—where the algorithmic mind fails to override the processing tendencies of the autonomous mind. But it would do nothing to help in the many situations in which suboptimal rational thinking dispositions were at fault.

  Another aspect of the “IQ debate” that is exasperating for cognitive scientists who study reasoning and rational thought is the endless argument about whether intelligence is malleable.2 No one denies that this is an important question that needs to be resolved, but it has totally overshadowed cognitive skills that are just as useful as intelligence and that may well be more teachable. Likewise, we have failed to ameliorate the suboptimal consequences of difficulties in rational thinking that can be avoided by restructuring the environment so that it does not expose human fallibility. None of this will be possible if we continue to focus on intelligence at the expense of other cognitive skills. We will miss opportunities to teach people to think more rationally in their day-to-day life, and we will miss opportunities to restructure the environment so that people’s mindware problems and cognitive miser tendencies will be less costly (either for themselves or for society as a whole).

  The lavish attention devoted to intelligence (raising it, praising it, worrying when it is low, etc.) seems wasteful in light of the fact that we choose to virtually ignore another set of mental skills with just as much social consequence—rational thinking mindware and procedures. Popular books tell parents how to raise more intelligent children, educational psychology textbooks discuss the raising of students’ intelligence, and we feel reassured when hearing that a particular disability does not impair intelligence. There is no corresponding concern on the part of parents that their children grow into rational beings, no corresponding concern on the part of schools that their students reason judiciously, and no corresponding recognition that intelligence is useless to a child unable to adapt to the world.

  I simply do not think that society has weighed the consequences of its failure to focus on irrationality as a real social problem. These skills and dispositions profoundly affect the world in which we live. Because of inadequately developed rational thinking abilities—because of the processing biases and mindware problems discussed in this book—physicians choose less effective medical treatments; people fail to accurately assess risks in their environment; information is misused in legal proceedings; millions of dollars are spent on unneeded projects by government and private industry; parents fail to vaccinate their children; unnecessary surgery is performed; animals are hunted to extinction; billions of dollars are wasted on quack medical remedies; and costly financial misjudgments are made.3 Distorted processes of belief formation are also implicated in various forms of ethnocentric, racist, sexist, and homophobic hatred.

  It is thus clear that widespread societal effects result from inadequately developed rational thinking dispositions and knowledge. In the modern world, the impact of localized irrational thoughts and decisions can be propagated and magnified through globalized information technologies, thus affecting large numbers of people. That is, you may be affected by the irrational thinking of others even if you do not take irrational actions yourself. This is why the spread of pseudoscientific beliefs is everyone’s concern. Police departments, for example, hire psychics to help with investigations even though research has shown that their use is not efficacious. Jurors have been caught making their decisions based on astrology. Major banks and several Fortune 500 companies employ graphologists for personnel decisions even though voluminous evidence indicates that graphology is useless for this purpose.4 To the extent that pseudodiagnostic graphological cues lead employers to ignore more valid criteria, both economic inefficiency and personal injustice are the result. How would you like to lose your chance for a job that you really wanted because you have a particular little “loop” in your handwriting? How would you like to be convicted of a crime because of an astrological “reading”?

  Unfortunately, these examples are not rare. We are all affected in numerous ways when such contaminated mindware permeates society—even if we avoid this contaminated mindware ourselves. Pseudosciences such as astrology are now large industries, involving newspaper columns, radio shows, book publishing, the Internet, magazine articles, and other means of dissemination. The House of Representatives Select Committee on Aging has estimated that the amount wasted on medical quackery nationally reaches into the billions. Physicians are increasingly concerned about the spread of medical quackery on the Internet and its real health costs.

  Pseudoscientific beliefs appear to arise from a complex combination of thinking dispositions, mindware gaps, and contaminated mindware. Pseudoscientific beliefs are related to the tendency to display confirmation bias, failure to consider alternative hypotheses, ignoring chance as an explanation of an outcome, identifying with beliefs and not critiquing them, and various fallacies in probabilistic thinking.5 Throughout this book I have argued that these rational thinking attributes are very imperfectly correlated with intelligence. But can we do anything about these attributes? Putting the decades-old debate about the malleability of intelligence aside, what do we know about the malleability of rational thinking tendencies?

  The Good News: Rationality Can Be Learned

  Regardless of the eventual outcome of the long-standing debate about the malleability of intelligence, it is striking that the field of psychology has not displayed an anywhere comparable concern about the malleability of rationality. This lack of concern is ironic given that there are at least preliminary indications that rationality may be more malleable than intelligence.

  Irrationality caused by mindware gaps is most easily remediable as it is entirely due to missing strategies and declarative knowledge that can be taught.6 Overriding the tendencies of the autonomous mind is most often done with learned mindware, and sometimes override fails because of inadequately instantiated mindware. In such a case, inadequately learned mindware is the source of the problem. For example, disjunctive reasoning is the tendency to consider all possible states of the world when deciding among options or when choosing a problem solution in a reasoning task. It is a rational thinking strategy with a high degree of generality. People make many suboptimal decisions because of the failure to flesh out all the possible options in a situation, yet the disjunctive mental tendency is not computationally expensive. This is consistent with the finding that there are not strong intelligence-related limitations on the ability to think disjunctively and with evidence indicating that disjunctive reasoning is a rational thinking strategy that can be taught.7
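The disjunctive strategy of fleshing out all the possibilities can be made concrete with a small sketch. It uses a well-known puzzle from the reasoning literature (the code and names are illustrative, not taken from the studies cited above): Jack is married, George is unmarried, and Anne’s status is unknown; Jack looks at Anne, and Anne looks at George.

```python
# A sketch of disjunctive reasoning as exhaustive enumeration.
# Question: is a married person looking at an unmarried person?
# Most people answer "cannot be determined" because they fail to
# enumerate both possible states of the unknown (Anne's status).

def married_looks_at_unmarried(anne_married: bool) -> bool:
    married = {"Jack": True, "Anne": anne_married, "George": False}
    looks = [("Jack", "Anne"), ("Anne", "George")]
    return any(married[a] and not married[b] for a, b in looks)

# The disjunctive strategy: consider every possible state of the world.
outcomes = {state: married_looks_at_unmarried(state) for state in (True, False)}

# The answer is "yes" under both states, so the question is settled
# even though Anne's status remains unknown.
assert all(outcomes.values())
```

Note that the computation is trivially cheap—the difficulty lies not in capacity but in the disposition to enumerate the cases at all, which is why the strategy is teachable.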

  The tendency to consider alternative hypotheses is, like disjunctive reasoning, strategic mindware of great generality. Also, it can be implemented in very simple ways. Many studies have attempted to teach the technical issue of thinking of P(D|~H) (the probability of the observed data given the alternative hypothesis) or thinking of the alternative hypothesis by instructing people in a simple habit. People are given extensive practice at saying to themselves the phrase “think of the opposite” in relevant situations. This strategic mindware does not stress computational capacity and thus is probably easily learnable by many individuals. Several studies have shown that practice at the simple strategy of triggering the thought “think of the opposite” can help to prevent a host of the thinking errors studied in the heuristics and biases literature, including but not limited to: anchoring biases, overconfidence effects, hindsight bias, confirmation bias, and self-serving biases.8
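Why the probability of the data under the alternative hypothesis matters so much can be shown with a small numerical sketch (the numbers are invented for illustration and do not come from the studies cited): by Bayes’ rule, how much a piece of evidence should shift belief depends not only on how likely the data are under the favored hypothesis, but also on how likely they are under its alternative.

```python
# How diagnostic evidence D is for hypothesis H depends on P(D|~H),
# the term people tend to neglect. Numbers here are illustrative only.

def posterior(prior, p_d_given_h, p_d_given_not_h):
    """Bayes' rule: P(H|D) = P(D|H)P(H) / [P(D|H)P(H) + P(D|~H)P(~H)]."""
    numerator = p_d_given_h * prior
    return numerator / (numerator + p_d_given_not_h * (1 - prior))

prior = 0.5  # start undecided between H and ~H

# If D is rare under the alternative, belief in H should rise sharply.
strong = posterior(prior, 0.9, 0.10)  # -> 0.9

# If D is almost as likely under the alternative, D is barely
# diagnostic, even though P(D|H) is just as high as before.
weak = posterior(prior, 0.9, 0.85)    # -> about 0.51

assert strong > 0.85
assert weak < 0.55
```

The “think of the opposite” habit works, on this analysis, because it prompts people to ask how probable the evidence would be if their favored hypothesis were false—the second argument in the denominator above.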

  Various aspects of probabilistic thinking represent mindware of great generality and potency. However, as any person who has ever taught a statistics course can attest (your present author included), some of these insights are counterintuitive and unnatural for people—particularly in their application. There is nevertheless still some evidence that they are indeed teachable—albeit with somewhat more effort and difficulty than strategies such as disjunctive reasoning or considering alternative hypotheses. Aspects of scientific thinking necessary to infer a causal relationship are also definitely teachable.9 Other strategies of great generality may be easier to learn—particularly by those of lower intelligence. For example, psychologist Peter Gollwitzer has discussed an action strategy of extremely wide generality—the use of implementation intentions.10 An implementation intention is formed when the individual marks the cue-action sequence with the conscious, verbal declaration: “when X occurs, I will do Y.” Often with the aid of the context-fixing properties of language,11 the triggering of this cue-action sequence on just a few occasions is enough to establish it in the autonomous mind. Finally, research has shown that an even more minimalist cognitive strategy of forming mental goals (whether or not they have implementation intentions) can be efficacious. For example, people perform better in a task when they are told to form a mental goal (“set a specific, challenging goal for yourself”) for their performance as opposed to being given the generic motivational instructions (“do your best”).12

  We are often making choices that reduce our happiness because we find it hard to predict what will make us happy. For example, people often underestimate how quickly they will adapt to both fortunate and unfortunate events. Our imaginations fail at projecting the future. Psychologist Dan Gilbert cites evidence indicating that a remediating strategy in such situations might be to use a surrogate—someone who is presently undergoing the event whose happiness (or unhappiness) you are trying to simulate. For example, if you are wondering how you will react to “empty nest” syndrome, ask someone who has just had their last child leave for college rather than trying to imagine yourself in that situation. If you want to know how you will feel if your team is knocked out in the first round of the tournament, ask someone whose team has just been knocked out rather than trying to imagine it yourself. People tend not to want to use this mechanism because they think that their own uniqueness makes their guesses from introspection more accurate than the actual experiences of the people undergoing the event. People are simply skeptical about whether other people’s experiences apply to them. This is a form of egocentrism akin to the myside processing which I have discussed. Gilbert captures the irony of people’s reluctance to adopt the surrogate strategy by telling his readers: “If you are like most people, then like most people, you don’t know you’re like most people” (2006, p. 229).

  Much of the strategic mindware discussed so far represents learnable strategies in the domain of instrumental rationality (achieving one’s goals). Epistemic rationality (having beliefs well calibrated to the world) is often disrupted by contaminated mindware. However, even here, there are teachable macro-strategies that can reduce the probability of acquiring mindware that is harmful to its host. For example, the principle of falsifiability provides a wonderful inoculation against many kinds of nonfunctional beliefs. It is a tool of immense generality. It is taught in low-level methodology and philosophy of science courses, but could be taught much more broadly than this.13 Many pseudoscientific beliefs represent the presence of contaminated mindware. The critical thinking skills that help individuals to recognize pseudoscientific belief systems can be taught in high school courses.

  Finally, the language of memetics itself is therapeutic—a learnable mental tool that can help us become more conscious of the possibility that we are hosting contaminated mindware. One way the meme concept will aid in cognitive self-improvement is that by emphasizing the epidemiology of belief it will indirectly suggest to many (for whom it will be a new insight) the contingency of belief. By providing a common term for all cultural units, memetic science provides a neutral context for evaluating whether any belief serves our interests as humans. The very concept of the meme will suggest to more and more people that they need to engage in mindware examination.

 
