
The Intelligence Trap


by David Robson


  Houdini later wrote about his scepticism in an article for the New York Sun. It was the start of an increasingly public dispute between the two men, and their friendship never recovered before the escapologist’s death four years later.2

  Even then, Conan Doyle could not let the matter rest. Egged on, perhaps, by his ‘spirit guide’ Phineas, he attempted to address and dismiss all of Houdini’s doubts in an article for The Strand magazine. His reasoning was more fanciful than any of his fictional works, not least in claiming that Houdini himself was in command of a ‘dematerialising and reconstructing force’ that allowed him to slip in and out of chains.

  ‘Is it possible for a man to be a very powerful medium all his life, to use that power continually, and yet never to realise that the gifts he is using are those which the world calls mediumship?’ he wrote. ‘If that be indeed possible, then we have a solution of the Houdini enigma.’

  Meeting these two men for the first time, you would have been forgiven for expecting Conan Doyle to be the more critical thinker. A doctor of medicine and a best-selling writer, he exemplified the abstract reasoning that Terman was just beginning to measure with his intelligence tests. Yet it was the professional illusionist, a Hungarian immigrant whose education had ended at the age of twelve, who could see through the fraud.

  Some commentators have wondered whether Conan Doyle was suffering from a form of madness. But let’s not forget that many of his contemporaries believed in spiritualism – including scientists such as the physicist Oliver Lodge, whose work on electromagnetism brought us the radio, and the naturalist Alfred Russel Wallace, a contemporary of Charles Darwin who had independently conceived the theory of natural selection. Both were formidable intellectual figures, but they remained blind to any evidence debunking the paranormal.

  We’ve already seen how our definition of intelligence could be expanded to include practical and creative reasoning. But those theories do not explicitly examine our rationality, defined as our capacity to make the optimal decisions needed to meet our goals, given the resources we have to hand, and to form beliefs based on evidence, logic and sound reasoning.*

  * Cognitive scientists such as Keith Stanovich describe two classes of rationality. Instrumental rationality is defined as ‘the optimisation of someone’s goal fulfilment’, or, less technically, as ‘behaving so that you get exactly what you want, given the resources available to you’. Epistemic rationality, meanwhile, concerns ‘how well your beliefs map onto the actual structure of the world’. By falling for fraudulent mediums, Conan Doyle was clearly lacking in the latter.

  While decades of psychological research have documented humanity’s more irrational tendencies, it is only relatively recently that scientists have started to measure how that irrationality varies between individuals, and whether that variance is related to measures of intelligence. They are finding that the two are far from perfectly correlated: it is possible to have a very high SAT score that demonstrates good abstract thinking, for instance, while still performing badly on these new tests of rationality – a mismatch known as ‘dysrationalia’.

  Conan Doyle’s life story – and his friendship with Houdini, in particular – offers the perfect lens through which to view this cutting-edge research.3 I certainly wouldn’t claim that any kind of faith is inherently irrational, but I am interested in the fact that fraudsters were able to exploit Conan Doyle’s beliefs to fool him time after time. He was simply blind to the evidence, including Houdini’s testimonies. Whatever your views on paranormal belief in general, he did not need to be quite so gullible at such great personal cost.

  Conan Doyle is particularly fascinating because we know, through his writing, that he was perfectly aware of the laws of logical deduction. Indeed, he started to dabble in spiritualism at the same time that he first created Sherlock Holmes:4 he was dreaming up literature’s greatest scientific mind during the day, but failed to apply those skills of deduction at night. If anything, his intelligence seems to have only allowed him to come up with increasingly creative arguments to dismiss the sceptics and justify his beliefs; he was bound more tightly than Houdini in his chains.

  Besides Conan Doyle, many other influential thinkers of the last hundred years may have also been afflicted by this form of the intelligence trap. Even Einstein – whose theories are often taken to be the pinnacle of human intelligence – may have suffered from this blinkered reasoning, leading him to waste the last twenty-five years of his career with a string of embarrassing failures.

  Whatever your specific situation and interests, this research will explain why so many of us make mistakes that are blindingly obvious to all those around us – and continue to make those errors long after the facts have become apparent.

  Houdini himself seems to have intuitively understood the vulnerability of the intelligent mind. ‘As a rule, I have found that the greater brain a man has, and the better he is educated, the easier it has been to mystify him,’ he once told Conan Doyle.5

  A true recognition of dysrationalia – and its potential for harm – has taken decades to blossom, but the roots of the idea can be found in the now legendary work of two Israeli researchers, Daniel Kahneman and Amos Tversky, who identified many cognitive biases and heuristics (quick-and-easy rules of thumb) that can skew our reasoning.

  One of their most striking experiments asked participants to spin a ‘wheel of fortune’, which landed on a number between 1 and 100, before considering general knowledge questions – such as estimating the number of African countries that are represented in the UN. The wheel of fortune should, of course, have had no influence on their answers – but the effect was quite profound. The lower the quantity on the wheel, the smaller their estimate – the arbitrary value had planted a figure in their mind, ‘anchoring’ their judgement.6

  You have probably fallen for anchoring yourself many times while shopping in the sales. Suppose you are looking for a new TV. You had expected to pay around £100, but then you find a real bargain: a £200 item reduced to £150. Seeing the original price anchors your perception of what is an acceptable price to pay, meaning that you will go above your initial budget. If, on the other hand, you had not seen the original price, you would have probably considered it too expensive, and moved on.

  You may also have been prey to the availability heuristic, which causes us to over-estimate certain risks based on how easily the dangers come to mind, thanks to their vividness. It’s the reason that many people are more worried about flying than driving – because reports of plane crashes are often so much more emotive, despite the fact that it is actually far more dangerous to step into a car.

  There is also framing: the fact that you may change your opinion based on the way information is phrased. Suppose you are considering a medical treatment for 600 people with a deadly illness and it has a 1 in 3 success rate. You can be told either that ‘200 people will be saved using this treatment’ (the gain framing) or that ‘400 people will die using this treatment’ (the loss framing). The statements mean exactly the same thing, but people are more likely to endorse the statement when it is presented in the gain framing; they passively accept the facts as they are given to them without thinking what they really mean. Advertisers have long known this: it’s the reason that we are told that foods are 95 per cent fat free (rather than being told they are ‘5 per cent fat’).

  Other notable biases include the sunk cost fallacy (our reluctance to give up on a failing investment even if we will lose more trying to sustain it), and the gambler’s fallacy – the belief that if the roulette wheel has landed on black, it is more likely to land on red next time. The probability, of course, stays exactly the same. An extreme case of the gambler’s fallacy is said to have been observed in Monte Carlo in 1913, when the roulette wheel fell twenty-six times on black – and the visitors lost millions as the bets on red escalated. But it is not just witnessed in casinos; it may also influence family planning. Many parents falsely believe that if they have already produced a line of sons, then a daughter is more likely to come next. With this logic, they may end up with a whole football team of boys.
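The fallacy is easy to check for yourself. The sketch below is a minimal simulation of my own devising (it simplifies roulette to an even-money red/black spin, ignoring the green zero, and uses a three-black streak): it estimates how often red comes up immediately after a run of blacks.

```python
import random

random.seed(0)  # fixed seed so the run is reproducible
spins = [random.choice(["black", "red"]) for _ in range(100_000)]

# Count how often red follows a run of three consecutive blacks.
streaks = reds_after_streak = 0
for i in range(3, len(spins)):
    if spins[i - 3:i] == ["black", "black", "black"]:
        streaks += 1
        if spins[i] == "red":
            reds_after_streak += 1

# Stays close to 0.5: the preceding streak changes nothing.
print(round(reds_after_streak / streaks, 2))
```

However long the streak you condition on, the estimate hovers around one half – each spin is independent of the last.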

  Given these findings, many cognitive scientists divide our thinking into two categories: ‘system 1’, intuitive, automatic, ‘fast thinking’ that may be prey to unconscious biases; and ‘system 2’, ‘slow’, more analytical, deliberative thinking. According to this view – called dual-process theory – many of our irrational decisions come when we rely too heavily on system 1, allowing those biases to muddy our judgement.

  Yet none of the early studies by Kahneman and Tversky had tested whether our irrationality varies from person to person. Are some people more susceptible to these biases, while others are immune, for instance? And how do those tendencies relate to our general intelligence? Conan Doyle’s story is surprising because we intuitively expect more intelligent people, with their greater analytical minds, to act more rationally – but as Tversky and Kahneman had shown, our intuitions can be deceptive.

  If we want to understand why smart people do stupid things, these are vital questions.

  During a sabbatical at the University of Cambridge in 1991, a Canadian psychologist called Keith Stanovich decided to address these issues head on. With a wife specialising in learning difficulties, he had long been interested in the ways that some mental abilities may lag behind others, and he suspected that rationality would be no different. The result was an influential paper introducing the idea of dysrationalia as a direct parallel to other disorders like dyslexia and dyscalculia.

  It was a provocative concept – aimed as a nudge in the ribs to all the researchers examining bias. ‘I wanted to jolt the field into realising that it had been ignoring individual differences,’ Stanovich told me.

  Stanovich emphasises that dysrationalia is not just limited to system 1 thinking. Even if we are reflective enough to detect when our intuitions are wrong, and override them, we may fail to use the right ‘mindware’ – the knowledge and attitudes that should allow us to reason correctly.7 If you grow up among people who distrust scientists, for instance, you may develop a tendency to ignore empirical evidence, while putting your faith in unproven theories.8 Greater intelligence wouldn’t necessarily stop you forming those attitudes in the first place, and it is even possible that your greater capacity for learning might then cause you to accumulate more and more ‘facts’ to support your views.9

  Circumstantial evidence would suggest that dysrationalia is common. One study of the high-IQ society Mensa, for example, showed that 44 per cent of its members believed in astrology, and 56 per cent believed that the Earth had been visited by extra-terrestrials.10 But rigorous experiments, specifically exploring the link between intelligence and rationality, were lacking.

  Stanovich has now spent more than two decades building on those foundations with a series of carefully controlled experiments.

  To understand his results, we need some basic statistical theory. In psychology and other sciences, the relationship between two variables is usually expressed as a correlation coefficient whose magnitude lies between 0 and 1. A perfect correlation would have a value of 1 – the two parameters would essentially be measuring the same thing; this is unrealistic for most studies of human health and behaviour (which are determined by so many variables), but many scientists would consider a ‘moderate’ correlation to lie between 0.4 and 0.59.11
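To make those numbers concrete, here is a minimal sketch of how such a coefficient (Pearson’s r, the standard measure) is computed. The scores are my own invention, chosen purely to illustrate a perfect and a weak correlation – they are not Stanovich’s data.

```python
from math import sqrt

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length number sequences."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    sd_x = sqrt(sum((x - mean_x) ** 2 for x in xs))
    sd_y = sqrt(sum((y - mean_y) ** 2 for y in ys))
    return cov / (sd_x * sd_y)

# Two variables that measure the same thing line up perfectly: r = 1
print(round(pearson_r([1, 2, 3], [10, 20, 30]), 2))  # 1.0

# Invented test scores for five people: a weak relationship
iq_scores = [100, 110, 120, 130, 140]
rationality_scores = [53, 60, 50, 62, 55]
print(round(pearson_r(iq_scores, rationality_scores), 2))  # 0.19
```

A coefficient of 0.19 means that knowing one score tells you very little about the other – the kind of weak relationship at issue here.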

  Using these measures, Stanovich found that the relationships between rationality and intelligence were generally very weak. SAT scores revealed correlations of just 0.1 and 0.19 with measures of the framing bias and anchoring, respectively.12 Intelligence also appeared to play only a tiny role in the question of whether we are willing to delay immediate gratification for a greater reward in the future – a tendency known as ‘temporal discounting’. In one test, the correlation with SAT scores was as small as 0.02. That’s an extraordinarily modest correlation for a trait that many might assume comes hand in hand with a greater analytical mind. The sunk cost bias also showed almost no relationship to SAT scores in another study.13

  Gui Xue and colleagues at Beijing Normal University, meanwhile, have followed Stanovich’s lead, finding that the gambler’s fallacy is actually a little more common among the more academically successful participants in their sample.14 That’s worth remembering: when playing roulette, don’t think you are smarter than the wheel.

  Even trained philosophers are vulnerable. Participants with PhDs in philosophy are just as likely to suffer from framing effects, for example, as everyone else – despite the fact that they should have been schooled in logical reasoning.15

  You might at least expect that more intelligent people could learn to recognise these flaws. In reality, most people assume that they are less vulnerable than other people, and this is equally true of the ‘smarter’ participants. Indeed, in one set of experiments studying some of the classic cognitive biases, Stanovich found that people with higher SAT scores actually had a slightly larger ‘bias blind spot’ than people who were less academically gifted.16 ‘Adults with more cognitive ability are aware of their intellectual status and expect to outperform others on most cognitive tasks,’ Stanovich told me. ‘Because these cognitive biases are presented to them as essentially cognitive tasks, they expect to outperform on them as well.’

  From my interactions with Stanovich, I get the impression that he is extremely cautious about promoting his findings, meaning he has not achieved the same kind of fame as Daniel Kahneman, say – but colleagues within his field believe that these theories could be truly game-changing. ‘The work he has done is some of the most important research in cognitive psychology – but it’s sometimes underappreciated,’ agreed Gordon Pennycook, a professor at the University of Regina, Canada, who has also specialised in exploring human rationality.

  Stanovich has now refined and combined many of these measures into a single test, which is informally called the ‘rationality quotient’. He emphasises that he does not wish to devalue intelligence tests – they ‘work quite well for what they do’ – but to improve our understanding of these other cognitive skills that may also determine our decision making, and place them on an equal footing with the existing measures of cognitive ability.

  ‘Our goal has always been to give the concept of rationality a fair hearing – almost as if it had been proposed prior to intelligence’, he wrote in his scholarly book on the subject.17 It is, he says, a ‘great irony’ that the thinking skills explored in Kahneman’s Nobel Prize-winning work are still neglected in our most well-known assessment of cognitive ability.18

  After years of careful development and verification of the various sub-tests, the first iteration of the ‘Comprehensive Assessment of Rational Thinking’ was published at the end of 2016. Besides measures of the common cognitive biases and heuristics, it also included probabilistic and statistical reasoning skills – such as the ability to assess risk – that could improve our rationality, and questionnaires concerning contaminated mindware such as anti-science attitudes.

  For a taster, consider the following question, which aims to test the ‘belief bias’. Your task is to consider whether the conclusion follows, logically, based only on the opening two premises.

  All living things need water.

  Roses need water.

  Therefore, roses are living things.

  What did you answer? According to Stanovich’s work, 70 per cent of university students believe that this is a valid argument. But it isn’t, since the first premise only says that ‘all living things need water’ – not that ‘all things that need water are living’.

  If you still struggle to understand why that makes sense, compare it to the following statements:

  All insects need oxygen.

  Mice need oxygen.

  Therefore mice are insects.

  The logic of the two statements is exactly the same – but it is far easier to notice the flaw in the reasoning when the conclusion clashes with your existing knowledge. In the first example, however, you have to put aside your preconceptions and think, carefully and critically, about the specific statements at hand – to avoid thinking that the argument is right just because the conclusion makes sense with what you already know.19 That’s an important skill whenever you need to appraise a new claim.
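One way to see why the argument form fails is to build an explicit counterexample: a model world in which both premises hold but the conclusion does not. The sets below are my own invention, chosen only for illustration.

```python
# A hypothetical model world: some things need water without being alive.
living_things = {"rose", "oak", "mouse"}
needs_water = {"rose", "oak", "mouse", "concrete"}  # mixing concrete needs water too

# Premise 1 holds: all living things need water (living is a subset of needs_water).
assert living_things <= needs_water
# Premise 2 holds for 'concrete' just as it does for roses: it needs water.
assert "concrete" in needs_water
# Yet the conclusion fails: needing water does not make something living.
print("concrete" in living_things)  # False – so the argument form is invalid
```

Because one such world exists, the syllogism cannot be valid, no matter how plausible its conclusion sounds for roses.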

  When combining all these sub-tests, Stanovich found that the overall correlation with measures of general intelligence, such as SAT scores, was modest: around 0.47 on one test. Some overlap was to be expected, especially given the fact that several of these measures, such as probabilistic reasoning, would be aided by mathematical ability and other aspects of cognition measured by IQ tests and SATs. ‘But that still leaves enough room for the discrepancies between rationality and intelligence that lead to smart people acting foolishly,’ Stanovich said.

  With further development, the rationality quotient could be used in recruitment to assess the quality of a potential employee’s decision making; Stanovich told me that he has already had significant interest from law firms and financial institutions, and executive head-hunters.

  Stanovich hopes his test may also be a useful tool to assess how students’ reasoning changes over a school or university course. ‘This, to me, would be one of the more exciting uses,’ Stanovich said. With that data, you could then investigate which interventions are most successful at cultivating more rational thinking styles.

  While we wait to see that work in action, cynics may question whether RQ really does reflect our behaviour in real life. After all, the IQ test is sometimes accused of being too abstract. Is RQ – based on artificial, imagined scenarios – any different?

  Some initial answers come from the work of Wändi Bruine de Bruin at Leeds University. Inspired by Stanovich’s research, her team first designed their own scale of ‘adult decision-making competence’, consisting of seven tasks measuring biases like framing, measures of risk perception, and the tendency to fall for the sunk cost fallacy (whether you are likely to continue with a bad investment or not). The team also examined over-confidence by asking the subjects some general knowledge questions, and then asking them to gauge how sure they were that each answer was correct.

 
