The Intelligence Trap


by David Robson


  Mayfield was freed the next day, and by the end of the month, the FBI would be forced to issue a humiliating public apology.

  What went wrong? Of all the potential explanations, a simple lack of skill cannot be the answer: the FBI’s forensics teams are considered to be the best in the world.5 Indeed, a closer look at the FBI’s mistakes reveals that they did not occur despite its examiners’ knowledge – they may have occurred because of it.

  The previous chapters have examined how general intelligence – the capacity for abstract reasoning measured by IQ or SATs – can backfire. The emphasis here should be on the word general, though, and you might hope we could mitigate those errors through more specialised knowledge and professional expertise, cultivated through years of experience. Unfortunately, the latest research shows that these can also lead us to err in unexpected ways.

  These discoveries should not be confused with some of the vaguer criticisms that academics (such as Paul Frampton) live in an ‘ivory tower’ isolated from ‘real life’. Instead, the latest research highlights dangers in the exact situations where most people would hope that experience protects you from mistakes.

  If you are undergoing heart surgery, flying across the globe or looking to invest an unexpected windfall, you want to be in the care of a doctor, pilot or accountant with a long and successful career behind them. If you want an independent witness to verify a fingerprint match in a high-profile case, you choose Moses. Yet there are now various social, psychological and neurological reasons that explain why expert judgement sometimes fails at the times when it is most needed – and the sources of these errors are intimately entwined with the very processes that normally allow experts to perform so well.

  ‘A lot of the cornerstones, the building blocks that make the expert an expert and allow them to do their job efficiently and quickly, also entail vulnerabilities: you can’t have one without the other,’ explains the cognitive neuroscientist Itiel Dror at University College London, who has been at the forefront of much of this research. ‘The more expert you are, the more vulnerable you are in many ways.’

  Clearly experts will still be right the majority of times, but when they are wrong, it can be disastrous, and a clear understanding of the overlooked potential for expert error is essential if we are to prevent those failings.

  As we shall soon discover, those frailties blinded the FBI examiners’ judgement – bringing about the string of bad decisions that led to Mayfield’s arrest. In aviation they have led to the unnecessary deaths of pilots and civilians, and in finance they contributed to the 2008 financial crisis.

  Before we examine that research, we first need to consider some core assumptions. One potential source of expert error could be a sense of over-confidence. Perhaps experts over-reach themselves, believing their powers are infallible? The idea would seem to fit with the descriptions of the bias blind spot that we explored in the last chapter.

  Until recently, however, the bulk of the scientific research suggested the opposite was true: it’s the incompetents who have an inflated view of their abilities. Consider a classic study by David Dunning at the University of Michigan and Justin Kruger at New York University. Dunning and Kruger were apparently inspired by the unfortunate case of McArthur Wheeler, who attempted to rob two banks in Pittsburgh in 1995. He committed the crimes in broad daylight, and the police arrested him just hours later. Wheeler was genuinely perplexed. ‘But I wore the juice!’ he apparently exclaimed. Wheeler, it turned out, believed a coating of lemon juice (the basis of invisible ink) would make him imperceptible on the CCTV footage.6

  From this story, Dunning and Kruger wondered if ignorance often comes hand in hand with over-confidence, and set about testing the idea in a series of experiments. They gave students tests on grammar and logical reasoning, and then asked them to rate how well they thought they had performed. Most people misjudged their own abilities, but this was particularly true for the people who performed the most poorly. In technical terms, their confidence was poorly calibrated – they simply had no idea just how bad they were. Crucially, Dunning and Kruger found that they could reduce that over-confidence by offering training in the relevant skills. Not only did the participants get better at what they did; their increased knowledge also helped them to understand their limitations.7
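
  To see what poorly ‘calibrated’ confidence means in practice, here is a minimal sketch in Python – with invented numbers, not the researchers’ data or method – of how a Dunning–Kruger-style pattern can emerge when every self-estimate is pulled towards a flattering ‘better than average’ anchor:

```python
import random

random.seed(42)  # reproducible toy data

def simulate_students(n=1000):
    """Toy model of miscalibrated self-assessment.

    True scores are uniform on 0-100; each self-estimate is pulled
    halfway towards an inflated anchor of 65 (an invented figure),
    so the weakest performers overestimate themselves the most.
    """
    students = []
    for _ in range(n):
        true_score = random.uniform(0, 100)
        estimate = 0.5 * true_score + 0.5 * 65 + random.gauss(0, 5)
        students.append((true_score, max(0.0, min(100.0, estimate))))
    return students

def calibration_by_quartile(students):
    """Print average true vs estimated scores for each performance quartile."""
    ranked = sorted(students)           # sorts by true score
    size = len(ranked) // 4
    for i in range(4):
        group = ranked[i * size:(i + 1) * size]
        avg_true = sum(t for t, _ in group) / len(group)
        avg_est = sum(e for _, e in group) / len(group)
        print(f"Quartile {i + 1}: true score {avg_true:5.1f}, "
              f"self-estimate {avg_est:5.1f}")

calibration_by_quartile(simulate_students())
```

  Run it and the bottom quartile vastly overrates itself while the top quartile underrates itself – the same poor calibration, worst among the weakest performers, that Dunning and Kruger observed.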

  Since Dunning and Kruger first published their study in 1999, the finding has been replicated many times, across many different cultures.8 One survey of thirty-four countries – from Australia to Germany, and Brazil to South Korea – examined the maths skills of fifteen-year-old students; once again, the least able were often the most over-confident.9

  Unsurprisingly, the press have been quick to embrace the ‘Dunning–Kruger Effect’, declaring that it is the reason why ‘losers have delusions of grandeur’ and ‘why incompetents think they are awesome’ and citing it as the cause of President Donald Trump’s more egotistical statements.10

  The Dunning–Kruger Effect should have an upside, though. Although it may be alarming when someone who is highly incompetent but confident reaches a position of power, it does at least reassure us that education and training work as we would hope, improving not just our knowledge but our metacognition and self-awareness. This was, incidentally, Bertrand Russell’s thinking in an essay called ‘The Triumph of Stupidity’ in which he declared that ‘the fundamental cause of the trouble is that in the modern world the stupid are cocksure while the intelligent are full of doubt’.

  Unfortunately, these discoveries do not paint the whole picture. In charting the shaky relationship between perceived and actual competence, these experiments had focused on general skills and knowledge, rather than the more formal and extensive study that comes with a university degree, for example.11 And when you do investigate people with an advanced education, a more unsettling vision of the expert brain begins to emerge.

  In 2010, a group of mathematicians, historians and athletes were tasked with identifying certain names that represented significant figures within each discipline. They had to discern whether Johannes de Groot or Benoit Theron were famous mathematicians, for instance, and they could answer, Yes, No, or Don’t Know. As you might hope, the experts were better at picking out the right people (such as Johannes de Groot, who really was a mathematician) if they fell within their discipline. But they were also more likely to say they recognised the made-up figures (in this case, Benoit Theron).12 When their self-perception of expertise was under question, they would rather take a guess and ‘over-claim’ the extent of their knowledge than admit their ignorance with a ‘don’t know’.

  Matthew Fisher at Yale University, meanwhile, quizzed university graduates on their college major for a study published in 2016. He wanted to check their knowledge of the core topics of the degree, so he first asked them to estimate how well they understood some of the fundamental principles of their discipline: a physicist might have been asked to gauge their understanding of thermodynamics; a biologist, to describe the Krebs cycle.

  Unbeknown to the participants, Fisher then sprang a surprise test: they now had to write a detailed description of the principles they claimed to know. Despite having declared a high level of knowledge, many stumbled and struggled to write a coherent explanation. Crucially, this was only true within the topic of their degree. When graduates also considered topics beyond their specialism, or more general, everyday subjects, their initial estimates tended to be far more realistic.13

  One likely reason is that the participants simply had not realised how much they might have forgotten since their degree (a phenomenon that Fisher calls meta-forgetfulness). ‘People confuse their current level of understanding with their peak level of knowledge,’ Fisher told me. And that may suggest a serious problem with our education. ‘The most cynical reading of it is that we’re not giving students knowledge that stays with them,’ Fisher said. ‘We’re just giving them the sense they know things, when they actually don’t. And that seems to be counter-productive.’

  The illusion of expertise may also make you more closed-minded. Victor Ottati at Loyola University in Chicago has shown that priming people to feel knowledgeable makes them less likely to seek or listen to the views of people who disagree with them.* Ottati notes that this makes sense when you consider the social norms surrounding expertise; we assume that an expert has already earned the credentials to stick to their opinions, a tendency he calls ‘earned dogmatism’.14

  * The Japanese, incidentally, have encoded these ideas in the word shoshin, which encapsulates the fertility of the beginner’s mind and its readiness to accept new ideas. As the Zen monk Shunryu Suzuki put it in the 1970s: ‘In the beginner’s mind there are many possibilities; in the expert’s, there are few.’

  In many cases, of course, experts really may have better justifications to think what they do. But if they over-estimate their own knowledge – as Fisher’s work might suggest – and then stubbornly refuse to seek or accept another’s opinion, they may quickly find themselves out of their depth.

  Ottati speculates that this fact could explain why some politicians become more entrenched in their opinions and fail to update their knowledge or seek compromise – a state of mind he describes as ‘myopic over-self-confidence’.

  Earned dogmatism might also further explain the bizarre claims of the scientists with ‘Nobel Disease’ such as Kary Mullis. Subrahmanyan Chandrasekhar, the Nobel Prize-winning Indian-American astrophysicist, observed this tendency in his colleagues. ‘These people have had great insights and made profound discoveries. They imagine afterwards that the fact that they succeeded so triumphantly in one area means they have a special way of looking at science that must be right. But science doesn’t permit that. Nature has shown over and over again that the kinds of truth which underlie nature transcend the most powerful minds.’15

  Inflated self-confidence and earned dogmatism are just the start of the expert’s flaws, and to understand the FBI’s errors, we have to delve deeper into the neuroscience of expertise and the ways that extensive training can permanently change our brain’s perception – for good and bad.

  The story begins with a Dutch psychologist named Adriaan de Groot, who is sometimes considered a pioneer of cognitive psychology. Beginning his career during the Second World War, de Groot had been something of a prodigious talent at school and university – showing promise in music, mathematics and psychology – but the tense political situation on the eve of the war offered few opportunities to pursue academia after graduation. Instead, de Groot found himself scraping together a living as a high-school teacher, and later, as an occupational psychologist for a railway company.16

  De Groot’s real passion was chess, however. A highly talented player, he had represented his country at an international tournament in Buenos Aires,17 and he decided to interview other players about their strategies to see if they could reveal the secrets of exceptional performance.18 He began by showing them a sample chess board before asking them to talk through their mental strategies as they decided on the next move.

  De Groot had initially suspected that their talents might arise from the brute force of their mental calculations: perhaps they were simply better at crunching the possible moves and simulating the consequences. This didn’t seem to be the case, however: the experts didn’t report having cycled through many positions, and they often made up their minds within a few seconds, which would not have given them enough time to consider the different strategies.

  Follow-up experiments revealed that the players’ apparent intuition was in fact an astonishing feat of memory, achieved through a process that is now known as ‘chunking’. The expert player stops seeing the game in terms of individual pieces and instead breaks the board into bigger units – or ‘complexes’ – of pieces. In the same way that words can be combined into larger sentences, those complexes can then form templates or psychological scripts known as ‘schemas’, each of which represents a different situation and strategy. This transforms the board into something meaningful, and it is thought to be the reason that some chess grandmasters can play multiple games simultaneously – even while blindfolded. The use of schemas significantly reduces the processing workload for the player’s brain; rather than computing each potential move from scratch, experts search through a vast mental library of schemas to find the move that fits the board in front of them.
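
  For readers who think in code, the trade-off between schema lookup and raw calculation can be caricatured in a few lines of Python. Everything here – the piece clusters, the stock responses, the library itself – is invented for illustration; it is a sketch of the lookup idea, not a model of the brain:

```python
# Each 'schema' maps a recognisable cluster (chunk) of pieces to a
# pre-learnt response. Both the patterns and the responses are invented.
SCHEMA_LIBRARY = {
    frozenset({("K", "g1"), ("R", "f1"), ("P", "g2"), ("P", "h2")}):
        "castled-king structure: keep the pawn shield intact",
    frozenset({("Q", "h5"), ("B", "c4")}):
        "queen-and-bishop battery: press the attack on f7",
}

def suggest_move(board):
    """Return the first stock response whose chunk is present on the board."""
    pieces = frozenset(board)
    for chunk, response in SCHEMA_LIBRARY.items():
        if chunk <= pieces:  # every piece of the chunk is on the board
            return response
    return "no schema fits: fall back to slow move-by-move calculation"

position = [("Q", "h5"), ("B", "c4"), ("K", "e1"), ("P", "e4")]
print(suggest_move(position))  # recognises the queen-and-bishop pattern
```

  The design captures de Groot’s point: recognition is a cheap match against stored chunks, and the expensive move-by-move search is only a fallback.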

  De Groot noted that over time the schemas can become deeply ‘engrained in the player’, meaning that the right solution may come to mind automatically with just a mere glance at the board, which neatly accounts for those phenomenal flashes of brilliance that we have come to associate with expert intuition. Automatic, engrained behaviours also free up more of the brain’s working memory, which might explain how experts operate in challenging environments. ‘If this were not the case,’ de Groot later wrote, ‘it would be completely impossible to explain why some chess players can still play brilliantly while under the influence of alcohol.’19

  De Groot’s findings would eventually offer a way out of his tedious jobs at the high school and railway, earning him a doctorate from the University of Amsterdam. And they have since inspired countless other studies in many domains – explaining the talent of everyone from Scrabble and poker champions to the astonishing performances of elite athletes like Serena Williams, and the rapid coding of world-class computer programmers.20

  Although the exact processes will differ depending on the particular skill, in each case the expert is benefiting from a vast library of schemas that allows them to extract the most important information, recognise the underlying patterns and dynamics, and react with an almost automatic response from a pre-learnt script.21

  This theory of expertise may also help us to understand less celebrated talents, such as the extraordinary navigation of London taxi drivers through the city’s 25,000 streets. Rather than remembering the whole cityscape, they have built schemas of known routes, so that the sight of a landmark will immediately suggest the best path from A to B, depending on the traffic at hand – without them having to recall and process the entire map.22
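
  In computing terms, this looks less like running a shortest-path search over the whole street map and more like consulting a cache of pre-learnt routes. The sketch below uses hypothetical landmarks and streets to make the analogy concrete:

```python
# Pre-learnt route 'schemas': all routes below are invented for illustration.
ROUTE_SCHEMAS = {
    ("Marble Arch", "King's Cross"): ["Oxford St", "High Holborn", "Gray's Inn Rd"],
    ("Waterloo", "The City"): ["Stamford St", "Southwark St", "London Bridge"],
}

def search_whole_map(origin, destination):
    """Stand-in for a slow, exhaustive search over the full street network."""
    return [f"(laboriously computed route from {origin} to {destination})"]

def plan_route(origin, destination):
    """Recall a pre-learnt route instantly; search the map only as a fallback."""
    cached = ROUTE_SCHEMAS.get((origin, destination))
    if cached is not None:
        return cached  # instant recall: the map is never consulted
    return search_whole_map(origin, destination)

print(plan_route("Marble Arch", "King's Cross"))  # schema hit
print(plan_route("Marble Arch", "Canary Wharf"))  # no schema: slow fallback
```

  A cache like this is fast precisely because it never re-examines the map – a property that, as we are about to see, becomes a liability when the city itself changes.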

  Even burglars may operate using the same neural processes. Asking real convicts to take part in virtual reality simulations of their crimes, researchers have demonstrated that more experienced burglars have amassed a set of advanced schemas based on the familiar layouts of British homes, allowing them to automatically intuit the best route through a house and to alight on the most valuable possessions.23 As one prison inmate told the researchers: ‘The search becomes a natural instinct, like a military operation – it becomes routine.’24

  There is no denying that the expert’s intuition is a highly efficient way of working in the vast majority of situations they face – and it is often celebrated as a form of almost superhuman genius.

  Unfortunately, it can also come with costly sacrifices.

  One is flexibility: the expert may lean so heavily on existing behavioural schemas that they struggle to cope with change.25 When tested on their memories, experienced London taxi drivers appeared to struggle with the rapid development of Canary Wharf at the end of the twentieth century, for instance; they just couldn’t integrate the new landmarks and update their old mental templates of the city.26 Similarly, an expert games champion will find it harder to learn a new set of rules and an accountant will struggle to adapt to new tax laws. The same cognitive entrenchment can also limit creative problem solving if the expert fails to look beyond their existing schemas for new ways to tackle a challenge. They become entrenched in the familiar ways of doing things.

  The second sacrifice may be an eye for detail. As the expert brain chunks up the raw information into more meaningful components, and works at recognising broad underlying patterns, it loses sight of the smaller elements. This change has been recorded in real-time scans of expert radiologists’ brains: they tend to show greater activity in the areas of the temporal lobe associated with advanced pattern recognition and symbolic meaning, but less activity in regions of the visual cortex that are associated with combing over fine detail.27 The advantage will be the ability to filter out irrelevant information and reduce distraction, but this also means the expert is less likely to consider all the elements of a problem systematically, potentially causing them to miss important nuances that do not easily fit their mental maps.

  It gets worse. Expert decisions, based on gist rather than careful analysis, are also more easily swayed by emotions, expectations and cognitive biases such as framing and anchoring.28 The upshot is that training may have actually reduced their rationality quotient. ‘The expert’s mindset – based on what they expect, what they hope, whether they are in a good mood or bad mood that day – affects how they look at the information,’ Itiel Dror told me. ‘And the brain mechanisms – the actual cognitive architecture – that give an expert their expertise are especially vulnerable to that.’

  The expert could, of course, override their intuitions and return to a more detailed, systematic analysis. But often they are completely unaware of the danger – they have the bias blind spot that we observed in Chapter 2.29 The result is a kind of ceiling on their accuracy: among experts, these bias-driven errors come to outnumber those arising from ignorance or inexperience. When that fallible, gist-based processing is combined with over-confidence and ‘earned dogmatism’, it gives us one final form of the intelligence trap – and the consequences can be truly devastating.

  The FBI’s handling of the Madrid bombings offers the perfect example of these processes in action. Matching fingerprints is an extraordinarily complex job, with analyses based on three levels of increasingly intricate features, from broad patterns, such as whether your prints have a left- or right-facing swirl, a whorl or an arch, to the finer details of the ridges in your skin – whether a particular line splits in two, breaks into fragments, forms a loop called an ‘eye’ or ends abruptly. Overall, examiners may aim to detect around ten identifying features.
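
  Even in caricature, the logic of that final tally can be put into code. The sketch below is a deliberate oversimplification – the minutiae are invented, the equality test is crude, and the ten-feature threshold is just the rough figure quoted above – since real examination also weighs ridge flow, print quality and the spatial relationships between features:

```python
# Each minutia: (feature_type, x, y) on a shared coordinate grid (invented data).
latent_print = {("bifurcation", 12, 40), ("ridge_ending", 30, 22),
                ("eye", 45, 51), ("bifurcation", 52, 18)}

suspect_print = {("bifurcation", 12, 40), ("ridge_ending", 30, 22),
                 ("eye", 45, 51), ("ridge_ending", 70, 9)}

def matching_features(print_a, print_b, required=10):
    """Count shared minutiae and report whether they reach the threshold."""
    shared = print_a & print_b
    return len(shared), len(shared) >= required

count, declared_match = matching_features(latent_print, suspect_print)
print(f"{count} features in common; match declared: {declared_match}")
```

  What the toy version leaves out is exactly where the danger lies: in real casework, deciding whether two smudged features ‘match’ at all is itself a judgement call – and, as we shall see, one that is open to every expectation and bias described above.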

 
