
The Intelligence Trap


by David Robson


  Eye-tracking studies reveal that expert examiners often go through this process semi-automatically,30 chunking the picture in much the same way as de Groot’s chess grandmasters31 to identify the features that are considered the most useful for comparison. As a result, the points of identification may just jump out at the expert analyst, while a novice would have to systematically identify and check each one – making it exactly the kind of top-down decision making that can be swayed by bias.

  Sure enough, Dror has found that expert examiners are prone to a range of cognitive errors that may arise from such automatic processing. They were more likely to find a positive match if they were told a suspect had already confessed to the crime.32 The same was true when they were presented with emotive material, such as a gory picture of a murder victim. Although it should have had no bearing on their objective judgement, the examiners were again more likely to link the fingerprints, perhaps because they felt more motivated and determined to catch the culprit.33 Dror points out that this is a particular problem when the available data is ambiguous and messy – and that was exactly the problem with the evidence from Madrid. The fingerprint had been left on a crumpled carrier bag; it was smeared and initially difficult to read.

  The FBI had first run the fingerprint through a computer analysis to find potential suspects among their millions of recorded prints, and Mayfield’s name appeared as the fourth of twenty possible suspects. At this stage, the FBI analysts apparently had no idea of his background – his print was only on file from a teenage brush with the law. But it seems likely that they were hungry for a match, and once they settled on Mayfield they became more and more invested in their choice – despite serious signs that they had made the wrong decision.

  While the examiners had indeed identified around fifteen points of similarity in the fingerprints, they had consistently ignored significant differences. Most spectacularly, a whole section of the latent print – the upper left-hand portion – failed to match Mayfield’s index finger. The examiners had argued that this area might have come from someone else’s finger – from a person who had touched the bag at another time; or maybe it came from Mayfield himself, leaving another print super-imposed on the first one to create a confusing pattern. Either way, they decided they could exclude that anomalous section and simply focus on the part that looked most like Mayfield’s.

  If the anomalous section had come from another finger, however, you would expect to see tell-tale signs. The two fingers would have been at different angles, for instance, meaning that the ridges would be overlapping and criss-crossing. You might also expect that the two fingers would have touched the bag with varying pressure, affecting the appearance of the impressions left behind; one section might have seemed fainter than the other. Neither sign was present in this case.

  For the FBI’s story to make sense, the two people would have had to grip the bag with exactly the same force, and their prints would have had to align miraculously. The chances of that happening were tiny. The much likelier explanation was that the print came from a single finger – and that finger was not Mayfield’s.

  These were not small subtleties but glaring holes in the argument. A subsequent report by the Office of the Inspector General (OIG) found that the examiners’ complete neglect of this possibility was unwarranted. ‘The explanation required the examiners to accept an extraordinary set of coincidences’, the OIG concluded.34 Given those discrepancies, some independent fingerprint examiners reviewing the case concluded that Mayfield should have been ruled out right away.35

  Nor was this the only example of such circular reasoning in the FBI’s case: the OIG found that across the whole of their analysis, the examiners appeared far more likely to dismiss or ignore any points of interest that disagreed with their initial hunch, while showing far less scrutiny for details that appeared to suggest a match.

  The two marked-up prints above, taken from the freely available OIG report, show just how many errors they made. The Madrid print is on the left; Mayfield’s is on the right. Admittedly the errors are hard to see for a complete novice, but if you look very carefully you can make out some notable features that are present in one but not the other.

  The OIG concluded that this was a clear case of the confirmation bias, but given what we have learnt from the research on top-down processing and the selective attention that comes with expertise, it is possible that the examiners weren’t even seeing those details in the first place. They were almost literally blinded by their expectations.

  These failings could have been uncovered with a truly independent analysis. But although the prints moved through multiple examiners, each one knew their colleague’s conclusions, swaying their judgement. (Dror calls this a ‘bias cascade’.36) This also spread to the officers performing the covert surveillance of Mayfield and his family, who even mistook his daughter’s Spanish homework for travel documents placing him in Madrid at the time of the attack.

  Those biases will only have been strengthened once the FBI looked into Mayfield’s past and discovered that he was a practising Muslim, and that he had once represented one of the Portland Seven terrorists in a child custody case. In reality, it had no bearing on his presumed guilt.37

  The FBI’s confidence was so great that they ignored additional evidence from Spain’s National Police (the SNP). By mid-April the SNP had tried and failed to verify the match, yet the FBI lab quickly disregarded their concerns. ‘They had a justification for everything,’ Pedro Luis Mélida Lledó, head of the fingerprint unit for the SNP, told the New York Times shortly after Mayfield was exonerated.38 ‘But I just couldn’t see it.’

  Records of the FBI’s internal emails confirm that the examiners were unshaken by the disagreement. ‘I spoke with the lab this morning and they are absolutely confident that they have the match to the print – No doubt about it!!!!!’ one FBI agent wrote. ‘They will testify in any court you swear them into.’39

  That complete conviction may have landed Mayfield in Guantanamo Bay – or on death row – if the SNP had not succeeded in finding their own evidence that he was innocent. A few weeks after the original bombings, they raided a house in suburban Madrid. The suspects detonated a suicide bomb rather than submitting to arrest, but the police managed to uncover documents bearing the name of Ouhnane Daoud: an Algerian national, whose prints had been on record for an immigration violation. Mayfield was released, and within a week, he was completely exonerated of any connection to the attack. Challenging the lawfulness of his arrest, he eventually received $2 million in compensation.

  The lesson here is not just psychological, but social. Mayfield’s case perfectly illustrates the ways that the over-confidence of experts themselves, combined with our blind faith in their talents, can amplify their biases – with potentially devastating effect. The chain of failures within the FBI and the courtroom should not have been able to escalate so rapidly, given the lack of evidence that Mayfield had even left the country.

  With this knowledge in mind, we can begin to understand why some existing safety procedures – although often highly effective – nevertheless fail to protect us from expert error.

  Consider aviation. Commonly considered one of the most reliable industries on Earth, it already makes use of numerous safety nets to catch any momentary lapses of judgement by airports and pilots. The use of checklists as reminders of critical procedures – now common in many other sectors – originated in the cockpit to ensure, for instance, safer take-offs and landings.

  Yet these strategies do not account for the blind spots that specifically arise from expertise. With experience, the safety procedures are simply integrated into the pilot’s automatic scripts and shrink from conscious awareness. The result, according to one study of nineteen serious accidents, is ‘an insidious move towards less conservative judgement’, and it has led to deaths in cases where the pilot’s knowledge should have protected them from error.40

  This was evident at Blue Grass Airport in Lexington, Kentucky, on 27 August 2006. Comair Flight 5191 had been due to take off from runway 22 around 6 a.m., but the pilot lined up on a shorter runway. Thanks to the biases that came with their extensive experience, both the pilot and co-pilot missed all the warning signs that they were in the wrong place. The plane smashed through the perimeter fence, before ricocheting off an embankment, crashing into a pair of trees, and bursting into flames. Forty-seven passengers – and the pilot – died as a result.41

  The curse of expertise in aviation doesn’t end there. As we saw with the FBI’s forensic scientists, experimental studies have shown that a pilot’s expertise may even influence their visual perception – causing them to under-estimate the depth of cloud in a storm, for instance, based on their prior expectations.42

  The intelligence trap shows us that it’s not good enough to be foolproof; procedures need to be expert-proof too. The nuclear power industry is one of the few sectors to account for the automatisation that comes with experience, with some plants routinely switching the order of procedures in their safety checks to prevent inspectors from working on auto-pilot. Many other industries, including aviation, could learn the same lesson.43

  A greater appreciation of the curse of expertise – and the virtues of ignorance – can also explain how some organisations weather chaos and uncertainty, while others crumble in the changing wind.

  Consider a study by Rohan Williamson of Georgetown University, who recently examined the fortunes of banks during financial crises. He was interested in the roles of ‘independent directors’ – people recruited from outside the organisation to advise the management. The independent director is meant to offer a form of self-regulation, which should require a certain level of expertise, and many do indeed come from other financial institutions. Due to the difficulties of recruiting a qualified expert without any other conflicting interests, however, some of the independent directors may be drawn from other areas of business, meaning they may lack the more technical knowledge of the processes involved in the bank’s complex transactions.

  Bodies such as the Organisation for Economic Cooperation and Development (OECD) had previously argued that this lack of financial expertise may have contributed to the 2008 financial crisis.44

  But what if they’d got it the wrong way around, and this ignorance was actually a virtue? To find out, Williamson examined the data from 100 banks before and after the crisis. Until 2006, the results were exactly as you might expect if you assume that greater knowledge always aids decision making: banks with an expert board performed slightly better than those with fewer (or no) independent directors holding a background in finance, since they were more likely to endorse risky strategies that paid off.

  Their fortunes took a dramatic turn after the financial crash, however; now it was the banks with the least expertise that performed better. The ‘expert’ board members, so deeply embedded in their already risky decision making, didn’t pull back and adapt their strategy, while the less knowledgeable independent directors were less entrenched and biased, allowing them to reduce the banks’ losses as they guided them through the crash.45

  Although this evidence comes from finance – an area not always respected for its rationality – the lessons could be equally valuable for any area of business. When the going gets tough, the less experienced members of your team may well be the best equipped to guide you out of the mess.

  In forensic science, at least, there has been some movement to mitigate the expert errors behind the FBI’s investigations into Brandon Mayfield.

  ‘Before Brandon Mayfield, the fingerprint community really liked to explain any errors in the language of incompetence,’ says the UCLA law professor Jennifer Mnookin. ‘Brandon Mayfield opened up a space for talking about the possibility that really good analysts, using their methods correctly, could make a mistake.’46

  Itiel Dror has been at the forefront of the work detailing these potential errors in forensic judgements, and recommending possible measures that could mitigate the effects. For example, he advocates more advanced training that includes a cognitively informed discussion of bias, so that every forensic scientist is aware of the ways their judgement may be swayed, and practical ways to minimise these influences. ‘Like an alcoholic at an AA meeting, acknowledging the problem is the first step in the solution,’ he told me.

  Another requirement is that forensic analysts make their judgements ‘blind’, without any information beyond the direct evidence at hand, so that they are not influenced by expectation but see the evidence as objectively as possible. This is especially crucial when seeking a second opinion: the second examiner should have no knowledge of the first judgement.

  The evidence itself must be presented in the right way and in the right sequence, using a process that Itiel Dror calls ‘Linear Sequential Unmasking’ to avoid the circular reasoning that had afflicted the examiners’ judgement after the Madrid bombings.47 For instance, the examiners should first mark up the latent print left on the scene before even seeing the suspect’s fingerprint, giving them predetermined points of comparison. And they should not receive any information about the context of a case before making their forensic judgement of the evidence. This system is now used by the FBI and other agencies and police departments across the United States and other countries.

  Dror’s message was not initially welcomed by the experts he has studied; during our conversation at London’s Wellcome Collection, he showed me an angry letter, published in a forensics journal, from the Chair of the Fingerprint Society, which showed how incensed many examiners were at the very idea that they might be influenced by their expectations and their emotions. ‘Any fingerprint examiner who comes to a decision on identification and is swayed either way in that decision-making process under the influence of stories and gory images is either totally incapable of performing the noble tasks expected of him/her or is so immature he/she should seek employment at Disneyland’, the Chair wrote.

  Recently, however, Dror has found that more and more forces are taking his suggestions on board. ‘Things are changing . . . but it’s slow. You will still find that if you talk to certain examiners, they will say “Oh no, we’re objective.” ’

  Mayfield retains some doubts about whether these were genuine unconscious errors, or the result of a deliberate set-up, but he supports any work that helps to highlight the frailties of fingerprint analysis. ‘In court, each piece of evidence is like a brick in a wall,’ he told me. ‘The problem is that they treat the fingerprint analysis as if it is the whole wall – but it’s not even a strong brick, let alone a wall.’

  Mayfield continues to work as a lawyer. He is also an active campaigner, and has co-written his account of the ordeal, called Improbable Cause, with his daughter Sharia, in a bid to raise awareness of the erosion of US civil liberties in the face of more stringent government surveillance. During our conversation, he appeared to be remarkably stoic about his ordeal. ‘I’m talking to you – I’m not locked in Guantanamo, in some Kafkaesque situation . . . So in that sense, the justice system must have worked,’ he told me. ‘But there may be many more people who are not in such an enviable position.’

  With this knowledge in mind, we are now ready to start Part 2. Through the stories of the Termites, Arthur Conan Doyle, and the FBI’s forensic examiners, we have seen four potential forms of the intelligence trap:

  We may lack the necessary tacit knowledge and counter-factual thinking that are essential for executing a plan and pre-empting the consequences of our actions.

  We may suffer from dysrationalia, motivated reasoning and the bias blind spot, which allow us to rationalise and perpetuate our mistakes, without recognising the flaws in our own thinking. This results in us building ‘logic-tight compartments’ around our beliefs without considering all the available evidence.

  We may place too much confidence in our own judgement, thanks to earned dogmatism, so that we no longer perceive our limitations and over-reach our abilities.

  Finally, thanks to our expertise, we may employ entrenched, automatic behaviours that render us oblivious to the obvious warning signs that disaster is looming, and more susceptible to bias.

  If we return to the analogy of the brain as a car, this research confirms the idea that intelligence is the engine, and education and expertise are its fuel; by equipping us with the basic abstract reasoning skills and specialist knowledge, they put our thinking in motion, but simply adding more power won’t always help you to drive that vehicle safely. Without counter-factual thinking and tacit knowledge, you may find yourself up a dead end; if you suffer from motivated reasoning, earned dogmatism, and entrenchment, you risk simply driving in circles, or worse, off a cliff.

  Clearly, we’ve identified the problem, but we are still in need of some lessons to teach us how to navigate these potential pitfalls more carefully. Correcting these omissions is now the purpose of a whole new scientific discipline – evidence-based wisdom – which we shall explore in Part 2.

  Part 2

  Escaping the intelligence trap: A toolkit for reasoning and decision making

  4

  Moral algebra: Towards the science of evidence-based wisdom

  We are in the stuffy State House of Pennsylvania, in the summer of 1787. It is the middle of a stifling heatwave, but the windows and doors have been locked against the prying eyes of the public, and the sweating delegates – many dressed in thick woollen suits1 – are arguing fiercely. Their aim is to write the new US Constitution – and the stakes could not be higher. Just eleven years after the American colonies declared independence from England, the country’s government is underfunded and nearly impotent, with serious infighting between the states. It’s clear that a new power structure is desperately needed to pull the country together.
