The Intelligence Trap


by David Robson


  Many other great intellects may have lost their minds thanks to blinkered thinking. Their mistakes may not involve ghosts and fairies, but they still led to years of wasted effort and disappointment as they toiled to defend the indefensible.

  Consider Albert Einstein, whose name has become a synonym for genius. While still working as a young patent clerk in 1905, he outlined the foundations for quantum mechanics, special relativity, and the equation for mass–energy equivalence (E = mc²) – the concept for which he is most famous.47 A decade later he would announce his theory of general relativity – tearing through Isaac Newton’s laws of gravity.

  But his ambitions did not stop there. For the remainder of his life, he planned to build an even grander, all-encompassing understanding of the universe that melded the forces of electromagnetism and gravity into a single, unified theory. ‘I want to know how God created this world. I am not interested in this or that phenomenon, in the spectrum of this or that element, I want to know his thoughts’, he had written previously – and this was his attempt to capture those thoughts in their entirety.

  After a period of illness in 1928, he thought he had done it. ‘I have laid a wonderful egg . . . Whether the bird emerging from this will be viable and long-lived lies in the lap of the gods’, he wrote. But the gods soon killed that bird, and many more dashed hopes would follow over the next twenty-five years, with further announcements of a new Unified Theory, only for them all to fall like a dead weight. Soon before his death, Einstein had to admit that ‘most of my offspring end up very young in the graveyard of disappointed hopes’.

  Einstein’s failures were no surprise to those around him, however. As his biographer, the physicist Hans Ohanian, wrote in his book Einstein’s Mistakes: ‘Einstein’s entire program was an exercise in futility . . . It was obsolete from the start.’ Yet the more he invested in the theory, the more reluctant he was to let it go. Freeman Dyson, a colleague at Princeton, was apparently so embarrassed by Einstein’s foggy thinking that he spent eight years deliberately avoiding him on campus.

  The problem was that Einstein’s famous intuition – which had served him so well in 1905 – had led him seriously astray, and he had become deaf and blind to anything that might disprove his theories. He ignored evidence of nuclear forces that were incompatible with his grand idea, for instance, and came to despise the results of quantum theory – a field he had once helped to establish.48 At scientific meetings, he would spend all day devising increasingly intricate counter-examples to disprove his rivals, only for them to be refuted by the evening.49 He simply ‘turned his back on experiments’ and tried to ‘rid himself of the facts’, according to his colleague at Princeton, Robert Oppenheimer.50

  Einstein himself realised as much towards the end of his life. ‘I must seem like an ostrich who forever buries its head in the relativistic sand in order not to face the evil quanta’, he once wrote to his friend, the quantum physicist Louis de Broglie. But he continued on his fool’s errand, and even on his deathbed, he scribbled pages of equations to support his erroneous theories, as the last embers of his genius faded. All of which sounds a lot like the sunk cost fallacy exacerbated by motivated reasoning.

  The same stubborn approach can be found in many of his other ideas. Having supported communism, he continually turned a blind eye to the failings of the USSR, for instance.51

  Einstein, at least, had not left his domain of expertise. But this single-minded determination to prove oneself right may be particularly damaging when scientists stray outside their usual territory, a fact that was noted by the psychologist Hans Eysenck. ‘Scientists, especially when they leave the particular field in which they are specialized, are just as ordinary, pig-headed, and unreasonable as everybody else’, he wrote in the 1950s. ‘And their unusually high intelligence only makes their prejudices all the more dangerous.’52 The irony is that Eysenck himself came to believe theories of the paranormal, showing the blinkered analysis of evidence he claimed to deplore.

  Some science writers have even coined a term – Nobel Disease – to describe the unfortunate habit of Nobel Prize winners to embrace dubious positions on various issues. The most notable case is, of course, Kary Mullis, the famous biochemist with the strange conspiracy theories who we met in the introduction. His autobiography, Dancing Naked in the Mind Field, is almost a textbook in the contorted explanations the intelligent mind can conjure to justify its preconceptions.53

  Other examples include Linus Pauling, who discovered the nature of chemical bonds between atoms, yet spent decades falsely claiming that vitamin supplements could cure cancer;54 and Luc Montagnier, who helped discover HIV, but who has since espoused the bizarre theory that even highly diluted DNA can cause structural changes to water, leading it to emit electromagnetic radiation. Montagnier believes that this phenomenon can be linked to autism, Alzheimer’s disease and various other serious conditions, but many other scientists reject these claims – prompting a petition, signed by 35 other Nobel laureates, calling for his removal from his position at an AIDS research centre.55

  Although we may not be working on a Grand Unified Theory, there is a lesson here for all of us. Whatever our profession, the toxic combination of motivated reasoning and the bias blind spot can still lead us to justify prejudiced opinions about those around us, pursue failing projects at work, or rationalise a hopeless love affair.

  As final examples, let’s look at two of history’s greatest innovators: Thomas Edison and Steve Jobs.

  With more than a thousand patents to his name, Thomas Edison was clearly in possession of an extraordinarily fertile mind. But once he had conceived an idea, he struggled to change his mind – as shown in the ‘battle of the currents’.

  In the late 1880s, having produced the first commercially practical electric lightbulb, Edison sought a way to power America’s homes. His idea was to set up a power grid using a steady ‘direct current’ (DC), but his rival George Westinghouse had found a cheaper means of transmitting electricity with the alternating current (AC) we use today. Whereas DC holds steady at a single voltage, AC oscillates rapidly between two voltages – and, crucially, it can be stepped up by transformers to the high voltages that lose far less energy over long distances.
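
  The economics behind Westinghouse’s advantage come down to simple physics; here is a back-of-the-envelope sketch (the figures are illustrative, not drawn from the original dispute). For a transmission line of resistance $R$ delivering power $P$ at voltage $V$, the current is $I = P/V$, so the power wasted as heat in the line is

\[
P_{\text{loss}} = I^{2}R = \frac{P^{2}R}{V^{2}}.
\]

  Raise the transmission voltage a hundredfold and the line losses fall by a factor of ten thousand. The transformers of the 1880s could step voltage up and down only with alternating current, which is why AC could serve a mass market that DC, confined to low voltages, could not.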

  Edison claimed that AC was simply too dangerous, since it more easily leads to death by electrocution. Although this concern was legitimate, the risk could be reduced with proper insulation and regulations, and the economic arguments were just too strong to ignore: it really was the only feasible way to provide electricity to the mass market.

  The rational response would have been to try to capitalise on the new technology and improve its safety, rather than continuing to pursue DC. One of Edison’s own engineers, Nikola Tesla, had already told him as much. But rather than taking his advice, Edison dismissed Tesla’s ideas and even refused to pay him for his research into AC, leading Tesla to take his ideas to Westinghouse instead.56

  Refusing to admit defeat, Edison engaged in an increasingly bitter PR war to turn public opinion against AC. It began with macabre public demonstrations, electrocuting stray dogs and horses. And when Edison heard that a New York court was investigating the possibility of using electricity for executions, he saw yet another opportunity to prove his point, advising the court on the development of the electric chair – in the hope that AC would be forever associated with death. It was a shocking moral sacrifice for someone who had once declared that he would ‘join heartily in an effort to totally abolish capital punishment’.57

  You may consider these to be simply the actions of a ruthless businessman, but the battle really was futile. As one journal stated in 1889: ‘It is impossible now that any man, or body of men, should resist the course of alternating current development . . . Joshua may command the sun to stand still, but Mr Edison is not Joshua.’58 By the 1890s, he had to admit defeat, eventually turning his attention to other projects.

  The historian of science Mark Essig writes that ‘the question is not so much why Edison’s campaign failed as why he thought it might succeed’.59 But an understanding of cognitive errors such as the sunk cost effect, the bias blind spot and motivated reasoning helps to explain why such a brilliant mind may persuade itself to continue down such a disastrous path.

  The co-founder of Apple, Steve Jobs, was similarly a man of enormous intelligence and creativity, yet he too sometimes suffered from a dangerously skewed perception of the world. According to Walter Isaacson’s official biography, his acquaintances described a ‘reality distortion field’ – ‘a confounding mélange of charismatic rhetorical style, indomitable will, and eagerness to bend any fact to fit the purpose at hand’, in the words of his former colleague Andy Hertzfeld.

  That single-minded determination helped Jobs to revolutionise technology, but it also backfired in his personal life, particularly after he was diagnosed with pancreatic cancer in 2003. Ignoring his doctor’s advice, he instead opted for quack cures such as herbal remedies, spiritual healing and a strict fruit juice diet. According to all those around him, Jobs had convinced himself that his cancer was something he could cure himself, and his amazing intelligence seems to have allowed him to dismiss any opinions to the contrary.60

  By the time he finally underwent surgery, the cancer had progressed too far to be treatable, and some doctors believe Jobs might still be alive today had he simply followed medical advice. In each case, we see that greater intellect is used for rationalisation and justification, rather than logic and reason.

  We have now seen three broad reasons why an intelligent person may act stupidly. They may lack elements of creative or practical intelligence that are essential for dealing with life’s challenges; they may suffer from ‘dysrationalia’, using biased intuitive judgements to make decisions; and they may use their intelligence to dismiss any evidence that contradicts their views thanks to motivated reasoning.

  Harvard University’s David Perkins described this latter form of the intelligence trap to me best when he said it was like ‘putting a moat around a castle’. The writer Michael Shermer, meanwhile, describes it as creating ‘logic-tight compartments’ in our thinking. But I personally prefer to think of it as a runaway car, without the right steering or navigation to correct its course. As Descartes had originally put it: ‘those who go forward but very slowly can get further, if they always follow the right road, than those who are in too much of a hurry and stray off it’.

  Whatever metaphor you choose, the question of why we evolved this way is a serious puzzle for evolutionary psychologists. When they build their theories of human nature, they expect common behaviours to have had a clear benefit to our survival. But how could it ever be an advantage to be intelligent but irrational?

  One compelling answer comes from the recent work of Hugo Mercier at the French National Centre for Scientific Research, and Dan Sperber at the Central European University in Budapest. ‘I think it’s now so obvious that we have the myside bias, that psychologists have forgotten how weird it is,’ Mercier told me in an interview. ‘But if you look at it from an evolutionary point of view, it’s really maladaptive.’

  It is now widely accepted that human intelligence evolved, at least in part, to deal with the cognitive demands of managing more complex societies. Evidence comes from the archaeological record, which shows that our skull size did indeed grow as our ancestors started to live in bigger groups.61 We need brainpower to keep track of others’ feelings, to know who we can trust, who will take advantage and who we need to keep sweet. And once language evolved, we needed to be eloquent, to be able to build support within the group and bring others to our way of thinking. Those arguments didn’t need to be logical to bring us those benefits; they just had to be persuasive. And that subtle difference may explain why irrationality and intelligence often go hand in hand.62

  Consider motivated reasoning and the myside bias. If human thought were primarily concerned with truth-seeking, we would weigh up both sides of an argument carefully. But if we just want to persuade others that we’re right, then we will seem more convincing if we can pull together as much evidence for our view as possible. Conversely, to avoid being duped ourselves, we need to be especially sceptical of others’ arguments, and so we should pay extra attention to interrogating and challenging any evidence that disagrees with our own beliefs – just as Kahan had shown.

  Biased reasoning isn’t just an unfortunate side effect of our increased brainpower, in other words – it may have been its raison d’être.

  In the face-to-face encounters of our ancestors’ small gatherings, good arguments should have counteracted the bad, enhancing the overall problem solving to achieve a common goal; our biases could be tempered by others. But Mercier and Sperber say these mechanisms can backfire if we live in a technological and social bubble, and miss the regular argument and counterargument that could correct our biases. As a result, we simply accumulate more information to accommodate our views.

  Before we learn how to protect ourselves from those errors, we must first explore one more form of the intelligence trap – ‘the curse of expertise’, which describes the ways that acquired knowledge and professional experience (as opposed to our largely innate general intelligence) can also backfire. As we shall see in one of the FBI’s most notorious mix-ups, you really can know too much.

  3

  The curse of knowledge: The beauty and fragility of the expert mind

  One Friday evening in April 2004, the lawyer Brandon Mayfield made a panicked call to his mother. ‘If we were to somehow suddenly disappear . . . if agents of the government secretly sweep in and arrest us, I would like your assurance that you could come to Portland on the first flight and take the kids back to Kansas with you,’ he told her.1

  An attorney and former officer in the US Army, Mayfield was not normally prone to paranoia, but America was still reeling from the fallout of 9/11. As a Muslim convert, married to an Egyptian wife, Mayfield sensed an atmosphere of ‘hysteria and islamophobia’, and a series of strange events now led him to suspect that he was the target of investigation.

  One day his wife Mona had returned home from work to find the front door double-locked with a top bolt, when the family never normally used the extra precaution. Another day, Mayfield walked into his office to find a dusty footprint on his desk, directly below a loose ceiling tile, even though no one should have entered the room overnight. On the road, meanwhile, a mysterious car, driven by a stocky fifty- or sixty-year-old, seemed to follow him to and from the mosque.

  Given the political climate, he feared he was under surveillance. ‘There was this realisation that it could be a secret government agency,’ he told me in an interview. By the time Mayfield made that impassioned phone call to his mother, he said, he had begun to feel an ‘impending doom’ about his fate, and he was scared about what that would mean for his three children.

  At around 9:45 a.m. on 6 May, those fears were realised with three loud thumps on his office door. Two FBI agents had arrived to arrest Mayfield in connection with the horrendous Madrid bombings, which had killed 192 people and injured around two thousand on 11 March that year. His hands were cuffed behind his back, and he was bundled into a car and taken to the local courthouse.

  He pleaded that he knew nothing of the attacks; when he first heard the news he had been shocked by the ‘senseless violence’, he said. But FBI agents claimed to have found his fingerprint on a blue shopping bag containing detonators, left in a van in Madrid. The FBI declared it was a ‘100% positive match’; there was no chance they were wrong.

  As he describes in his book, Improbable Cause, Mayfield was held in a cell while the FBI put together a case to present to the Grand Jury. He was handcuffed and shackled in leg irons and belly chains, and subjected to frequent strip searches.

  His lawyers painted a bleak picture: if the Grand Jury decided he was involved in the attacks, he could be shipped to Guantanamo Bay. As the judge stated in his first hearing, fingerprints are considered the gold standard of forensic evidence: people had previously been convicted of murder based on little more than a single print. The chances of two people sharing the same fingerprint were considered to be billions to one.2

  Mayfield tried to conceive how his fingerprint could have appeared on a plastic carrier bag more than 5,400 miles away – across the entire American continent and Atlantic Ocean. But there was no way. His lawyers warned that the very act of denying such a strong line of evidence could mean that he was indicted for perjury. ‘I thought I was being framed by unnamed officials – that was the immediate thought,’ Mayfield told me.

  His lawyers eventually persuaded the court to employ an independent examiner, Kenneth Moses, to re-analyse the prints. Like those of the FBI’s own experts, Moses’ credentials were impeccable. He had served with the San Francisco Police Department for twenty-seven years, and had garnered many awards and honours during his service.3 It was Mayfield’s last chance, and on 19 May – after nearly two weeks in prison – he returned to the tenth floor of the courthouse, to hear Moses give his testimony by video conference.

  As Moses’ testimony unfolded, Mayfield’s worst fears were confirmed. ‘I compared the latent prints to the known prints that were submitted on Brandon Mayfield,’ Moses told the court. ‘And I concluded that the latent print is the left index finger of Mr Mayfield.’4

  Little did he know that a remarkable turn of events taking place on the other side of the Atlantic Ocean would soon save him. That very morning, the Spanish National Police had identified an Algerian man, Ouhnane Daoud, connected with the bombings. Not only could they show that his finger better fitted the print previously matched to Mayfield – including some ambiguous areas dismissed by the FBI – but his thumb also matched an additional print found on the bag. He was definitely their man.

 
