CHAPTER 5
The Pigheaded Brain
Loyalty a step too far
On the matter of the correct receptacle for draining spaghetti, my husband demonstrates a bewildering pigheadedness. He insists that the colander is the appropriate choice, despite the manifest ease with which the strands escape through the draining holes. Clearly the sieve, with its closer-knit design, is a superior utensil for this task. Yet despite his stone blindness to the soggy tangle of spaghetti clogging the plug-hole in the sink that results from his method, my husband claims to be able to observe starchy molecules clinging to the weave of the sieve for weeks and weeks after I’ve drained pasta in it. We have had astonishingly lengthy discussions on this issue – I have provided here merely the briefest of overviews – but after four years of marriage, the problem remains unresolved. By which of course I mean that my husband hasn’t yet realised that I’m right.
The longevity of these sorts of disagreements is well known to us all. I can confidently predict that until somebody invents a colander–sieve hybrid, we will not be able to serve spaghetti to guests. The writer David Sedaris, describing an argument with his partner over whether someone’s artificial hand was made of rubber or plastic, also foresaw no end to their disagreement:
‘I hear you guys broke up over a plastic hand’, people would say, and my rage would renew itself. The argument would continue until one of us died, and even then it would manage to wage on. If I went first, my tombstone would read IT WAS RUBBER. He’d likely take the adjacent plot and buy a larger tombstone reading NO, IT WAS PLASTIC.1
What is it about our brains that makes them so loyal to their beliefs? We saw in ‘The Vain Brain’ how we keep unpalatable information about ourselves from deflating our egos. The same sorts of tricks that keep us big-headed also underlie our tendency to be pigheaded. The brain biases, evades, twists, discounts, misinterprets, even makes up evidence – all so that we can retain that satisfying sense of being in the right. It’s not only our long-cherished beliefs that enjoy such devoted loyalty from our brains. Even the most hastily formed opinion receives undeserved protection from revision. It takes only a few seconds to formulate the unthinking maxim that ‘a sieve should never get its bottom wet’, but a lifetime isn’t long enough to correct it. I think what I like most about everything you’ll find in this chapter is that if you find it unconvincing, that simply serves better to prove its point.
Our pigheadedness begins at the most basic level – the information to which we expose ourselves. Who, for example, reads the Daily Mail? It’s – well, you know – Daily Mail readers. People who like to begin sentences with, ‘Call me politically incorrect if you will, but …’. We don’t seek refreshing challenges to our political and social ideologies from the world; we much prefer people, books, newspapers and magazines that share our own enlightened values. Surrounding ourselves with yes men in this way limits the chances of our views being contradicted. Nixon supporters had to take this strategy to drastic levels during the US Senate Watergate hearings. As evidence mounted of political burglary, bribery, extortion and other hobbies unseemly for a US President, a survey showed that the Nixon supporters developed a convenient loss of interest in politics.2 In this way, they were able to preserve their touching faith in Nixon’s suitability as a leader of their country. (In contrast, Americans who had opposed Nixon’s presidency couldn’t lap up the hearings quick enough.)
Our blinkered surveying of the world is only the beginning, however. Inevitably we are directly confronted with challenges to our beliefs, be it the flat-earther’s view of the gentle downward curve of the sea at the horizon, a weapons inspector returning empty-handed from Iraq, or a plughole clogged with spaghetti. Yet even in the face of counter-evidence, our beliefs are protected as tenderly as our egos. Like any information that pokes a sharp stick at our self-esteem, evidence that opposes our beliefs is subjected to close, critical and almost inevitably dismissive scrutiny. In 1956, a physician called Alice Stewart published a preliminary report of a vast survey of children who had died of cancer.3 The results from her work were clear. Just one X-ray of an unborn baby doubled the risk of childhood cancer. A mere 24 years later, the major US medical associations officially recommended that zapping pregnant women with ionising radiation should no longer be a routine part of prenatal care. (Britain took a little extra time to reach this decision.)
Why did it take so long for the medical profession to accept that a dose of radiation might not be what the doctor should be ordering for pregnant women? A strong hint comes from several experiments showing that we find research convincing and sound if the results happen to confirm our point of view. However, we will find the exact same research method shoddy and flawed if the results fail to accord with our opinions. For example, people either for or against the death penalty were asked to evaluate two research studies.4 One showed that the death penalty was an effective deterrent against crime, the other showed that it was not. One research design compared crime rates in the same US states before and after the introduction of capital punishment. The other compared crime rates across neighbouring states with and without the death penalty. Which research strategy people found the most scientifically valid depended mostly on whether or not the study supported their views on the death penalty. Evidence that fits with our beliefs is quickly waved through the mental border control. Counter-evidence, on the other hand, must submit to close interrogation and even then will probably not be allowed in.5 As a result, people can wind up holding their beliefs even more strongly after seeing counter-evidence. It’s as if we think, ‘Well, if that’s the best that the other side can come up with then I really must be right.’ This phenomenon, called belief polarisation, may help to explain why attempting to disillusion people of their perverse misconceptions is so often futile.
It would be comforting to learn that scientists and doctors, in whose hands we daily place our health and lives, are unsusceptible to this kind of partisanship. I remember being briskly reprimanded by Mr Cohen, my A-level physics teacher, for describing the gradient of a line in a graph as ‘dramatic’. Mr Cohen sternly informed me that there was no element of the dramatic in science. A fact was a plain fact, not some thespian prancing around on a stage. Yet a graph that contradicts the beliefs, publications and career of a scientist is anything but a ‘plain fact’. Which is why scientific papers, identical in all respects but the results, are far more likely to be found to be flawed and unpublishable if the findings disagree with the reviewer’s own theoretical viewpoint.6
Was this part of the reason that Alice Stewart’s research on X-rays received such a stony reception? In her biography she recalls, ‘I became notorious. One radiobiologist commented, “Stewart used to do good work, but now she’s gone senile.”’7 Unfortunately for Stewart, a later study run by a different researcher failed to find a link between prenatal X-rays and childhood cancer. Even though the design of this study had substantial defects – as the researcher himself later admitted – the medical community gleefully acclaimed it as proof that they were right and Alice Stewart was wrong. The similarity of this story to the experimental demonstrations of biased evaluation of evidence is, well, dramatic.
Eventually, of course, we got to the point we are at today, where a pregnant woman is likely to start rummaging in her handbag for her mace should an obstetrician even breathe the word ‘X-ray’ in earshot. But it took a very long time to get there. By 1977, there was a huge amount of research showing a link between prenatal X-rays and childhood cancer. Yet the US National Council on Radiation Protection remained stubbornly convinced that X-rays were harmless. They suggested an alternative explanation. It wasn’t that radiation caused cancer. Ludicrous idea! No, the relationship between X-rays and cancer was due to the supernatural prophetic diagnostic powers of obstetricians. The obstetricians were X-raying babies they somehow knew would get cancer. This logically possible, yet nonetheless stubbornly porcine, hypothesis merits but one response: Oink, oink.
It’s not just other people’s arguments to which we turn the cold shoulder. Once we have made up our minds on a matter, arguments in favour of a contrary view – even points generated by our own brains – are abandoned by the wayside. Remember the volunteers in the study described in ‘The Vain Brain’, who were set to work thinking about a choice in their life?8 Some students, you may recall, were asked to reflect on a decision they had already made (to book a holiday, or end a relationship, for example). In retrospect, had they done the right thing? Other students deliberated over a dilemma they had yet to resolve. As they sat in quiet contemplation, both groups jotted down all their thoughts. Afterwards, the researchers counted up the different sorts of thoughts listed by the students, in order to build up a picture of what their minds were up to during this phase of the experiment. The people who were still uncertain as to whether to forge ahead with a particular course of action were impressively even-handed in their weighing-up of the pros and cons, the risks and benefits. But the other students, in response to the experimenter’s request to them to inwardly debate the wisdom of their choice, were careful to avoid overhearing any whispered regrets of their mind. Presumably they too had once pondered both sides of the matter before making their final decision. But they were mulishly reluctant to do so now. The researchers, totting up tallies of the different sorts of thoughts the thinkers produced, found that the ‘post-decision’ volunteers were far less likely to set their wits to work on the potentially awkward issue of whether or not they had done the right thing. And on the rare occasions their minds did roam towards this dangerous area, they far preferred to dwell on the positive, rather than negative, repercussions of what they had done. So what were their minds up to? Procrastinating, it seemed. Rather than risk being proved wrong, even by themselves, their minds instead distracted them with a remarkable number of thoughts (such as ‘I like the experimenter!’) that were safely irrelevant to the task in hand.
Twisting information and self-censoring arguments – strategies we unconsciously use to keep the balance of evidence weighing more heavily on our own side of the scales – keep us buoyantly self-assured. And what is more, the faith we hold in the infallibility of our beliefs is so powerful that we are even capable of creating evidence to prove ourselves right – the self-fulfilling prophecy. The placebo effect – in which a fake treatment somehow makes you better simply because you think you are receiving an effective remedy for your complaint – is probably the best-known example of this.9 And when a genuine treatment doesn’t enjoy the benefit of the brain’s high hopes for it, it becomes remarkably less effective. When you toss down a few painkillers, it is in no small way your confidence that the drug will relieve your headache that makes the pain go away. A group of patients recovering from lung surgery were told by their doctor that they would be given morphine intravenously for the pain.10 Within an hour of the potent painkiller entering their bloodstream their pain intensity ratings had halved. A second group of post-surgery patients were given exactly the same dose of morphine via their drip, but weren’t told about it. An hour later, these uninformed patients’ ratings of the intensity of the pain had reduced only half as much as in the other group. However, ignorance was bliss (relatively speaking) in a second experiment in which the intravenous morphine was withdrawn. Patients not told that their supply of pain relief had been interrupted remained comfortable for longer than patients who had been apprised of the change in drug regimen. Even ten hours later, twice as many uninformed patients were still willing to battle on with the pain without requesting more relief.
Even more extraordinary are the influences that other people’s beliefs can have on you. Psychologists first of all directed their interest in the self-fulfilling prophecy upon themselves. Could a psychologist be unwittingly encouraging her volunteers to act in line with her beliefs about what should happen in the experiment? Psychologists found that they did indeed have this strange power over their experimentees.11 Exactly the same experimental set-up reliably yields different results, depending on the beliefs of the researcher who is running the experiment and interacting with the participants. (In fact, even rats are susceptible to the expectations of experimenters.) Researchers can also unknowingly affect the health of participants in clinical drug trials. In a twist on the placebo effect, the researcher’s beliefs about the effectiveness of a drug influence how effective it actually is. For this very reason, good clinical trials of drugs are now run double-blind. Neither the patient nor the researcher knows what treatment the patient is getting.
Psychologists then got curious about whether the self-fulfilling prophecy might be silently at work outside the lab in the real world. In a notorious experiment, two psychologists, Robert Rosenthal and Lenore Jacobson, turned their attention to the school classroom.12 They gave a group of schoolchildren a fake test, which they claimed was a measure of intellectual potential. Then, supposedly on the basis of the test results, they told teachers that little Johnny, Eddy, Sally and Mary would be displaying an intellectual blossoming over the next few months. In fact, these children had been plucked randomly from the class list. Yet the teachers’ mere expectation that these children would shortly be unfurling their mental wings actually led to a real and measurable enhancement of their intelligence. Teachers ‘teach more and teach it more warmly’ to students of whom they have great expectations, concludes Rosenthal. It’s extraordinary to consider what a powerful impact a teacher’s particular prejudices and stereotypes must have on your child. And the prophecy is not only self-fulfilling, it’s self-perpetuating as well. When your son unwittingly fulfils his teacher’s belief that ‘boys don’t like reading’, that belief will become yet more comfortably established in the teacher’s mind.