A Mind of Its Own
Evidence that our brains are deluded begins with a seemingly innocuous question: Are you happy with your social life? Or, to put it another way, are you unhappy with your social life?
Your answer, you may be surprised to learn, is astonishingly sensitive to which way the question is phrased. People asked if they are happy, rather than unhappy, with their social lives report greater satisfaction.3 Responsibility for this peculiar irrationality in our self-knowledge lies with what is known as the positive test strategy. As we contemplate that fascinating inner tangle of our attitudes, personality traits and skills, we ask our internal oracle questions to divine what we suppose to be the truth about ourselves. Am I happy with my social life? Do I want to stay married? Would I make a good parent? You then trawl through your store of self-knowledge searching for evidence that the hypothesis in question is correct. You remember that party you enjoyed last weekend. The touching interest your spouse takes in the small potatoes of your life. Your remarkable talent for manipulating balloons into the shape of animals.
Phrase the question the other way round, however, and your memory throws up a very different pile of evidence. Am I unhappy with my social life? Now you remember what bores you find most of your friends. Do I want a divorce? You think of that dreadful silent meal on your anniversary. Would I make a bad parent? Suddenly your unfortunate tendency to leave precious possessions behind on public transport springs to mind. And that’s why people asked if they’re happy – rather than unhappy – with their social lives believe themselves to be more blessed on that front. (The positive test strategy is also the reason you should never ask someone you want to stay with, ‘Don’t you love me anymore?’)
We use the positive test strategy to test hypotheses about others as well as ourselves, with similarly distorting effects. Crucial decisions may fall one way or another as a consequence of something as trivial as which way round the question is phrased. People's judgments in child custody cases, for example, can come out very differently depending on whether they are asked 'Which parent should have custody of the child?', or 'Which parent should be denied custody of the child?'4 In this classic experiment, Parent A was moderately well equipped for custody in all respects: income, health, working hours, rapport with the child and social life. Parent B, in contrast, had a rather more uneven parental profile. On the one hand, Parent B had an above-average income and a very close relationship with the child. But on the other hand, this parent had an extremely active social life, a great deal of work-related travel and minor health problems. When people were asked who should have custody of the child, they followed the positive test strategy of searching for evidence that each parent would be a good custodian. As a result, Parent B's impressive credentials with regard to income and relationship with the child shone out over Parent A's more modest abilities on these fronts, and nearly two-thirds of participants plumped for Parent B as the best custodian.
Ask who should be denied custody, however, and a very different picture emerges. The positive test strategy yielded evidence of Parent B’s inadequacies as a guardian: the busy social and work life, and the health problems. By comparison, a positive test strategy search of Parent A’s more pedestrian profile offered no strong reasons for rejection as a guardian. The result: the majority of participants decided to deny Parent B custody.
You may be relieved to be assured that the positive test strategy has an effect only if there is genuine uncertainty in your mind about the issue you’re considering. It’s not going to make much difference whether you ask a feminist if they approve of unequal pay for men and women, or whether they disapprove. Nonetheless, the implication of the positive test strategy research is rather worrisome, suggesting as it does that many difficult choices in our lives, based on our inferences about ourselves and others, might perhaps have swung the other way if we had only considered them from the opposite angle.
A second damaged tool in all of our personal scientific tool-boxes is the brain software we use to spot correlations. Correlation is what put the warning messages onto packets of cigarettes. There are plenty of 80-year-olds puffing away on a couple of packs a day but, on the whole, the more you smoke the more likely it is that the Grim Reaper will scythe in your direction sooner rather than later. If everyone who smoked died instantly from lung cancer then the tobacco companies might not have been able to kid on for so long that smoking was a harmless hobby. But because nature is messy and complicated, correlations are very difficult to spot by eye. It took statistical analysis to pinpoint the relationship between smoking and cancer.
You may not want to blame your brain for not coming equipped with the full functionality of a statistical analysis program. However, you may want to get a little shirty about your brain’s little habit of making up statistical results. Your brain has a sneaky tendency to ‘see’ the correlations that it expects to see, but which aren’t actually there. This is called illusory correlation and the classic demonstration of it in action was provided way back in 1969, using the Rorschach inkblot test.5 At that time, Rorschach’s inkblots were very much in vogue as a diagnostic tool for psychoanalysts. The idea behind this hoary and infamous test is that what you see in the carefully designed splodges of ink reveals some well-hidden horror of your psyche to the psychoanalyst. While you are innocently spotting butterflies and faces, thinking it a pleasant ice-breaker before the real work begins, the psychoanalyst is listening to the sweet ker-CHING! of the therapy till.
Back in the sixties when this experiment took place, homosexuality was still regarded as a mental illness, and therapists had all sorts of ideas about what homosexuals tended to see in the inkblots. The experimenters surveyed 32 experienced clinicians, asking them what they had noticed in their homosexual clients when they used the inkblots. Almost half of the clinicians said that these patients tended to see ‘anal content’, to use the unhappily evocative phrase employed in the field. However, scientific research even at that time showed that there was no such relationship: homosexual men are no more likely to see butts in blots than are heterosexuals. To try to understand why the clinicians were making this mistake, the researchers gave first-year psychology students some fake clinical experience. The students read through 30 fictitious case-notes, like the example overleaf. Each case-note showed first an inkblot, then what the patient claimed to have seen in the inkblot (in this example, ‘horse’s rear end’), and the patient’s two chief emotional symptoms. (Remember, we’re back in the era when homosexuality was regarded as a mental illness.)
The case-notes were cleverly designed to ensure that, over all the case-notes, there was no correlation whatsoever between having homosexual feelings and seeing something to do with bottoms in the blots. Yet when the researchers asked the students whether they’d noticed any relationship between homosexual tendencies and seeing certain sorts of things in the blots, over half of the students reported seeing a correlation with rear-ends. The students saw the very same illusory correlation as did the experienced clinicians. They saw what wasn’t there. In fact, this mistaken belief persisted even when, on another occasion, the case-notes were arranged such that homosexuals were less likely to report anal content than were heterosexual clients.
Note: For reasons of copyright, the above inkblot is not a genuine Rorschach inkblot and has been created for this book for illustration purposes only.
This experiment should have had the clinicians blushing into their beards. (It was the dawn of the seventies and they were psychoanalysts: of course they had beards.) Despite their many years of professional experience, the clinicians turned out to be working with the same facile and erroneous hypothesis that first-year psychology students developed during a 30-minute experiment. The reason was illusory correlation. On the surface it seemed like a plausible hypothesis. Gay men talking about bottoms: who needs Dr Freud to work that one out? With a deceptively convincing hypothesis embedded in your skull, it is but one short step for your brain to start seeing evidence for that hypothesis. Your deluded brain sees what it expects to see, not what is actually there. The moral? Treat with the greatest suspicion the proof of your own eyes.
Our memories also warrant a guarded scepticism since they, too, can all too easily succumb to our mistaken expectations. We might, for example, eagerly look forward to impressive improvements in our concentration, note-taking, reading, study and work-scheduling skills after investing our time in one of those 'study skills' courses so frequently offered by universities. Students about to be treated to a three-week study skills programme were asked to rate their studying abilities before the course started.6 Similarly able students who were put on a waiting list for this popular self-improvement programme were asked to do exactly the same. Then, after the first group had completed the course, both groups were asked to say whether they felt that their scholarly talents had improved over the period of the programme. (The only skill the waiting-list students had got to practise during this time was, of course, waiting.) Everyone was also asked to remember, as accurately as they could, how they had rated those very same skills three weeks before. Their heads buzzing with handy tips on skimming, power-listening and mind-maps, the students fresh from the programme were confident that they were now a superior breed of scholar. Yet curiously, they did no better in the exams and term grades that followed than did the students uninitiated in the secrets of successful swotting. So how, then, were they able to convince themselves of a real increase in skills? Despite the course being ineffective, the students managed to persuade themselves of its benefits by exaggerating how poor their study skills had been before the programme. Asked to recall how they had evaluated their learning abilities before the programme, they remembered giving themselves worse ratings than they actually had. In other words, by memory's sleight of hand they gave themselves a little extra room for improvement.
Nor did the collusion of memory with blithe optimistic hope end there. Six months later, a researcher rang to ask them about their academic performance following the course. So willing were the students' memories to fall in with their great expectations for the study skills course that, unlike the waiting-list students, they remembered doing better than they actually had. Working on the assumption that the techniques they had zealously mastered on the course must have helped their grades, the students manufactured evidence to prove it. The researchers speculate that this sort of helpful rewriting of personal history to fit in with people's expectations of self-improvement might help to explain the enduring popularity of self-help programmes of dubious objective value.
A further problem with our beliefs is the irrational loyalty that we show towards them. Once acquired, even the most erroneous beliefs enjoy an undeserved degree of protection from rejection and revision (as revealed in the next chapter).
So, what with our proclivity towards seeking evidence that supports whichever hypothesis we happen to be entertaining, our penchant for simply inventing supporting evidence, and our pigheaded retention of beliefs, it’s easy to see how our unsound scientific strategies can have unhappy consequences. It all bodes very ill for the accuracy of the beliefs to which we are led.7 Yet these distortions pale into insignificance when stood beside clinical delusions. Thinking yourself a little less happy with your social life than you actually are is not in the same ballpark as believing yourself dead (the Cotard delusion described in ‘The Emotional Brain’). Falling prey to an illusory correlation between your moods and your menstrual cycle8 simply does not compare with the delusional belief that your thoughts are being controlled by the devil. And misjudging your spouse’s fitness to continue in the role as your life companion does not hold a candle to the belief, known as the Capgras delusion, that your spouse (or other family member) has been replaced by an alien, robot or clone.
The false beliefs of the delusional patient are simply of a different order of magnitude to our own modest misconceptions. Yet it has proved remarkably difficult to establish what the difference is between, say, the Capgras patient who is convinced that her husband has been replaced by a robot, and the person who goes no further than occasionally fantasising about the joys of a Stepford spouse. Until quite recently, the psychoanalytic crew were having a field day with the Capgras delusion. According to their way of looking at the delusion, it is the subconsciously held feelings of ambivalence towards a family member that are helpfully resolved by the belief that the person has been replaced by an impostor. Voila! A bona fide reason to no longer love your mother. However, recent progress in cognitive neuropsychiatry has put a few spanners in the psychodynamic works.9 For one thing, Capgras patients often show signs of brain injury, which suggests that it isn’t simply their subconscious playing up. Moreover, some Capgras patients also claim that personal belongings have been replaced – and it’s hard to describe convincingly the subconscious hatred a patient has towards his watch or, as in one curious case, a tube of Polyfilla.10
Then an exciting discovery was made: Capgras patients aren’t emotionally aroused by familiar people.11 Normally, when you see someone you know, your skin conductance response increases, showing that that person is of some emotional significance to you. But Capgras patients don’t produce this emotional buzz. Could this be the key to their delusion? Some psychologists have suggested that it is. The Capgras patient recognises the person in front of them (‘Well, it certainly looks like my husband …’) but, because of brain injury, gets no emotional tingle from the experience (‘… but it doesn’t feel like my husband’). In order to explain this strange emotional lack, the patient concludes that the person in front of them must be an impostor of some sort.12 In other words, at least part of the reason that you have never woken up one morning, looked at your husband, and then twitched open the nets in search of the spaceship he came in on is that your brain is intact. You may not be thrown into a fit of passion by his crazy bedhead hairstyle, but you will at least produce the minimally required level of sweat when you see his face.
But can this really be the whole story? The Capgras belief is so irrational, so impossible, so – let’s just say it – nutty, that it’s hard to understand why the patients themselves don’t immediately reject as ludicrous nonsense the idea that their husband or wife has been replaced by an alien. Especially since the patients themselves can be intelligently coherent, and well aware of how far their assertion strains credulity.13 Nonetheless they politely maintain that, in their case, it just so happens to be true. What is it, then, that pushes delusional patients over the brink?
One idea is that part of the problem for delusional patients is that they are even worse everyday scientists than we are. One hypothesis along these lines is that delusional patients jump to conclusions.14 Instead of sampling a decent amount of data before forming a belief, the delusional patient leaps foolhardily to their half-baked conclusion on the flimsiest of evidence. Intuitively, this makes sense. After all, how much evidence can the Capgras patient actually have for his claim that his wife has been replaced by a robot? The classic test used to put to the proof the jumping-to-conclusions hypothesis is known as the Beads Task.15 Follow the instructions on the next page and take a turn yourself – if you dare.
Here are two jars of beads – A and B. Jar A has 85 white beads and 15 black beads. Jar B has 85 black beads and 15 white beads. Beads will be drawn from the same jar each time. Your task is to decide which jar the beads are being drawn from. You can see as many beads as you like to be completely sure which jar has been chosen.
On the following page is a list of beads drawn from the mystery jar. Place your hand over it. Then, when you’re ready, slide your hand down until you can see the first bead. Keep on slowly moving down until you have seen enough beads to be confident which jar they came from. Then count the number of beads you saw and turn to the next page.
black bead
black bead
black bead
white bead
black bead
black bead
black bead
black bead
black bead
white bead
white bead
In these studies people generally ask for between three and four beads before they feel confident enough to say that the beads are being drawn from Jar B (you did choose Jar B, I hope?). It's probably close to the number of beads that you yourself chose. However, in the eyes of a statistician, you would have gone on looking at bead after bead for a pathetically timid length of time. The probability that the beads are being drawn from Jar B after the first black bead is a whopping 85 per cent. After the second black bead, this increases to 97 per cent. At this point, the statistician claims to have seen enough, and impatiently waves the jars away. You and I, however, carry on to the next bead, and the next, just to get that tiny extra likelihood of being correct. In contrast, people suffering from delusions request only about two beads before making their decision. In other words, they are better 'scientists' than we are.16 Back to the drawing board.
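Those percentages fall straight out of Bayes' rule, given the jar proportions stated in the task. Here is a minimal sketch in Python that reproduces them (the function name and the starting 50/50 prior are my assumptions; the 85/15 bead proportions come from the task description above):

```python
# Bayes' rule for the beads task: two equally likely jars,
# Jar A = 85% white / 15% black, Jar B = 85% black / 15% white.
def posterior_jar_b(draws):
    """Probability that the beads come from Jar B, given a sequence of draws."""
    p_b = 0.5  # equal prior on each jar (an assumption, not stated in the task)
    for bead in draws:
        like_b = 0.85 if bead == "black" else 0.15  # P(bead | Jar B)
        like_a = 1.0 - like_b                       # P(bead | Jar A)
        numer = like_b * p_b
        p_b = numer / (numer + like_a * (1.0 - p_b))  # Bayesian update
    return p_b

print(round(posterior_jar_b(["black"]), 2))           # 0.85 after one black bead
print(round(posterior_jar_b(["black", "black"]), 2))  # 0.97 after two
```

Each black bead multiplies the odds in favour of Jar B by 0.85/0.15 (a factor of about 5.7), which is why near-certainty piles up after just a couple of draws.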
But wait! In a study of all that can go wrong with reasoning, Professors Wason and Johnson-Laird describe the ‘repetition, asseveration, self-contradiction, outright denial of the fact, and ritualistic behaviour’ that they observed in a group of people whose reasoning was so poor that they fetched up as material for a book chapter bluntly entitled ‘Pathology of Reasoning’.17 This sounds promising. Here’s one of the tasks. Participants were told that the sequence of three numbers (called a triad) ‘2 4 6’ fulfilled a simple relational rule chosen by the experimenter. The participants’ task was to try to work out what the rule was by offering their own patterns of three numbers. After each triad they were told whether or not it conformed to the rule. People were told to announce their hypothesis about what the rule was only when they were confident that they were correct. The rule was that the numbers had to get bigger as they went along (or, as the professors preferred to put it, ‘numbers increase in order of magnitude’). It could hardly have been simpler. (As Professor Johnson-Laird may well have remarked to his colleague, ‘Elementary, my dear Wason.’) Yet take a look at the tortured performance of the person who proffered three increasingly convoluted hypotheses before giving up in defeat nearly an hour later (a few examples of triads offered are given before the hypotheses):