Madness Explained
Figure 12.4 Internality scores on the ASQ for paranoid, depressed and normal participants (from Kaney and Bentall, 1989).
their lives and that they were powerless to change it.) However, we were more intrigued by their scores on the internality dimension of the questionnaire. Recall that this part of the scale measures whether the perceived causes of events are internal (to do with the self) or external (to do with other people or circumstances). Our findings are shown in Figure 12.4. Replicating the results of previous investigators, we found that depressed patients had lower internality scores for hypothetical positive events than for negative events – if something went wrong they tended to blame themselves whereas, if something good happened, they tended to assume that the cause was something to do with other people or circumstances. Again, as observed by other researchers, the ordinary people who served as controls tended to be more willing to attribute positive than negative events to themselves. This self-serving bias was markedly exaggerated in our paranoid patients, who appeared to be experts at taking credit for positive events and at avoiding blame for negative events.
This observation has been repeated by other investigators in Britain,87 Canada,88 Australia89 and even South Korea.90 However, an abnormal attributional style is not found in all deluded patients. In a study carried out in North Wales, Helen Sharp, Chris Fear and David Healy tested patients who expressed a variety of unusual beliefs. They observed an abnormal self-serving bias in patients with persecutory or grandiose delusions, but not in those who reported other kinds of delusional ideas.91 This makes intuitive sense, because only paranoid and grandiose delusions appear to involve preoccupations about success and failure.
The studies we have considered so far have all used the ASQ. However, there are other ways of assessing attributions. In a later study, Sue Kaney and I asked paranoid, depressed and ordinary people to play a computer game in which they were required to discover a rule. On forty occasions, two pictures were presented on the computer screen and they could choose one or the other by pressing button 1 or button 2. The participants started with 20 points and, when they made a ‘correct’ choice, another point was awarded. When they made an ‘incorrect’ choice, a point was deducted and the computer made an angry chimpanzee noise. Of course, we had programmed the computer so that the participants’ choices had no effect on whether they won or lost points. No matter what they did, one game (known to us as the ‘win game’) awarded more points than it deducted, leaving the player with 33 points at the end, whereas the other (known to us as the ‘lose game’) deducted more points than it awarded, leaving the player with 6. After each game the participants were asked to indicate on a 0–100 scale the extent to which they believed they had controlled the outcomes (see Figure 12.5). The depressed patients were sadder but wiser and claimed little control over either of the games (their judgements were therefore accurate). The normal control participants were thoroughly irrational, claiming control when they were winning but not when they were losing. As we had expected, the paranoid patients showed this self-serving bias to a significantly greater degree. (One of the paranoid patients, who played the win game first, was annoyed to discover that he was losing the second game and complained that we had ‘rigged’ it. As he was perfectly correct about this, he could be said to be suffering from paranoid realism.)92
Figure 12.5 Paranoid, depressed and normal participants’ estimates of personal control over the outcome of computer games that were ‘rigged’ so that they would either win or lose (from Kaney and Bentall, 1992).

So far in this discussion, we have assumed that attributions can be catalogued on a one-dimensional scale running from internal (self-blaming) to external (blaming other people or circumstances). However, many people find the distinction between ‘internal’ and ‘external’ difficult to comprehend,93 so it is not surprising that patients’ judgements about internality are sometimes inconsistent.94 Part of the problem seems to be that the ASQ and similar measures fail to distinguish between two quite different types of external attribution: explanations that implicate circumstances (which can be called external-situational attributions) and those which implicate other people (external-personal attributions). The importance of this distinction becomes obvious when we think about common events that provoke attributions. For example, most people explain being late for a meeting in situational terms (‘The traffic was dreadful’).* From what we learnt in Chapter 10, we might expect a depressed person to point to an internal cause (‘I’m lousy at keeping track of time’). A paranoid person, however, might blame the police for maliciously setting all the local traffic lights to red. The implication of this simple thought-experiment is that external-situational attributions may be psychologically benign because they allow us to avoid blaming ourselves while, at the same time, blaming no one else. This is presumably why they are the essence of a good excuse. External-personal attributions, on the other hand, appear to be psychologically toxic: they allow us to avoid blaming ourselves only at the expense of blaming someone else.
Figure 12.6 Locus of attributions made by paranoid, depressed and non-patient participants for positive (+) and negative (-) events (Kinderman and Bentall, 1996).

Peter Kinderman and I conducted a series of studies to explore this distinction.95 When we compared paranoid, depressed and ordinary people, we obtained the pattern of results shown in Figure 12.6. Although this figure may seem fairly complex, there are only three important features to focus on. First, as shown in the left panel of the figure, the depressed patients we tested uniquely tended to blame themselves for negative events. Second, as shown in the centre panel, and as we had expected, the paranoid patients uniquely tended to blame other people for negative events. The right panel of the figure shows external-situational attributions. Although these might seem unimportant, a glance will reveal that the paranoid patients made fewer explanations of this sort, either for positive or for negative events, than the other groups. It was as if they just did not know how to make a good excuse. Their excessive use of external-personal (other-blaming) explanations can be partly understood in the light of this deficit. Unable to attribute negative events to situations, and with an excessive tendency to avoid attributing such events to themselves, the only choice left is to blame someone else.
The Epistemological Impulsiveness of the Deluded Patient
We can now move to the final box in the back-of-an-envelope model of belief acquisition that I introduced earlier in the chapter. This part of the model concerns the search for further information that might illuminate our beliefs, causing us to hold on to them more tightly, or to modify them on the basis of new evidence. Researchers have suggested two ways in which this process may be handicapped in deluded patients. First, they may be unable to adjust their beliefs appropriately in the light of additional information. Second, they may avoid seeking additional information altogether.
Evaluating hypotheses
When we try to figure out what is going on around us, the evidence rarely points exclusively in one direction. For example, imagine that you are struggling to make sense of a tense relationship with your employer. Perhaps she has taken a dislike to you. On the other hand, perhaps her occasional scowls reflect the constant pressure of her job. When you saw her earlier today she seemed irritable, but when you bumped into her last week she was cheerful and encouraging. Under these kinds of circumstances it can be very difficult to balance different strands of information to decide which explanation is most likely to be correct.
The ability of deluded patients to weigh up inconsistent evidence was first studied by clinical psychologists Philippa Garety and David Hemsley, at the University of London’s Institute of Psychiatry. They argued that an inability to think clearly about the probable truth of incompatible hypotheses might lead people to hold on to bizarre beliefs and reject beliefs that are more reasonable.*96 In order to test this theory, they conducted a series of experiments in which patients were asked to estimate the probability of different hypotheses in the light of evidence that changed as the experiments progressed.
The task they chose for this purpose may seem very abstract and removed from the demands of everyday life. The people taking part in their experiments were shown two jars, each containing beads of the same two colours. In one jar, one colour far outnumbered the other (by a ratio of 85:15) whereas, in the other, the proportions were reversed. The jars were then hidden away, and the participants were shown a predetermined sequence of beads (some of one colour and some of the other). Their task was to guess which jar the beads had been drawn from. (In some experiments, they made a guess after seeing each bead, in others they were required to guess whenever they had seen enough beads to feel confident about their decision.)
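The experiments themselves, of course, presented participants with beads rather than formulas, but the task has a well-defined normative answer given by Bayes’ rule. The sketch below is purely illustrative (it is not part of Garety and Hemsley’s procedure, and the function name and parameters are my own); it assumes draws with replacement and a fifty-fifty prior over the two jars.

```python
def posterior(draws, p_dominant=0.85, prior=0.5):
    """Posterior probability that the beads come from jar A (the jar in
    which colour 'A' dominates), after seeing a sequence of draws.

    draws      -- string of bead colours seen so far, e.g. 'AAB'
    p_dominant -- proportion of the dominant colour in each jar
                  (0.85 corresponds to the 85:15 ratio in the text)
    prior      -- prior probability of jar A before any beads are seen

    Illustrative sketch only; assumes independent draws with replacement.
    """
    like_a = like_b = 1.0
    for bead in draws:
        if bead == 'A':
            like_a *= p_dominant        # colour A is common in jar A
            like_b *= 1.0 - p_dominant  # but rare in jar B
        else:
            like_a *= 1.0 - p_dominant
            like_b *= p_dominant
    return like_a * prior / (like_a * prior + like_b * (1.0 - prior))
```

Two properties of this calculation are worth noticing. Because the two jars mirror each other, only the difference between the colour counts matters, so one ‘A’ bead and the sequence ‘AAB’ both leave the posterior at 0.85. And when the jars are less extreme, a single bead is much less informative (with a 60:40 jar one bead yields a posterior of only 0.6), which is why the increasing caution in uncertain situations described below is the rational response.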
In their first study, Garety and her colleagues demonstrated that deluded patients made guesses on the basis of less evidence and with greater confidence, in comparison with a mixed group of psychiatric patients without delusions.97 A more surprising result was obtained in a second study, in which the sequence of beads was designed so that it favoured one jar early in the sequence (when one colour appeared much more often than the other), but the other jar later (when most beads were of the colour which had initially been least frequent). In this experiment, Garety found that her deluded patients more rapidly changed their minds about the jar of origin than her non-deluded control participants.98 This finding is paradoxical because it implies that deluded patients can sometimes be excessively flexible in their beliefs.
Garety’s findings provoked other investigators to carry out experiments that have generally supported her initial observations. Yvonne Linney, Emmanuelle Peters and Peter Ayton at the Institute of Psychiatry in London found that ordinary people who scored highly on a questionnaire measure of delusional thinking also showed a tendency to jump to conclusions on a range of reasoning tasks.99 Psychologists Caroline John and Guy Dodgson in Newcastle tested deluded patients and ordinary people using a variant of the well-known twenty-questions game, in which participants pose a series of questions (for example, ‘Is it vegetable?’) to elicit yes/no answers, until they have enough evidence to guess the identity of a hidden object. When taking part in this game deluded patients asked fewer questions than ordinary people before making their first guess.100
Some studies have also shown that this tendency in deluded patients is more pronounced when they are asked to reason about meaningful stimuli. Robert Dudley and his colleagues asked participants to guess whether a series of statements had been made by a person with mostly negative or mostly positive opinions about someone similar to themselves.101 In a comparable study conducted by Heather Young and myself, we asked participants to guess whether a list of descriptions concerned someone who had mainly positive traits or someone who had mainly negative traits.102 In both of these experiments, all of those who took part made more hasty and extreme judgements in these emotionally salient conditions than on Garety and Hemsley’s original beads task.
Unfortunately, the cause of the deluded patient’s epistemological impulsiveness is unknown. One possibility, which might explain why they change their minds quickly when the weight of evidence on the beads task changes, is that they respond only to recent information. This explanation would be consistent with a neglected theory of schizophrenia proposed by American psychologist Kurt Salzinger, who argued that many of the cognitive deficits experienced by patients are a consequence of their tendency to respond to the most immediate stimuli in their environment.103 Robert Dudley and his colleagues104 and Heather Young and I105 carried out experiments to test whether this kind of deficit could account for Garety’s results, by seeing how patients managed when the balance of the evidence was less clear-cut. For example, Heather Young and I repeated the beads-in-a-jar experiment using colour ratios of 90:10, 75:25 and 60:40. We found that deluded patients, like ordinary people, became more cautious in their judgements as the ratio of the colours approached 50:50. This increasing caution in uncertain situations is completely rational and would not be expected if the patients based their judgements only on the last beads that they had seen.
A second possibility is that deluded people do not understand how to go about testing hypotheses. Earlier in this chapter, I discussed Sir Karl Popper’s famous proposal that the most logical way of evaluating a theory is to look for evidence against it.106 According to Popper, a single piece of disconfirmatory evidence can be enough to bring a theory to its knees, whereas confirmatory evidence may be equally consistent with rival theories. Although the observation that ordinary people typically seek confirmatory data when testing their hypotheses (the so-called confirmation bias) has sometimes been cited as evidence that we can be highly irrational, it has recently become clear that this bias may reflect ‘sensible reasoning’ strategies which, although illogical in the formal sense, are highly effective in real life.
Typically, the strategy we adopt when testing a hypothesis depends on the nature of the hypothesis. When some kind of choice is believed to have resulted in a good outcome (for example, when we believe that a cake is particularly nice because we used honey instead of sugar), it is sensible to test the hypothesis by looking for confirmatory evidence (for example, by baking another cake with honey but changing some of the other ingredients). This is because the result of the test is likely to be a further positive outcome (another nice cake). However, if the outcome is negative (for example, if we think that a cake tastes awful because we used margarine instead of butter) it is sensible to devise a disconfirmatory test (for example, by baking a cake with butter) as this will reduce the possibility of another disappointing result. Studies show that both children and ordinary adults vary their hypothesis-testing strategies according to the expected outcome in just this way.107 When Heather Young and I asked paranoid patients, depressed patients and ordinary people to choose methods of testing a range of hypotheses about positive and negative outcomes, we found no differences. Like ordinary people, our deluded patients consistently chose confirmatory tests of positive outcomes and disconfirmatory tests of negative outcomes.108
Emotional and motivational factors provide a third possible explanation for the deluded patient’s tendency to jump to conclusions. The idea that people vary in their ability to cope with ambiguous information was familiar to Milton Rokeach, who saw it as a cause of political dogmatism. More recently, this idea has been explored by American social psychologists Arie Kruglanski and Donna Webster, who have proposed the term need for closure to describe the general desire for ‘an answer on a given topic, any answer compared to confusion and ambiguity’.109 In experiments with ordinary people Kruglanski and Webster have shown that this need can be provoked by circumstances (most obviously, when we have to work against the clock) but also that some people tend to be less tolerant of ambiguity than others. In order to measure this tendency, they developed a simple questionnaire, which assesses a subjective need for order and structure, emotional discomfort in the face of uncertainty, decisiveness, the inability to cope with unpredictability, and closed-mindedness.
Do deluded patients have a high need for closure? One observation that we have already considered suggests that they might. In Chapter 5, I briefly described a study carried out by British psychiatrist Glen Roberts, in which deluded patients were compared with trainee Anglican priests. Both the priests and the patients scored very high on a measure of their need for meaning in their lives. In the same study, the patients were asked whether they would welcome evidence that their delusions – which often caused them considerable distress – were false. Surprisingly, the majority said that they would not. Roberts interpreted this finding as evidence that, for the patients, their delusional worlds were ‘preferred realities’, embraced perhaps because they were more predictable than the real world.110
Direct evidence of a high need for closure in deluded patients has recently been obtained in a study conducted by my Ph.D. student Rebecca Swarbrick, who used Kruglanski and Webster’s questionnaire to test patients suffering from paranoid delusions, patients who had recovered from paranoid delusions and ordinary people.111 The currently deluded patients and those who had recovered scored very highly on the questionnaire compared to the ordinary people, suggesting that they experience emotional discomfort when confronted by uncertainty. This finding from the remitted patients is particularly interesting because it suggests that a high need for closure may be a trait that predisposes people to develop delusions, and which remains even after their delusions have remitted. However, whether a high need for closure can explain the findings obtained from Philippa Garety’s beads-in-a-jar task seems doubtful. In a recent study, Emmanuelle Peters found that ordinary people who scored highly on a delusions questionnaire also scored highly on Kruglanski and Webster’s need-for-closure questionnaire. However, in the same study no relationship was found between scores on the need-for-closure questionnaire and scores on Garety’s beads-in-a-jar test.112
Back to the reaction-maintenance principle