Madness Explained


by Richard P. Bentall


  Detailed studies have shown that delusional beliefs, like ordinary beliefs and attitudes, vary across a number of dimensions, such as their bizarreness, the conviction with which they are held, the extent to which the patient is preoccupied by them, and the extent to which they cause distress.34 It is true, as psychologist Martin Harrow and his colleagues in Chicago have documented in carefully conducted long-term investigations, that delusions sometimes persist apparently unchanged for many years.35 However, as Milton Rokeach recognized when conducting his ‘three Christs’ experiment at Ypsilanti Hospital, the same might be said of any beliefs that are important to an individual’s identity, such as, for example, political and religious convictions. In fact, over short periods of time, the conviction with which delusions are held may fluctuate, so that beliefs that are held to be absolutely true on one day may be described as only possibly true on the next.36

  Unfortunately, clinicians and researchers schooled in the biomedical approach to psychiatry have often assumed that delusions and ordinary beliefs are completely different. As recently as 1991, for example, the Cambridge psychiatrist German Berrios asserted that delusions are not beliefs at all, but ‘empty speech acts, whose informational content refers to neither world nor self’.37 Consequently, throughout most of the twentieth century, delusions were generally ignored by experimental psychologists.

  My own work on this topic began a few years after I had qualified as a clinical psychologist, by which time I had published several studies of auditory hallucinations and was keen to see whether my general approach could be brought to bear on other symptoms. Delusions seemed an obvious candidate, and I was amazed to discover the extent to which researchers had neglected them. Fortunately, I had just moved from a National Health Service post to a junior lectureship at the University of Liverpool, which ran a small fund for junior staff who wished to embark on new lines of research. My application, written in a couple of hours and without much optimism, was brief and could be summarized in two sentences: ‘No one has done much research on delusions. Give me some money and I’ll do some.’ To my amazement, they did.

  The funds provided to me by the University allowed me to employ a research assistant for six months. Sue Kaney, a recent graduate, was recruited and proved to have a knack for persuading suitable patients to co-operate. The initial studies that we completed together led to further experiments, which have kept me busy ever since. As psychologists in other universities had also spotted the dearth of research on delusions, by the late 1990s there had emerged a small cottage industry of investigators struggling to explain why psychotic patients hold unusual beliefs.

  The Psychotic as Scientist

  Faced with the choice between changing one’s mind and proving that there is no need to do so, almost everyone gets busy on the proof.

  John Kenneth Galbraith38

  Most people probably regard delusional and scientific thinking as completely incompatible phenomena. After all, it is easy for scientific researchers to assume that there is something unique or special about the systematic way in which we approach our task. According to this orthodox view, the scientific approach involves the careful and dispassionate collection and evaluation of evidence; common-sense reasoning, by contrast, is open to all sorts of self-interested biases, fuzzy thinking and naked prejudices; and delusional reasoning exemplifies these biases in excess. Of course, it is not that simple. The inhabitants of this planet cannot easily be divided into different species on the basis of their scientific skills, any more than they can be easily divided into the mad and the sane.

  On the one hand, studies of what scientists actually do (as opposed to what undergraduate textbooks say that they do) have shown that research is far from an emotionless activity. In pursuit of the recognition of their peers, famous scientists have deviated from prescribed methods so frequently that some historians have suggested that the history of science should be X-rated.39 At the same time, sociologists have observed that the language employed by modern scientists, when talking freely about their work, often reveals intense rivalries between different research groups, and portrays the scientific process as an epic struggle in which heroes and villains battle for high ground.40 At moments when the scientific ego is especially threatened, the dividing line between the practice of science and paranoid thinking becomes almost undetectable. On opening the letter from a journal that tells us that the editor has on this occasion declined to accept our manuscript for publication, we scan the accompanying anonymous referees’ reports to see if we can work out who to blame. (‘It must be John Doe! He’s always been envious of my work!’)

  On the other hand, ordinary people sometimes approach problems in their lives in ways that are no less systematic than the strategies of trained researchers, as highlighted in the following quotation from the clinical psychologist George Kelly:

  One of my tasks in the 1930s was to direct graduate studies leading to the Masters Degree. A typical afternoon might find me talking to a graduate student at one o’clock, doing all those familiar things that thesis directors have to do – encouraging the student to pin-point the issues, to observe, to become intimate with the problem, to form hypotheses either inductively or deductively, to make some preliminary test runs, to control his experiments so he will know what led to what, to generalize cautiously and to revise his thinking in the light of experience.

  At two o’clock I might have an interview with a client. During this interview, I would not be taking the role of the scientist but rather helping the distressed person to sort out some solutions to his life’s problems. So what would I do? Why, I would try to get him to pin-point the issues, to observe, to become intimate with the problem, to form hypotheses, to make test runs, to relate outcomes to anticipations, to control his ventures so that he knows what led to what, to generalize cautiously and to revise his dogma in the light of experience.41

  When exploring these parallels, a good starting point would be a simple framework for understanding how beliefs and attitudes are formed. Figure 12.1 shows a simple ‘heuristic’ model I devised in order to guide my own research when I obtained my small pump-priming grant.42 (I literally drew it up on the back of an envelope one day.) According to this model, beliefs about the world are not plucked out of the blue but are based on events (data). The events have to be perceived and attended to (those which we fail to notice cannot influence our thinking). Once we have noticed them, we can make inferences about their importance and meaning, and this leads to beliefs about the world. Finally, we may seek further information to support or refute our beliefs, and so the cycle is repeated.

  (The philosopher of science Sir Karl Popper argued that data that refute a hypothesis are nearly always more informative than data that appear to support it.43 This is because negative evidence can be decisive whereas positive evidence may support several alternative hypotheses. Popper therefore suggested that scientists should vigorously seek evidence that disconfirms their pet theories in the hope that there will be no such evidence. Ordinary people, however, usually seek evidence that favours their ideas, a phenomenon known to psychologists as the confirmation bias.44 Interestingly, some studies have shown this bias in professional scientists, although whether it has an adverse effect on their work remains a matter of debate.)45

  Figure 12.1 A simple heuristic model of the processes involved in acquiring and maintaining a belief (from R. P. Bentall (ed.) (1990) Reconstructing Schizophrenia. London: Routledge, pp. 23–60).

  It is probably worth noting that the model shown in Figure 12.1, although superficially different from the model of depression I developed in Chapter 10, is really quite similar to it (after all, an attribution could be described as a type of inference). Of course, it is not a model of delusions per se. It is a very crude account of the way in which beliefs and attitudes are acquired by scientists, ordinary people and psychiatric patients. However, by considering each part of this account in turn, we can begin to explore how different factors may lead an individual to develop beliefs that appear strange or unusual to other people.

  The Nugget of Truth in Paranoia

  When accused by Henry Kissinger of being paranoid about the Arabs, Golda Meir, the Israeli prime minister, retorted that ‘Even paranoids have enemies.’46 Of course, if patients’ accounts of their lives were completely realistic, it would be wrong to regard them as delusional at all. In fact, it seems that clinicians sometimes do mistake patients’ well-founded but unusual fears for delusions. A group of psychiatrists working on Long Island, USA, reported their attempts to help a distressed woman who had been brought into their clinic by friends. The woman said that ‘something horrible’ would happen to her if she did not leave the Island by the end of the day. Although the psychiatrists diagnosed her as suffering from a psychotic illness, she was later able to record a series of unpleasant phone calls, thereby confirming that her life had been threatened by an acquaintance.47

  It is possible that delusional beliefs, even if clearly unrealistic, contain a nugget of truth that is distorted by the delusional process. Evidence in favour of this hypothesis has come to light from modern research into the case of Daniel Schreber. Following up Freud’s insight that Schreber’s delusions had something to do with his father, American psychoanalyst William Niederland investigated the judge’s family.48 Schreber’s father, it transpired, was a well-known physician and educationalist who had unusual views about childrearing, which he proselytized in books that were widely read in his day. Believing that children should learn to hold rigid postures when sitting, walking or even sleeping, he invented a series of braces to force them to adopt the desired positions. For example, one brace was designed to make children sit upright while eating. Bolted to the table in front of the child, it consisted of an iron bar that extended up to the child’s face, which it gripped by means of a leather strap under the chin.

  Although subsequent research by the historian Zvi Lothane49 has suggested a more positive view of the Schreber family than that painted by Niederland, it seems likely that the younger Schreber spent much of his childhood restrained by his father’s devices. Moreover, the contents of the bizarre ideas he developed later in his life seem to reflect these experiences. For example, a delusion about a ‘chest compression miracle’ seemed to relate to his father’s practice of securing the child to his bed by means of a specially designed strap. Similarly, the miracle of the ‘head compressing machine’ seems much more understandable when we know that Schreber’s father invented a Kopfhalter (head-holder, consisting of a strap, with one end clamped to the child’s hair and the other to his underwear, so that his hair was pulled if he did not hold his head straight) and a chin band (a helmet-like device designed to ensure proper growth of the jaw and teeth, see Figure 12.2).

  Of course, it is difficult to gauge the extent to which real experiences colour most delusions, because it is hard to verify deluded patients’ accounts of their lives. In an attempt to circumvent this problem, University of Illinois sociologists John Mirowsky and Catherine Ross studied persecutory beliefs (which they assessed by a brief interview) in a survey of 500 randomly selected residents of El Paso, Texas, and Juarez in Mexico. As sociologists, Mirowsky and Ross were interested in objective circumstances that would encourage feelings of external control and mistrust, which, they believed, would lead to paranoid thinking. Such circumstances might include experiences of victimization and powerlessness which, previous research had shown, are very common in people of low socio-economic status who lack educational opportunities. Mirowsky and Ross were able to show that, in their sample, persecutory beliefs, beliefs about external control, and mistrust were connected to socio-economic status and educational attainment in roughly the manner they had expected.50

  Figure 12.2 Schreber’s chin band (reprinted from M. Schatzman (1973), Soul Murder: Persecution in the Family. London: Penguin Books).

  Although the sceptical reader might be forgiven for wondering whether Mirowsky and Ross’s findings can be generalized to psychiatric patients, two further lines of evidence support this possibility. The first concerns the role of stressful experiences in psychosis. British sociologist Tirril Harris studied patients living in the community and found a high rate of events that she described as ‘intrusive’ in the weeks preceding psychotic and especially paranoid relapses.51 These kinds of events, by Harris’s definition, involve someone – not a close relative – imposing demands on the patient. Examples include threats from landlords, police inquiries, burglaries and unwanted sexual propositions. Harris’s finding was replicated in some but not all of the centres participating in the WHO study of the Determinants of Outcome of Severe Mental Disorders.52 More recently, Thomas Fuchs of the University of Heidelberg has collected biographical data from elderly patients diagnosed as suffering from late paraphrenia (a term sometimes used to describe paranoid-like psychotic illnesses which begin late in life) and depression, finding evidence of a higher frequency of discriminatory, humiliating and threatening experiences in the paranoid group.53

  The second relevant source of evidence concerns the high rate of psychotic, and especially paranoid, illness seen in Afro-Caribbean people living in Britain,54 which I discussed in Chapter 6. Of course, immigrants living in a racially intolerant society are especially likely to have experiences of victimization and powerlessness of the kind described by Mirowsky and Ross. The evidence that this kind of stress is really responsible for the high rates of psychosis observed in British Afro-Caribbeans will be discussed in detail in a later chapter. For the moment, we can merely note that the elevated risk of psychosis experienced by this particular group can certainly be interpreted as consistent with the ‘nugget of truth’ hypothesis.

  My own clinical experience is that delusional ideas rarely lack a nugget of truth. No matter how bizarre the ideas expressed by patients, it is usually possible to identify events in their lives that have contributed to their content. Of course, this observation does not imply that delusions are rational. (Very few people really are victims of government plots.) The nugget of truth is usually distorted in some way, and the challenge to psychologists is to discover how this happens.

  Seeing is Believing

  The next part of our back-of-an-envelope model concerns perceptual and attentional processes. The idea that delusions might be the product of rational attempts to make sense of anomalous perceptions was considered by Kraepelin, but later elaborated in detail by Harvard psychologist Brendan Maher. Maher’s theory consists of two separable hypotheses. The first, which we will consider here, is that delusions are always a reaction to some kind of unusual perception. The second, which we will consider later, is that delusions are never the product of abnormal reasoning. Clearly the second hypothesis does not follow inevitably from the first, although Maher sometimes seems to suggest that it does.55

  Maher has pointed to case study evidence that appears to support the first part of his theory.56 For example, he has interpreted Schreber’s delusions as attempts to explain unusual bodily sensations. (It seems likely that many somatic delusions develop in this way.) In clinical practice, it is not unusual to encounter patients whose delusions seem to be interpretations of other types of peculiar experiences. An unpleasant auditory hallucination, for instance, may be attributed to a malevolent spirit or to the Devil.

  Most of the research on Maher’s theory has focused on the effects of perceptual deficits. Following clinical observations of an apparent association between the slow onset of deafness in later life and paranoia, for example, it was suggested that elderly patients’ failure to recognize that they have hearing difficulties might cause them to be mistrustful and suspicious of others (if you notice that your friends have begun to speak in whispers in your presence you might incorrectly infer that they are talking about you).57 In an attempt to test this hypothesis, psychologist Philip Zimbardo and his colleagues at Stanford University in California used hypnosis to induce a temporary state of deafness in hypnotically susceptible students, allowing some to remain aware that they had been hypnotized. Those who were unaware of the source of their deafness, but not those who knew that they had been hypnotized, showed an increase in paranoid ideas, as indicated by their responses on various questionnaires, and their attitudes towards others who were present during the experiment.58 Unfortunately, these findings are difficult to interpret because it is not clear how well hypnotic deafness simulates real hearing loss. Moreover, recent, more carefully conducted studies have failed to show the correlation between deafness and paranoid symptoms reported by earlier researchers.59

  Clearer evidence in favour of the anomalous perception model exists for the Capgras delusion. The case of Madame M, described by Joseph Capgras and J. Reboul-Lachaux (and recently translated into English), is worth considering in some detail because it illustrates the complexity of the condition.60 Mme M’s illness began with the conviction that she was a member of the French aristocracy and had been dispossessed of her inheritance, having been replaced by an impostor. This grandiose delusion was transformed into a series of delusional misidentifications after the death of her twin sons and the deterioration of her marriage. Looking into the grave as a small coffin descended into the earth, she believed that she was witnessing the burial of a young boy who was not her own. Later, she came to believe that an impostor had been substituted for her husband and, furthermore, that the impostor had been replaced by further impostors on no less than eighty occasions. During the First World War, Mme M believed that tunnels beneath Paris contained thousands of people who had been abducted and replaced by substitutes, and that the aircraft that flew overhead were attempting to drive people below so that they would suffer a similar fate. It is not difficult to see why Capgras offered a psychoanalytical account of Mme M’s illness. Many later psychologists and psychiatrists have similarly assumed that the delusion results from some kind of ambivalence towards the person who is believed to have been replaced.61

 
