Saving Normal: An Insider's Revolt Against Out-of-Control Psychiatric Diagnosis, DSM-5, Big Pharma, and the Medicalization of Ordinary Life


by Allen Frances


  Tarantism and Saint Vitus’s dance persisted for four hundred years, from about 1300 to 1700, and then disappeared, with only sporadic cases reported since. This “little ice age” was a difficult time to be alive, with recurrent cycles of famine, pestilence, war, and brigandage. The dancing fads may have offered a cause and a cure for both individual psychopathology and the rampant social breakdown.3

  Vampire Hysteria (Circa 1720 to 1770)

  The fear of vampires goes far back in time and deep into the human psyche. We have always faced a fundamental problem: what to do with the dead and how to understand what has happened to them. Every culture finds its own solution to this most existential of questions—elaborate burial rituals and folkloric theories are developed to manage the possibly permeable boundary between life and death.

  The problem gained poignancy when nomadic hunters and gatherers settled down to farm and began to live quite literally on top of their dead. Previously, corpses were conveniently left behind when the tribe moved on. But forced proximity to dead ancestors gave rise to fear and worship of them. If someone got sick, if something went wrong, it made sense to worry that the dead (who were living underfoot and perhaps feeling envious or vengeful or dissatisfied) might rise up to exact their pound of flesh.

  In vampirism, this was believed literally—illness in the living was attributed to the blood drinking or flesh eating of the (not so) departed (not so loving) former loved one. The fad started and ended in central Europe during a fifty-year period in the eighteenth century. The surface intellectual calm of the Age of Enlightenment was a thin veneer covering the raging superstition of turbulent, barely postfeudal, mostly rural Europe. Vampirism emerged when Slavic folktales of the “undead” were carried by word of mouth to new neighbors in the expanding Austrian empire. Dedicated, but credulous, Hapsburg officials made the mistake of being too bureaucratic. Following the methods recommended by their Serbian advisers, they made diligent investigations, conducted exhumations, and dutifully staked corpses. Careful reports detailing the best local practices used in executing vampires were widely circulated. Thus legitimized, terror of the “undead” spread like wildfire from village to village. Soon the writers got into the act and invented a lurid vampire literature that fed the flames and precipitated a frenzy of sightings. Alleged “attacks” were reported in East Prussia in 1721 and throughout the Austrian empire during the 1720s and 1730s. The word vampire first entered the English language in 1734, introduced by a travelogue on central Europe. This was the first media-driven fad in history, but not the last.

  Vampire worries were heightened by the difficulty of distinguishing between the quick and the dead. Until fairly recently, a lack of medical technology made this a matter of guesswork and dispute. In a world without stethoscopes, there was no clear boundary. People feared the “undead” and also the risk of themselves being buried alive. Graveside vigils were common—not only to show respect, but also to detect renewed signs of life and to discourage grave robbers. The close observation of corpses contributed to legends of their continued appetites and prowess. Corpses vary dramatically in their rates and style of decomposition. People may temporarily look better dead than they ever did alive—the wasted living body more pleasing to the eye when filled out by the gases of decomposition. The ruddy dusky color might suggest the corpse had enjoyed the blood of the living. This logical enough suspicion would be confirmed if blood was seen seeping from the mouth or nose of the undead.

  Efforts to extirpate the vampires were unkind to the living and to the dead. For vampires captured on the hoof, there were public executions preceded by tortures of the cruelest kind. The victims were the usual cast of suspects—the mentally ill and mentally challenged; the presumed witches (probably women good with herbs); the suicidal, who flirted with death; rebels from church doctrine; and anyone who might be at the wrong place at the wrong time or have the wrong enemy.

  The cure to this craziness came in the person of Maria Theresa, the wise queen of Austria. Her court physician did a thorough study to determine whether vampires actually existed and concluded that there was no basis for any of the claims. The empress then outlawed exhumation under severe penalty, and vampirism died off.

  Small, isolated epidemics of vampirism do sometimes still arise—in recent decades, in Puerto Rico, Haiti, Mexico, Malawi, and of all places, London.4

  Werther Fever Creates Epidemics of Suicide (First Occurrence 1774, and in Flurries Since)

  History presents no clearer testimony to the power of great literature (or the danger of fads) than the deadly effect of Goethe’s novel The Sorrows of Young Werther, published in 1774.5 This partly autobiographical tale of unrequited love and romantic suicide created a new phenomenon—the author as celebrity, the book as fashion guide. A contagion of Werther fever infected Europe, influencing dress, speech, and manners and provoking a fatal chain of copycat suicides. Ironically, Goethe overcame his own lovesickness, lived into ripe old age, renounced the book, and regretted the harms it had caused. His older and wiser hero, Faust, forsakes the temptations of fickle romance for the safer pleasures of building dikes in Holland.

  Imitative suicide has two quite different patterns—cluster and mass. Suicide clusters occur when people copy either a celebrity suicide or that of a relative, friend, classmate, or coworker. Fears of suicide contagion are real enough to have prompted the Centers for Disease Control to issue a set of guidelines on media reporting—suggesting concise, factual stories that avoid glorifying sensationalism, glamorous romance, detailed how-to descriptions, promoting fame by suicide, or any suggestion that suicide can be a rational choice or a gateway to lasting fame.6

  Mass suicide has a socially sanctioned motivation. There have been dozens of episodes throughout history of defeated armies (or their women and children) committing suicide together rather than face murder or enslavement. Less frequent is the protest suicide, when a group jointly decides to make their point in the strongest possible way. A third variant is the group suicide (kamikaze) mission in defense of a religion, ideal, or nation. Then there are the mass religious suicides: follow-the-leader behavior commanded by some would-be messiah.

  Natural selection has been ruthless and largely successful in pruning out suicidal DNA. Despite all of life’s vicissitudes, only one in a thousand people take death into their own hands. The self-destructive often die young, taking their genes with them into oblivion. Life-affirming DNA wins the procreation sweepstakes and keeps us struggling to stay in the game, whatever the hardship and pain. In suicide epidemics, the herd instinct overwhelms our powerful instinct for self-preservation. There is no better illustration of the countervailing power of fads than the fact that the urge to join one sometimes trumps staying alive.

  Neuroscience Promotes Clinical Fads

  Neurasthenia, hysteria, and multiple personality disorder were three late-nineteenth-century fads all started by charismatic neurologists (Beard and Charcot) to explain the puzzlingly nonspecific presentations of many of their patients. Why three epidemics all at once? And why all three started by neurologists? This is a cautionary (and currently very relevant) tale of how the brilliance of neuroscience findings can sometimes give undeserved authority to half-cocked clinical ideas. The conditions then were similar to conditions now: there was a revolution in understanding how the brain works. The neuron had just been discovered, and scientists (including Freud) were busy tracing the paths of its complex web of synaptic connections. The brain was revealed to be an electrical machine, much more complex but not fundamentally different from the many new electrical devices just then being invented and entering the mainstream of daily life.

  The new biology of brain would explain behaviors previously considered to be within the abstract provinces of the philosophers and the theologians. It might be impossible to plumb the depths of the human soul, but it should be possible to figure out the structural specifications and electrical connections of the human brain. Symptoms were not the result of demonic possession, curse, sin, vampires, or tarantula bites. They were understandable as malfunctions in the wiring of the brain machine. This was, and is, a powerful and accurate model. But (then as now) the problem was underestimating just how difficult it is to probe the secrets of this remarkably complex machine. The authority of compelling neuroscience gave undeserved dignity to daffy clinical concepts that don’t make much sense.

  Thus were born the three fads “neurasthenia,” “hysteria,” and “multiple personality.” Each was a different way of labeling and pretending to understand otherwise confusingly nonspecific human suffering. None turned out to be useful; in some ways all were harmful. Their causal explanations were wrong and the treatment recommendations at best had the efficacy of placebo and more often worsened the problems they were meant to cure. But the labels flourished for decades because they sounded convincing, stood on the high authority of the emerging science of neurology, were promoted by charismatic thought leaders, and met the human need for explanation. This all sounds very current and provides an important lesson. A powerfully convincing, but incorrect and harmful, set of labels and causal theories fooled the smartest doctors and the smartest patients in the world. These were revolutionary best guesses that turned out to be dead wrong—as will many of ours.

  Neurasthenia (Circa Late 1800s to Early 1900s)

  Starting in 1869, an American neurologist named George Miller Beard defined and successfully promoted neurasthenia—literally, weak nerves. He was attempting to fill a diagnostic black hole—how to label the many people with the nonspecific bodily and psychological symptoms of fatigue, loss of energy, weakness, dizziness, fainting, dyspepsia, flatulence, headache, generalized aches and pains, trouble sleeping, and impotence; depression or anxiety or both. Neurasthenia had the attraction of seeming to explain this wide waterfront of commonplace symptoms.

  Beard’s causal theory followed a hydraulics model analogous to a power failure in any electrical machine—physical and mental exhaustion was plausibly linked to a depletion of the central nervous system’s energy supply. Beard attributed this depletion to social causes—how hard it was for people to adjust to a rapidly changing technological civilization, the stresses of urbanization, and the increasingly competitive business environment. People were getting sick because they were pushing themselves beyond their tolerance and reserves. Most cases occurred in the striving classes of sedentary workers—because they were tiring their minds when nature meant them to be tiring their bodies. Sigmund Freud, then a neurologist, accepted the usefulness of neurasthenia as a descriptive diagnosis because it so well described many of his patients. But he developed a completely different theory to explain the energy depletion—depleted libido. This could be constitutional or result from having too many orgasms (most often from excessive masturbation).

  The treatments for neurasthenia have been remarkably varied, nonspecific, and silly. Beard’s preferred treatment was a biological juice-up of the system with electrotherapy. Freud mocked this as “pretense treatment” and instead suggested a reduction of libido-depleting activities like masturbation or excessive intercourse. He did not recommend psychoanalysis for neurasthenia because it was caused by libidinal deficit, not psychological conflict. Treatments suggested by others included rest cures, bathing cures, dietary changes, and distraction from the cares of everyday life. All probably had at best the impact of a good placebo.

  Neurasthenia was a vague and nondescript diagnosis with vague, nondescript, and useless treatments. That this did not reduce its enormous worldwide popularity should tell us a great deal about the seductiveness of clinical confabulations. We have an intellectual need to find an elephant in the cloud. The label we create, however inaccurate, provides a comforting explanation of the patient’s suffering and a target for treatment. It is a metaphor of distress appropriate to the technology and worldview of a particular time and place. When everyone is interested in electrical power, the metaphor of distress becomes energy depletion. When people get interested in neurotransmitters (as is the case now), the glib metaphor becomes “chemical imbalance.”

  Neurasthenia disappeared suddenly—probably because psychiatrists replaced neurologists as the major caregivers of patients with nonspecific and vague physical and mental symptoms. The switch from neurology to psychiatry occasioned a parallel switch from physical to psychological symptoms as the preferred mode of communication between patient and doctor. The psychiatric nomenclature was also expanding and soon became much more specific in describing a wide variety of different outpatient presentations that were previously lumped under the unifying, but essentially meaningless, label of neurasthenia. The fad had run its course.7

  Hysteria/Conversion Disorder (Circa Late 1800s to Early 1900s)

  Of all epidemics, this one had the highest pedigree, promoted as it was by the four most famous neurologists of the time—Jean-Martin Charcot, Pierre-Marie-Félix Janet, Josef Breuer, and Freud. Hysteria described patients presenting with neurological symptoms that were puzzling because they did not conform to the distribution of the nervous system or to established neurological disease. Most common were paralysis, sensory loss, strange sensations, posturing, speech loss or alteration, gagging, convulsions, dizziness, or loss of consciousness.

  Charcot was a great showman who addressed hysteria with an enthusiasm and flair that should have set off alarms. He gathered a stable of highly suggestible patients and an army of students (including Freud) who came to Paris from all of Europe to see the master at work in demonstrations that were well attended, highly dramatic, and photogenic. Charcot reveled in proving that hypnosis could both cure and create the symptoms—he could hypnotize the halt and the lame and make them well; or he could hypnotize the well and make them halt and lame. His patients, many of them housed together, became wonderful mimics of one another’s symptoms even when outside the auspices of the great man. Somehow, Charcot missed the central point of all this. His power of suggestion and the effort to please him had turned his patients into performers, just as he had also become one. Not recognizing his own causative role, Charcot evolved vague theories of brain disease that presumably made one susceptible to both hypnosis and to hysteria.

  Meanwhile, back in Vienna, Breuer (Freud’s other teacher) was having trouble hypnotizing Anna O. She was a creative, intelligent, suggestible, and lonely woman with the usual panoply of nonspecific, mostly neurological symptoms. Under her guidance, the “talking cure” (or psychoanalysis) was invented as an alternative to hypnosis. Instead of going into a hypnotic trance, Anna free-associated, voicing seemingly random thoughts. Then patient and doctor made psychological connections linking her fantasies and unconscious impulses to her past life. It worked! Symptoms promptly improved. But Anna would get sick again whenever her recovery threatened to end the cherished relationship with Breuer. There was an obvious explanation. Anna got better to please Breuer and got worse to avoid losing him. This made Breuer nervous (to say nothing of his jealous wife, who probably understood Anna’s motives a lot more clearly than he or Freud ever did).

  It was clear, as with Charcot’s hypnosis, that symptoms could be made to disappear or to appear based on the patient’s suggestibility and that suggestion was an important force in every powerful doctor-patient relationship. Freud coined the term “transference” for the parental role that tied patients to their doctors and made them so vulnerable to influence. But Freud failed to understand how big a role suggestion also plays in psychoanalysis. He understood and overvalued intrapsychic conflict and transference; he failed to understand and undervalued the current interpersonal relationship.8

  Psychoanalysis was an ineffective treatment for conversion disorder and, like hypnosis, contributed to propagating it. Ironically, a shaman would be a more effective therapist for Anna O. than a psychoanalyst. He would understand her symptom as metaphor and would find a more effective way of suggesting it away. The treatment of Anna O. ended badly, with Anna still sick and angry at Breuer. But there was a happy ending. In real life, Anna O. was Bertha Pappenheim, who recovered and went on to become one of the founders of the profession of social work.

  As with neurasthenia, conversion hysteria disappeared when psychiatrists replaced neurologists as the primary caregivers for this population of help seekers. Suggestible patients seeing neurologists naturally enough presented with neurological symptoms. The same patient seeing a psychoanalyst would express suffering with more emotional and cognitive symptoms. Conversion symptoms continue to be seen in parts of the world where there are few mental health workers and patients can best access help through physical symptoms.9

  Multiple Personality Disorder

  Multiple personality disorder first became common in Europe at the turn of the twentieth century. Charcot was again the pied piper. He had helped to make hypnosis a popular medical treatment (when it wasn’t also doubling as a popular parlor trick). The hypnotic trance brought to light unacceptable feelings, fantasies, memories, and urges previously kept outside conscious awareness. A collaboration of suggestible patients and suggestible doctors elaborated the notion that the individual was harboring a hidden personality (or two or three or more). Through a process of “dissociation,” the hidden personalities had established their own independent existence and might even at times temporarily take charge and do things that were outside the control, or even the awareness, of the dominant personality. This was a way of converting a metaphor of distress and discomfort with oneself into a seemingly coherent disease that would also reduce any personal responsibility for the disowned feelings.

  Paradoxically, the way to treat the dissociation that presumably caused the multiplication of personality was to encourage even more dissociation through hypnosis. The goal was to induce the “alter” personalities to enter the light of day so that they might be melded together into a cohesive whole. Not surprisingly, the overall effect of hypnotic treatment was to promote, rather than to cure, the presumed illness—the submerged personalities continued to divide and multiply. Fortunately, therapists and patients eventually caught on that hypnosis was doing more harm than good, and it became less popular. Multiple personality disorder disappeared when hypnotists were replaced by psychoanalysts, who focused the patient’s attention on fragmented repressed impulses and memories, rather than on integrating repressed personalities.

 
