My Age of Anxiety: Fear, Hope, Dread, and the Search for Peace of Mind


by Scott Stossel


  How did SSRIs go from being considered ineffective to being one of the best-selling drug classes in history? In the answer to that question lies a story about how dramatically our understanding of anxiety and depression has changed in a short period of time.

  Once again the story begins at Steve Brodie’s laboratory at the National Institutes of Health. After leaving Brodie’s lab for the University of Gothenburg in Sweden in 1959, Arvid Carlsson gave tricyclic antidepressants to mice with artificially depleted serotonin levels. Would the antidepressants boost serotonin levels? Yes; imipramine had serotonin-reuptake-inhibiting effects. In the 1960s, Carlsson tried similar experiments with antihistamines. Would they also inhibit the reuptake of serotonin? Again, yes. Carlsson found that an antihistamine called chlorpheniramine had a more powerful and precise effect on the brain’s serotonin receptors than did either imipramine or amitriptyline, the two most commonly prescribed tricyclics. Carlsson invoked this finding as evidence to support what he called the serotonin hypothesis of depression. He then set about applying this discovery in pursuit of a more potent antidepressant. “This,” the medical historian Edward Shorter has written, “was the birthing hour of the SSRIs.”§

  Carlsson next experimented with a different antihistamine, brompheniramine (the active ingredient in the cough medication Dimetapp). It, too, blocked the reuptake of serotonin and norepinephrine more robustly than imipramine did. He modified the antihistamine to create compound H102-09, which blocked only the reuptake of serotonin. Working with a team of researchers at Astra, a Swedish pharmaceutical company, Carlsson applied for a patent for H102-09—which had by then been renamed zimelidine—on April 28, 1971. Early clinical trials suggested zimelidine had some effectiveness in reducing depression, and in 1982 Astra started selling it in Europe as the antidepressant Zelmid. Astra licensed Zelmid’s North American rights to Merck, which began preparing to release the drug in the United States. Then tragedy struck: some patients taking Zelmid became paralyzed; a few died. Zelmid was pulled from pharmacy shelves in Europe and was never distributed in America.

  Executives at Eli Lilly watched these developments with interest. Some ten years earlier, biochemists at the company’s labs in Indiana had fiddled with chemical derivatives of a different antihistamine, diphenhydramine (the active ingredient in the allergy medication Benadryl), to create a compound called LY-82816, which had a potent effect on serotonin but only a weak effect on norepinephrine levels. This made LY-82816 the most “clean,” or “selective,” of the several compounds the researchers tested.‖ David Wong, an Eli Lilly biochemist, reformulated LY-82816 into compound LY-110140 and wrote up his findings in the journal Life Sciences in 1974. “At this point,” Wong would later recall, “work on [LY-110140] was an academic exercise.” Nobody knew whether there would be a market for even one serotonin-boosting psychiatric medication—and since Zelmid already had a head start of several years in getting through clinical trials and onto the market, Eli Lilly put LY-110140, now called fluoxetine, aside.

  But when Zelmid started paralyzing people, Eli Lilly executives realized fluoxetine now had a chance to be the first SSRI on the market in America, so they restarted the research machinery. Though many of the early clinical trials were not notably successful, the drug was approved and released in Belgium in 1986. In January 1988, fluoxetine was released in the United States, marketed as “the first highly specific, highly potent blocker of serotonin uptake.” Eli Lilly gave it the trade name Prozac, which a branding firm had thought had “zap” to it.

  Two years later, the pill graced the cover of Newsweek. Three years after that, Peter Kramer, the Brown psychiatrist, published Listening to Prozac.

  When Listening to Prozac came out in the summer of 1993, I was twenty-three and on my third tricyclic antidepressant—this time desipramine, whose trade name is Norpramin. I read the book with fascination, marveling at the transformative effects Prozac had had on Kramer’s patients. Many of his patients became, as he put it, “better than well”: “Prozac seemed to give social confidence to the habitually timid, to make the sensitive brash, to lend the introvert the social skills of a salesman.” Hmm, I thought. This sounds pretty good. My longtime psychiatrist, Dr. L., had been suggesting Prozac to me for months. But reading Kramer, I worried about what Faustian exchange was being made here—what got lost, in selfhood or the more idiosyncratic parts of personality, when Prozac medicated away the nervousness or the melancholy. In his book, Kramer concluded forcefully that for most severely anxious or depressed patients, the bargain was worthwhile. But he worried, too, about what he called “cosmetic psychopharmacology”—the use of psychiatric drugs by “normal” or “healthy” people to become happier, more social, more professionally effective.

  Before long, I joined the millions of other Americans taking SSRIs—and I’ve been on one or another pretty much continuously for going on twenty years. Nevertheless, I can’t say with complete conviction that these drugs have worked—or that they’ve been worth the costs in terms of money, side effects, drug-switching traumas, and who knows what long-term effects on my brain.

  After the initial flush of enthusiasm for SSRIs, some of the fears that had surrounded tranquilizers in the 1970s began clustering around antidepressants. “It is now clear,” David Healy, the historian of psychopharmacology, has written, “that the rates at which withdrawal problems have been reported on [Paxil] exceed the rates at which withdrawal problems have been reported on any other psychotropic drug ever.”a

  “Paxil is truly addictive,” Frank Berger, the inventor of Miltown, said not long before his death in 2008. “If you have somebody on Paxil, it’s not so easy to get him off.… This is not the case with Librium, Valium and Miltown.” A few years ago, my primary care physician told me she had stopped prescribing Paxil because so many of her patients had reported such severe withdrawal effects.

  Even leaving aside withdrawal effects, there is now a large pile of evidence suggesting—in line with those early studies of the ineffectiveness of Prozac and Paxil—that SSRIs may not work terribly well. In January 2010, almost exactly twenty years after introducing Americans to SSRIs, Newsweek published a cover story reporting on studies that suggested these drugs are barely more effective than sugar pills for the treatment of anxiety and depression. Two massive studies from 2006 showed most patients do not get better taking antidepressants; only about a third of the patients in these studies improved dramatically after a first trial. After reviewing dozens of studies on SSRI effectiveness, the British Medical Journal concluded that Prozac, Zoloft, Paxil, and the other drugs in the SSRI class “do not have a clinically meaningful advantage over placebo.”b

  How can this be? Tens of millions of Americans—including me and many people I know—collectively consume billions of dollars’ worth of SSRIs each year. Doesn’t this suggest that these drugs are effective?

  Not necessarily. At the very least, these massive rates of SSRI consumption have not caused rates of self-reported anxiety and depression to go down—and in fact all this pill popping seems to correlate with substantially higher rates of anxiety and depression.

  “If you’re born around World War I, in your lifetime the prevalence of depression is about 1 percent,” says Martin Seligman, a psychologist at the University of Pennsylvania. “If you’re born around World War II the lifetime prevalence of depression seemed to be about 5 percent. If you were born starting in the 1960s, the lifetime prevalence seemed to be between 10 percent and 15 percent, and this is with lives incomplete”—meaning that in the end the actual rates will be higher. That’s at least a tenfold increase in the diagnosis of depression across just two generations.

  The same trend is evident in other countries. In Iceland, the incidence of depression nearly doubled between 1976 (before the arrival of SSRIs) and 2000. In 1984, four years before the introduction of Prozac, Britain reported 38 million “days of incapacity” (sick days) resulting from depression and anxiety disorders; in 1999, after a decade of widespread SSRI use, Britain attributed 117 million days of incapacity to the same disorders—roughly a threefold increase. Health surveys in the United States show that the percentage of working-age Americans who reported being disabled by depression tripled in the 1990s. Here’s the most striking statistic I’ve come across: Before antidepressants existed, some fifty to one hundred people per million were thought to suffer from depression; today, between one hundred thousand and two hundred thousand people per million are estimated to have depression. In a time when we have more biochemically sophisticated treatments than ever for combating depression, that’s a thousandfold increase in the incidence of depression.

  In his 2010 book, Anatomy of an Epidemic, the journalist Robert Whitaker marshaled evidence suggesting that SSRIs actually cause depression and anxiety—that SSRI consumption over the last twenty years has created organic changes in the brains of tens of millions of drug takers, making them more likely to feel nervous and unhappy. (Statistics from the World Health Organization showing that the worldwide suicide rate has increased by 60 percent over the last forty-five years would seem to give weight to the idea that the quotient of unhappiness in the world has risen in tandem with SSRI consumption.) Whitaker’s argument about drugs causing mental illness is controversial—most experts would dispute it, and it’s certainly not proven. What’s clear, though, is that the explosion of SSRI prescriptions has caused a drastic expansion in the definitions of depression and anxiety disorder (as well as more widespread acceptance of using depression and anxiety as excuses for skipping work), which has in turn caused the number of people given these diagnoses to increase.

  We may look back 150 years from now and see antidepressants as a dangerous and sinister experiment.

  —JOSEPH GLENMULLEN, Prozac Backlash (2001)

  In America, the question of when and whether to prescribe medications for routine neurotic suffering is bound up with two competing intellectual traditions: our historical roots in the self-denial and asceticism of our Puritan forebears versus the post-baby-boom belief that everyone is entitled to the “pursuit of happiness” enshrined in our founding document. In modern psychiatry, the tension between these two traditions plays out in the battle between Peter Kramer’s cosmetic psychopharmacology and what’s known as pharmacological Calvinism.

  Critics of cosmetic psychopharmacology (including, to some extent, Kramer himself) worry about what happens when millions of mildly neurotic patients seek medication to make themselves “better than well” and when competition to get and stay ahead in the workplace creates a pharmaceutical arms race. The term “pharmacological Calvinism” was coined in 1971 by Gerald Klerman, a self-described “angry psychiatrist” who was out to combat the emerging consensus that if a drug makes you feel good, it must be bad. Life is hard and suffering is real, Klerman and his allies argued, so why should ill-founded Puritanism be allowed to interfere with nervous or unhappy Americans’ quest for peace of mind?

  The pharmacological Calvinists believe that to escape psychic pain without quest or struggle is to diminish the self or the soul; it’s getting something for nothing, a Faustian bargain at odds with the Protestant work ethic. “Psychotherapeutically,” Klerman wrote sardonically, “the world is divided into the first-class citizens, the saints who can achieve their cure or salvation by willpower, insight, psychoanalysis or by behavior modification, and the rest of the people, who are weak in their moral fiber and need a crutch.” Klerman angrily dismissed such concerns, wondering why we would, out of some sense of misguided moral propriety, deny anxious, depressed Americans relief from their suffering and the opportunity to pursue higher, more meaningful goals. Why remain mired in the debilitating self-absorption of your neuroses if a pill can free your mind?

  Americans are ambivalent about all this. We pop tranquilizers and antidepressants by the billions—yet at the same time we have historically judged reliance on psychiatric medication to be a sign of weakness or moral failure.c A study conducted by researchers at the National Institute of Mental Health in the early 1970s concluded that “Americans believe tranquilizers are effective but have serious doubts about the morality of using them.”

  Which sounds like a somewhat illogical and self-contradictory position—but it happens to be the one I hold myself. I reluctantly take both tranquilizers and antidepressants, and I believe that they work—at least a little, at least some of the time. And I acknowledge that, as many psychiatrists and psychopharmacologists have told me, I may have a “medical condition” that causes my symptoms and somehow “justifies” the use of these medications. Yet at the same time, I also believe (and I believe that society believes) that my nervous problems are in some way a character issue or a moral failing. I believe my weak nerves make me a coward and a wimp, with all the negative judgment those words imply, which is why I have tried to hide evidence of them—and which is why I worry that resorting to drugs to mitigate these problems both proves and intensifies my moral weakness.

  “Stop judging yourself!” Dr. W. says. “You’re making your anxiety worse!”

  He’s right. And yet I can’t help concurring with the 40 percent of respondents to that NIMH survey who agreed with the statement “Moral weakness causes mental illness and taking tranquilizers to correct or ameliorate the condition is further evidence of that weakness.”

  Of course, as we learn more about how genes encode certain temperamental traits and dispositions into our personalities, it becomes harder to sustain the moral weakness argument in quite the same way. If my genes have encoded in me an anxious physiology, how responsible can I be held for the way that I quiver in the face of frightening situations or tend to crumble under stress? With the evidence for a strong genetic basis to psychiatric disorders accumulating, more recent surveys about American attitudes toward reliance on psychiatric medication reveal a dramatic shift of opinion. In 1996, only 38 percent of Americans saw depression as a health problem—versus 62 percent who saw depression as evidence of personal weakness. A decade later, those numbers had more than reversed: 72 percent saw depression as a health problem, and only 28 percent saw it as evidence of personal weakness.

  The serotonin theory of depression is comparable to the masturbatory theory of insanity.

  —DAVID HEALY, IN A 2002 SPEECH AT THE INSTITUTE OF PSYCHIATRY IN LONDON

  The deeper one digs into the entwined histories of anxiety and psychopharmacology, the clearer it becomes that anxiety has a direct and relatively straightforward biological basis. Anxiety, like all mental states, lives in the interstices of our neurons, in the soup of neurotransmitters that bathes our synapses. Relief from anxiety comes from resetting our nervous thermostats by adjusting the composition of that soup. Perhaps, as Peter Kramer mused in Listening to Prozac, what ailed Camus’s stranger—his anhedonia, his anomie—was merely a disorder of serotonin.

  And then one digs a little deeper still and none of that is very clear at all.

  Even as advances in neuroscience and molecular genetics have allowed us to get more and more precise in drawing connections between this protein and that brain receptor, or between this neurotransmitter and that emotion, some of the original underpinnings of biological psychiatry have been unraveling.

  The exaltation of Prozac a quarter century ago created a cult of serotonin as the “happiness neurotransmitter.” But from the start, some studies were failing to find a statistically significant difference between the serotonin levels of depressed and nondepressed people. One early study of a group of depressed patients, reported in Science in 1976, found that only half had atypical levels of serotonin—and only half of those had serotonin levels that were lower than average, meaning that only a quarter of the depressed patients could be considered serotonin deficient. In fact, an equally large number had serotonin levels that were higher than average. Many subsequent studies have produced results that complicate the notion of a consistent relationship between serotonin deficiency and mental illness.

  Evidently, the correlation between serotonin and anxiety or depression is less straightforward than once thought. None other than the father of the serotonin hypothesis of depression, Arvid Carlsson, has announced that psychiatry must relinquish it. In 2002, at a conference in Montreal, he declared that we must “abandon the simplistic hypothesis” that a disordered emotion is the result of “either an abnormally high or abnormally low function of a given neurotransmitter.” Not long ago, George Ashcroft, who as a research psychiatrist in Scotland in the 1960s was one of the scientists responsible for promulgating the chemical imbalance theory of mental illness, renounced the theory when further research failed to support it. In 1998, Elliot Valenstein, a neuroscientist at the University of Michigan, devoted a whole book, Blaming the Brain, to arguing that “the evidence does not support any of the biochemical theories of mental illness.”

  “We have hunted for big simple neurochemical explanations for psychological disorders,” Kenneth Kendler, the editor in chief of Psychological Medicine and a professor of psychiatry at Virginia Commonwealth University, conceded in 2005, “and we have not found them.”

  What if the reason we haven’t been able to pinpoint how Prozac and Celexa work is that, in fact, they don’t work? “Psychiatric drugs do more harm than good,” says Peter Breggin, the Harvard-trained psychiatrist who is a frequent witness in lawsuits against the drug companies. He’s backed up by those studies showing that only about a third of patients get better on antidepressants.

 
