
Death Grip


by Matt Samet


  But what I do tell my climber friends is this: Don’t rely solely on the meds, but also (or only!) engage with a good therapist. Educate yourself about the complications and side effects and withdrawal syndromes of every last pill, because they all have their cons and they all have washout periods that can persist much longer than the official literature suggests. They all, precisely because they act on neurotransmitter systems, exact changes in the brain that are slow to reverse. And I say, Keep your eye on the dosage and number of pills your child is taking, because it is not something to let snowball. And finally, a piece of advice I would give anyone, adult or child alike: If you’re considering a psychiatric medicine, make sure you and your prescribing physician develop an exit plan that you both sanction, because someday you might want to go off the medication, at which point you may find your doctor is your worst enemy. You might get pregnant and not want to court the possible birth defects that come with taking SSRIs. Or you might realize that any pain you feel as a course of nature is preferable to that caused by psychiatry—you might realize, like me, that the depression and anxiety you feel on the drugs are orders of magnitude worse than they would be au naturel. You might realize, as the journalist Robert Whitaker postulates in his book Anatomy of an Epidemic, that the drugs are creating “chronicity”: relapsing, downward-spiraling mental illness caused in fact by drug-induced changes in neural receptors, along with a rebound in your original symptoms every time you try to quit, a rebound that has your doctor—and hence you—convinced that you must never, ever stop. Or you might develop philosophical objections, concluding that, as with so many pillars of our quick-fix McCountry, psychiatry has metastasized into an untrustworthy, all-devouring Leviathan and you don’t want its tentacles wrapped around your skull another millisecond.

  One thing I don’t tell my friends, not wanting to sound like a crank, is that based on my interactions, I’ve found psychiatrists on the whole to be arrogant, power-mad bloviators who will dig in like a muddy dog being dragged to the bathtub if you so much as question the efficacy of their tools. I don’t tell them that the profession, in staking chemical claim to an organ as infinitely complex as the human brain, seems to attract mountebanks, charlatans, and voodoo witch doctors more interested in pharmaceutical tinkering than people. I don’t repeat the classic quote of a fellow ward of Hopkins as we watched a gaggle of doctors breeze past officiously one evening, white coats flapping in their wake: “I’ve come to realize they aren’t gods.” And I don’t bother mucking about with a discussion of “chemical imbalances,” which I must assume my friends have been told is the cause of their children’s woes. I heard this hypothesis trotted out again and again at the hospitals, an assertion as misleading, simpleminded, and jingoistic as a first-grade teacher drilling into her students’ heads that it was Columbus, and only Columbus, who discovered the Americas. “Take your meds,” the doctors and nurses would urge us. “It’s just like a diabetic needing his insulin.” I even have a worksheet, “Brain Chemistry and Mental Illness,” from Hopkins that frames the idea in grasping, idiot-manchild verbiage: “In mental illness, there is a chemical imbalance similar to diabetes. If you don’t have certain chemicals in your brain, it doesn’t work right and you have mental illness. In diabetes, a person needs insulin to live for the rest of his/her life. In mental illness, you need certain medicines to replace the chemicals in your brain for the rest of your life.”

  The rest of your life. As if from cradle to grave a person is no more than the sum of his neurochemical activity. As if, as per Hopkins’ incomplete metaphor, your brain, like a malfunctioning pancreas, isn’t producing enough Paxil.

  So much of modern psychiatry’s—or more aptly, psychopharmacology’s—drug-mongering can be traced back to this theory of a chemical imbalance, which arose in 1965 when Dr. Joseph Schildkraut published “The Catecholamine Hypothesis of Affective Disorders” in the American Journal of Psychiatry. To sum up what has become known as the “monoamine hypothesis,” depression is supposedly caused by a deficiency of certain neurotransmitters—in particular the monoamines serotonin and norepinephrine, as well as dopamine. This was deduced from the fact that antidepressants, proven effective in clinical and drug-trial settings, are known to chemically increase the amount of neurotransmitters available in the synapses. Thus, it was surmised, the drugs work by addressing a preexisting deficiency of those neurotransmitters. There are two main problems with this hypothesis, however. First, no direct diagnostic test has ever been devised that measures brain levels of neurotransmitters or shows any chemical link between a specific mental illness and a specific neurotransmitter system. The best psychiatrists have mustered, writes Daniel Carlat in Unhinged, are such indirect observations as measurements of neurotransmitter “breakdown products in the blood, urine, or cerebrospinal fluid (CSF).”1 A half-century after Thorazine, despite any perception of psychopharmacologists as supreme neural alchemists, you still cannot walk into a shrink’s office, have your neurotransmitter levels measured like car fluids at Jiffy Lube, and then walk out with the perfect, chemically tailored prescription.

  The second problem with the monoamine hypothesis is that antidepressants might, as demonstrated convincingly in Irving Kirsch’s The Emperor’s New Drugs, work mostly through the placebo effect anyway: Kirsch’s meta-analysis of forty-two clinical trials of the six antidepressants most prescribed between 1987 and 1999 (including negative trials proprietary to the drug companies that were buried in the FDA archives and which Kirsch obtained through the Freedom of Information Act) showed that placebos were 82 percent as efficacious as the drugs.2 Kirsch had reached similar findings in an earlier meta-analysis of thirty-eight studies that he and Guy Sapirstein undertook in 1998:3 that “improvement in patients who had been given a placebo was about 75 percent of the response to the real medication,” meaning that “only 25 percent of the benefit of antidepressant treatment was really due to the chemical effect of the drug”—a clinically meaningless difference and one that pointed to antidepressants performing little better than sugar pills.4 (As Kirsch frames it, the placebo effect, or the difference between a placebo and no treatment, was double that of the drug effect, or the difference between response to the placebo and response to the drug.5) Given all the troubling side effects of antidepressants, then, it would seem to make little sense to take them. Moreover, Kirsch has postulated an “enhanced placebo effect” behind the perceived effectiveness of antidepressants—namely, that by producing noticeable side effects that lead patients to “break blind” in the classic double-blind, placebo-controlled studies used to bring the drugs to market, antidepressants convince a patient that he’s receiving an effective treatment, and hence not the placebo. The depressed subject, being in psycho-spiritual hell, of course wants to feel better and will, thus cued, go on to “get well.” Kirsch supports his case by noting that active placebos—pills that aren’t antidepressants but that have noticeable side effects, such as barbiturates, benzos, and synthetic thyroid hormone—have produced similar findings.6

  With the monoamine hypothesis firmly in place from the mid-1960s on, psychiatrists quickly began to extrapolate the notion that all psychiatric illnesses were caused by similar neurochemical imbalances, using the known chemical action of psychoactive drugs as their proof. Therefore, if an antipsychotic like Thorazine or Haldol was known to suppress dopamine activity, then schizophrenic psychosis surely was caused by an excess of dopamine. And if an SSRI antidepressant like Prozac increased serotonin activity, then depression surely was caused by a deficit of serotonin. And if an anxiolytic like Valium augmented GABA activity, then anxiety surely was caused by a deficiency of GABA. And later, as happened to me, if you have a bad reaction to an SSRI (a class of drugs known to induce mania in those not formerly manic!), then surely you must be bipolar. This pseudoscientific approach attempts to reverse-engineer the root causes of mental illness using man-made molecules as diagnostic tools; it has also, conveniently for Big Pharma, evolved in lockstep with an explosion in new, “ever-better” psychiatric chemicals over the past half century. However, the flawed, reductive lens that is the chemical-imbalance theory needs to be tossed on the scrap heap. Again, even though decades of research have not come up with a single direct diagnostic test to provide any evidence, the hypothesis has been continually touted by psychiatry and accepted at face value by the mainstream. Yet as Dr. Ashton once said of the initial enthusiasm for the theory, “Of course these naïve and simple hopes turned out to be in vain. Fifty years later we still do not know the cause of schizophrenia or depression or even how the drugs work.”7 And as Dr. Peter Breggin has famously written, “… the only biochemical imbalances that we can identify with certainty in the brains of psychiatric patients are the ones produced by psychiatric treatment itself.”8

  Another contributing factor to psychiatry’s current overreach is the Diagnostic and Statistical Manual of Mental Disorders, a tome with sixteen different groups of disorders that psychiatrists use to diagnose and hence prescribe for various conditions.9 Compiling all official mental illnesses and their symptoms, the DSM has metastasized to the point of pathologizing normal variances in personality, if not personality itself. The DSM was once barely more than a pamphlet: The first edition, from 1952, was a small notebook, while its second iteration, in 1968, contained 134 spiral-bound pages and 182 diagnoses. Both had a quaint, archaic flavor, with an emphasis on Freudian notions like neurosis. Then came DSM-III, published in 1980, the field’s attempt to apply scientific rigor to the diagnosis of mental illness. DSM-III reached 494 pages and offered a vast menu of 265 different diagnoses, thanks largely to its then-editor, the psychiatrist Robert Spitzer, who held raucous editorial gatherings at Columbia University during which fifteen psychiatrists, handpicked by Spitzer, hollered out symptom checklists and pet names for new disorders.10 To qualify for a specific disorder, a patient now had to evidence a certain number of symptoms—for example, five of the nine symptoms for major depression, a number picked more or less at random. (As Spitzer told Daniel J. Carlat, the author of Unhinged, “… four just seemed like not enough. And six seemed like too much.”11) The goal with these checklists was to introduce reliability into the profession—to increase the odds that a patient presenting with a certain set of symptoms to one psychiatrist would receive the same diagnosis from another. Because psychiatry has long been the redheaded stepchild of the medical world, its treatments and biological underpinnings shaky at best, this was the field’s big chance for self-legitimization. Writes Robert Whitaker, “With the publication of DSM-III, psychiatry had publicly donned a white coat.”12

  When the DSM-IV came out, in 1994, it had grown to 886 pages and boasted thirty-two tasty new entrées. The latest iteration, the DSM-IV-TR, which came out in 2000, contains 365 different diagnoses (and there is a DSM-V under way, due in 2013). But the book is not some pure, unsullied holy text, free of pharmaceutical-industry complications: Ninety-five of the DSM-IV-TR’s 170 contributors had financial ties to drug companies, including, wrote Marcia Angell in The New York Review of Books, “all of the contributors to the sections on mood disorders and schizophrenia,”13 and a recent study of 141 members of the work groups drafting the DSM-V found similar numbers, with 57 percent having such linkages.14 The book is now so heavy you could lobotomize someone with one whack to the head. The result of this disorder multiplication has been that the boundaries of mental illness are pushed farther into the realm of normalcy. It’s prime new real estate the drug companies have been happy to claim: GlaxoSmithKline, for example, marketing Paxil for the nebulous “social anxiety disorder,” a pathologization of what we once called “shyness.” Or, most nefariously, “childhood bipolar disorder,” a murky catchall that lets parents and schools medicate the unholy hell out of “irritable” children, sometimes to death, as happened in 2006 to four-year-old Rebecca Riley of Boston, who had been prescribed Depakote, Seroquel, and Clonidine.

  We must imagine that psychiatrists keep a brain with perfect chemical equilibrium in a vault somewhere in the Psychiatry Super Friends Fortress of Power, probably next to the cryogenically frozen “Ideal Human Being”—one with no quirks of character, mood fluctuations, or anomalous personality traits. Otherwise, none of this polydrugging, electroshocking, overdiagnosing, off-label prescribing, overprescribing shitshow would make sense. Otherwise, this taking of marginally anxious, depressed people and this placing of them on four, five, six, or seven drugs designed for hardcore mental disorders like psychosis and schizophrenia would be morally reprehensible. Otherwise, this instatement and removal of powerful psychoactive drugs with careless haste, this abandonment of patients to find their way alone through the desert back to chemical-free living would be abominable. Otherwise, promiscuous psychiatry would be one of the most frightening and objectionable trends in the world, up there with corporations filing patents for the human genome. If we keep heading in this direction, you might someday be able to open your thirty-thousand-page DSM-XI and see a yearbook photo of yourself, right next to the label “Your Name Here Disorder,” perhaps even with a logo of the pharmaceutical company sponsoring your personalized genetic therapy.

  However, the good news at least for now is that you can still walk away. No one is forcing you to take drugs. They’re out there, they’re for sale, and they have their dangers, but unless the courts, jails, hospitals, your family, or you yourself have mandated it, no one can stand there and make you swallow pills. You do have some minor say in the matter. For this, I am grateful.

  I’m not oblivious to the fact that I put all those pills in my mouth and swallowed them thousands of times: for use, for abuse, for whatever you might call it. I do share some responsibility. And from a psychiatrist’s point of view, I’m sure my history looks somewhat inevitable: a continuum of anxiety and depression reaching back to my parents’ divorce when I was ten, reappearing in high school as agoraphobia, and then, thanks to poor self-care, finally becoming chronic depression and a panic disorder in my early twenties, all with intermittent bouts of substance abuse that eventually fostered a mood-cycling disorder. To them, I am a classic addict-neurotic in need of ongoing pharmaceutical intervention. But again, from where I sit, this paradigm only served to make me sicker, crazed and fragile in a way that I have not felt since shedding the drugs. Bad things happened in my life that might or might not have sensitized me to anxiety, but that doesn’t mean they permanently define my identity. All such thinking—and a search for an external, chemical fix—ever did was strip away my personality and extinguish every last vestige of hope.

  In 2011, Dr. Ashton released an update to her original manual, called “The Ashton Manual Supplement,” an important document I’d urge anyone interested in the subject to read.15 In it, she laments the fact that little clinical progress has been made since 2002, when the manual first came out; that the pills are still overprescribed globally, “often in excessive doses, and frequently for too long,” with prescriptions on the rise in some countries; and that doctors on the whole still lack the expertise to help long-term users taper. However, Ashton also extends a sprig of hope: A CAT-scan study of a small sample of longtime users suggests that benzos likely don’t cause lasting structural damage to the brain—no “death of neurons, brain shrinkage or atrophy, etc.”—but instead might only cause functional changes. That is, even if you take benzos for years, you haven’t permanently fried your hardware; it’s just that the software runs a little differently. This is incredibly hopeful, especially given what I’ve been through. It affirms the fact that I can remain the primary architect of my reality now and forever. To me, despite any lingering symptoms, this more than anything equals a full recovery.

  “It is important to remember that by far the greatest majority of long-term benzodiazepine users do recover from withdrawal—given time,” writes Ashton in the supplement. “Even protracted symptoms tend to decrease gradually, sometimes over years. The brain, like the rest of the body, has an enormous capacity for adapting and self-healing. That is how life survives and how ex-benzodiazepine ‘addicts’ can be optimistic about their future.”

  Optimism about one’s future: It’s all I ever asked for during the worst years, and it’s the one kindness that the doctors never extended. Hope: It used to be a four-letter word to me in my cynical, druggie years, but now it’s everything. If my friend Andrew hadn’t given me hope that day in September 2006 after my blowout at Rifle—if he’d yanked the wheel left to take us to the hospital instead of continuing straight up the highway to Carbondale—hope would have vanished for good. The doctors would have put me back on psychotropic medicines, maybe even benzos, and I wouldn’t have had the strength to taper again. I would not exist, this book would not exist, and you’d not be reading this last word here.

  APPENDICES

  Web Resources

 
