
Manufacturing Depression


by Gary Greenberg


  Still, psychotherapists, and particularly cognitive therapists, have not been content to take their prizes and go home. To the contrary, when psychopharmacologists like Klerman sounded off about the lack of evidence for therapy’s efficacy, or when Donald Klein, another leading antidepressant researcher, complained that “psychotherapies are not doing anything specific,” the professional guilds didn’t make the obvious point about the pot and the kettle. Nor did they take Klein’s comment that therapies are “nonspecifically beneficial to the final common pathway of demoralization” as an unintended compliment and trumpet the value of remoralization and their unique ability to bring it about.

  Instead, they panicked. “If clinical psychology is to survive in this heyday of biological psychiatry,” a task force of the American Psychological Association warned in 1993, “APA must act to emphasize the strength of what we have to offer—a variety of psychotherapies of proven efficacy.” The gauntlet had been thrown down, said the task force, and therapists had to pick it up by meeting drugs on their own ground—in controlled clinical trials that would identify empirically supported therapies (ESTs). It turns out that you can make at least one kind of therapy into something like a drug—a specific treatment that can be given in known doses, whose active ingredient attacks a specific disease, and whose effects can be measured. And the DSM-III provided empirically supported therapies with a perfect target: depression.

  It’s not an accident that more than 90 percent of EST trials focus on cognitive therapy. From the beginning, even before the DSM-III’s clinical-trial-friendly symptom lists, Aaron Beck had set out to create a therapy whose effects on depression could be validated scientifically. He did this by developing his theory that depression is caused by dysfunctional thoughts and core beliefs—and a treatment targeted directly at those causes, one that could be broken down into specific modules, standardized in a treatment manual, and taught to therapists, whose performance could in turn be evaluated by reviewing tapes of sessions and scoring them on the Cognitive Therapist Rating Scale. Beck also developed a test—the Beck Depression Inventory (BDI)—to measure the outcome. If you think there’s a circular logic at work here, not to mention a conflict of interest, you’re probably right. But it’s no worse than what Max Hamilton did when he fashioned his test to meet the needs of his drug company patrons. Besides, it’s easy to overlook such matters when the theory allows cognitive therapists to claim that they are attacking the psychological mechanisms of depression in the same precise way that antidepressants attack neurotransmitter imbalances.

  In the mid-1970s, Beck got a chance to put his theory to the gold-standard test—a clinical trial. His team got a government grant to compare cognitive therapy to antidepressant drugs as a treatment for neurotic depression (as defined in DSM-II). The study had a simple design. All forty-one subjects were given tests, including the BDI and the HAM-D, at the beginning of the trial. Half were then given tricyclic antidepressants, the other half cognitive therapy, and at the end of the twelve-week trial they were retested. Cognitive therapy won hands down. Therapy patients’ scores on the tests dropped significantly more than those of the subjects on drugs. And, presumably because of the unpleasant side effects of the drugs, far fewer people dropped out of the therapy cohort than out of the antidepressant cohort.

  The trial went on to have “a profound effect on the course of depression outcome research”—not only because of its results, but also because of how they were obtained. Beck and his team had done as much as possible to control for nonspecific factors. They had not only carefully measured the dose of therapy and continuously monitored therapists’ adherence to the treatment manual; they had also chosen inexperienced therapists, medical residents and psychology interns who presumably hadn’t yet picked up the tricks of the trade, who couldn’t command confidence or deploy empathy like thirty-year veterans do, and whose successes could thus be attributed more to what was in the treatment manual than what was in their personality or technique. Beck could then plausibly claim that he had obtained his results with a minimum of placebo effects and a maximum of “active ingredient,” that the reason CT outdistanced drugs was that there was something in the manual that was specifically therapeutic.

  This impression was only strengthened over the next fifteen years as researchers replicated the finding that CT was as good as or better than drug treatment and added studies testing it against no therapy at all (other than an intake interview and placing the subject on a waiting list), and even against other therapies. As the findings mounted, professional and public opinion followed. In 1996, the New York Times reported that cognitive therapy was “the most scientifically tested form of psychotherapy…as effective as medication and traditional psychotherapy in helping patients with depression.” In 2000, the American Psychiatric Association issued practice guidelines asserting that cognitive therapy was among the therapies with “the best-documented effectiveness in the literature for the specific treatment of major depressive disorder.” Gerald Klerman’s dream of government regulation of therapy hasn’t yet come true, but a therapist not using cognitive therapy for depression would find himself on the margins of his profession. At least according to the Times, by 2006, cognitive therapy had become “the most widely practiced approach in America.”

  Dig into the clinical trials that give cognitive therapy its stranglehold on depression treatment, however, and its claim to the status of most effective therapy begins to seem less than scientific. It turns out that cognitive therapy resembles antidepressant treatment in a way that Aaron Beck couldn’t have intended: like the drugs, it owes its marketplace dominance less to science than to its unique suitability to the particulars of the scientific game, and much more to the placebo effect than anyone wants to admit.

  Some of the trouble is built into the idea of validating therapy. It’s hard to think of an enterprise less suited to lab testing than psychotherapy. What are the criteria of success and how do you measure them? How do you take all the thousands of words that are exchanged between therapist and patient—and for that matter all the nonverbal exchanges, the averted eyes and the fidgeting, the fleeting smile and brimming tears—and render them into data bits? The solution that researchers have hit upon is to ignore as much of that fuzzy stuff as possible and focus instead on what they can measure. This generally means doing exactly what Beck did: standardizing the treatment in a manual, aiming it at specific targets, such as the symptoms of depression found in the DSM, and then measuring the change in those symptoms after the therapy is implemented.

  Critics complain that while this approach may work well in the laboratory, it has precious little relationship to what goes on in the real world. The lab therapist, indeed, does exactly the opposite of what most real-life therapists do: refrains from clinical judgment in favor of the manual and limits his focus to a set of symptoms rather than to the patient as a whole. “Psychotherapy is essentially concerned with people, not conditions or disorders,” wrote one dissenting psychiatrist, “and its methods arise out of an intimate relationship…that cannot easily be reduced to a set of prescribed techniques.” Add to this objection the fact that for both subject and therapist the proceedings are framed as a research project rather than as an encounter whose intention is to ease psychic suffering, and you have to wonder if the therapy studied in clinical trials is merely an artifact, a bell jar version of the real thing.

  Cognitive therapists are aware of this disconnect, or at least Leslie Sokol is. On the first day of our workshop, she told us not to sweat the data too hard, at least not the part about people getting better after a prescribed dose of sessions. “Cognitive therapy is thought of as time limited because research demanded it,” she said. “We delivered this amount of sessions not because there was a magic number but because we were running trials and we can’t run them indefinitely. Time limited,” she added, “really means goal limited.” (It also evidently means “having it both ways,” as in claiming to have a lab-validated treatment model that specifies a certain dose of therapy, but then, when out of the glare of the lab lights, not sticking to it.)

  Cognitive therapists don’t only claim that their treatment works; they also assert that it is superior to therapies that haven’t been tested. This is another advantage of adopting the drug model; according to the logic of clinical trials, absence of evidence is evidence of absence. That’s why Steven Hollon, an early collaborator with Aaron Beck and a leader in the field, can get away with writing that the fact that “empirically supported therapies are still not widely practiced…[means] that many patients do not have access to adequate treatments”—as if it had already been proved that the only adequate treatments are empirically supported therapies.

  That’s not the only way that the fix is in. Consider what happens when researchers try to institute placebo controls. In a drug trial, the placebo is a pill, and it is at least arguable that the only difference between the placebo and the drug is whatever is inside the two pills, so long as the patient is otherwise treated the same. Early EST trials used waiting lists as the placebo treatment; people do indeed get better merely by being told that help is on the way. But that procedure does not allow researchers to zero in on the active ingredient—assuming such a thing exists—of a given therapy.

  The remedy is to compare two kinds of therapy that differ only in their specific interventions. But most forms of psychotherapy weren’t designed to be manualized—not to mention that the people who practice them aren’t leading the charge to measure therapy outcomes. It has been left to cognitive therapists to invent their competition, with the predictable results. One study, for instance, pitted cognitive therapy against “supportive counseling”—a therapy made up by the researchers for their trial—as a treatment for rape victims. The subjects in the supportive counseling group were given “unconditional support,” taught “a general problem solving technique,” but “immediately redirected to focus on current daily problems if discussions of the assault occurred.” It’s not surprising that the patients who couldn’t talk about their assault didn’t fare as well as the patients who could (and who were getting cognitive therapy), but that does cast doubt on the conclusion that cognitive therapy should take home the prizes. Proving that a bona fide therapy provided by someone who believes in it, who is inculcated with its values and traditions, works better than an ersatz therapy, implemented by someone who doesn’t think it is going to work, may only show, as one critic put it, “that something intended to be effective works better than something intended to be ineffective.”

  Allegiances do matter, both the therapist’s and the client’s. Even in the lab, outcomes are consistently better when clinicians believe in what they are doing. I may not be entirely certain of why I want to talk with Eliza about her strawberries, I may indeed be flying by the seat of my pants, but I do believe that we’re going to land somewhere better than where we were in the first place, and I’m sure I convey that confidence to Eliza. This kind of confidence shows up in the numbers as clearly as Judy Beck’s belief that substituting Positive Triangles for Negative Rectangles will help cure depression. Furthermore, clients who don’t have some loyalty to their therapists, or who don’t believe that whatever is happening between them is going to help them, don’t stick around.

  This is why critics object to another statistical procedure common to clinical trials: excluding from the bottom line the subjects who don’t complete the study, people who presumably didn’t feel that confidence or loyalty. Rather than counting them as failures, most studies simply treat dropouts as if they never enrolled in the first place, which, mathematically speaking, makes the treatment look stronger than it would otherwise. And the numbers also exclude those people who were not allowed into the study because their case wasn’t diagnostically pure enough—a move that allows researchers to improve their numbers by cherry-picking the patients most likely to benefit from their treatment.
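  A toy calculation makes the effect of that bookkeeping concrete. The numbers below are invented purely for illustration; no actual trial is being described:

```python
# Hypothetical numbers, invented for illustration: how excluding dropouts
# ("completer analysis") flatters a response rate compared with counting
# everyone who enrolled ("intent-to-treat").

enrolled = 100   # subjects who started the trial (assumed)
dropouts = 40    # subjects who left before the end (assumed)
improved = 36    # completers whose scores improved (assumed)

completers = enrolled - dropouts

completer_rate = improved / completers   # dropouts treated as never enrolled
itt_rate = improved / enrolled           # dropouts counted as failures

print(f"Completer analysis: {completer_rate:.0%} improved")  # 60%
print(f"Intent-to-treat:    {itt_rate:.0%} improved")        # 36%
```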

  Researchers can study the effect of these and other methodological problems by using meta-analysis, a statistical technique that allows them to determine the mean of means, or, in layman’s language, what all the studies lumped together say about a particular factor—even one that the original scientists didn’t necessarily intend to examine. So, for instance, two independent groups of researchers have used meta-analysis to factor out the advantages that cognitive therapy has when it goes up against treatments intended to fail. They scoured the literature for studies in which all treatment groups were given bona fide therapies. After crunching the numbers, they came to the conclusion that when the competition was fair, there was no difference in the effectiveness of the treatments.
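  For the curious, here is a minimal sketch of that “mean of means.” The effect sizes and sample sizes are invented, and real meta-analyses typically weight each study by the inverse of its variance rather than by raw sample size:

```python
# A minimal sketch of meta-analysis as a "mean of means." Each study
# contributes an effect size (e.g., a standardized difference between
# treatment groups); the pooled estimate is a weighted average across
# studies. All numbers here are invented for illustration.

studies = [
    # (effect_size, sample_size)
    (0.60, 40),
    (0.20, 120),
    (0.45, 60),
]

pooled = sum(d * n for d, n in studies) / sum(n for _, n in studies)
print(f"Pooled effect size: {pooled:.2f}")  # ~0.34
```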

  Two other psychologists—Drew Westen and Kate Morrison—meta-analyzed thirteen leading studies of psychotherapies for depression, eleven of which used some form of cognitive therapy. Overall, about half the subjects improved—results that put the treatment in the same ballpark as antidepressant drugs. But Westen and Morrison discovered that only one-third of the patients who tried to get into the studies were accepted—presumably because they didn’t pass diagnostic muster—which limits the generalizability of the findings. And of that select few, so many dropped out before the trials ended that the overall proportion of subjects who improved was only 36.8 percent. And when they looked at the handful of studies that followed their subjects over the long haul (and this is another way that therapy trials mirror drug trials; the book is closed after eight or ten or twelve weeks, and only rarely does anyone ask if the treatment remained effective), of the 68 percent of completers originally reporting improvement in those trials, only half remained improved after two years.
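  Chaining those reported percentages together gives a rough sense of the shrinkage. The figures come from overlapping sets of studies rather than a single pipeline of patients, so this is strictly a back-of-the-envelope sketch:

```python
# Back-of-the-envelope arithmetic chaining the percentages Westen and
# Morrison report, as if they described one pipeline of patients.
# (They don't, strictly; this is illustrative only.)

accepted = 1 / 3    # roughly one-third of applicants were admitted
improved = 0.368    # improved, with dropouts counted against the total
durable = 0.5       # about half of improvers stayed improved at two years

print(f"{accepted * improved * durable:.1%} of applicants durably helped")
# -> about 6.1%
```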

  Westen and Morrison are quick to point out that they aren’t saying that the therapies don’t work. They help a carefully chosen portion of patients for a short time. That’s not trivial, but it is less than the claim that cognitive therapy is the scientifically proven treatment for the disease of depression, and far less than what you would expect of a therapy that has become the APA’s standard of care or “the most widely practiced approach in America.”

  Westen and Morrison acknowledge that their work is not exempt from allegiance effects. They think that cognitive therapy’s success depends on a redefinition of psychotherapy with which they disagree. Their objection, they warn, may have unconsciously influenced their choice of studies to include or the hypotheses that guided their analysis. Indeed, all the critics of ESTs seem to be similarly inspired by a disagreement about how therapy ought to be practiced and evaluated and a distaste for cognitive therapy’s answer to that question, for the way that therapy, like Heisenberg’s subatomic particles, is changed by the very act of measuring it. The dispute, in other words, is not about the effects of therapy but about the nature of therapy—and, by extension, the nature of human suffering and its relief. And like all ideological arguments, this one cannot be settled by numbers alone.

  But there is one set of numbers that bears particular weight: findings generated by a group of loyal cognitive therapists. The team, led by prominent cognitivists Neil Jacobson and Keith Dobson, set out to investigate Beck’s pivotal claim that his therapy has active ingredients that target the psychological cause of depression. Jacobson and Dobson wanted to determine whether some of those ingredients could be effective in isolation from the others—presumably because this might make an even more efficient therapy. They separated patients into three groups—one that received cognitive therapy according to Beck’s manual, one that was given only the component in the manual directed toward behavioral activation (using activity schedules and other interventions to get patients into contact with sources of positive reinforcement), and one that got the modules that focused on coping skills, and in particular on assessing and restructuring automatic negative thoughts. The experimenters, all of them seasoned cognitive therapists, had an average of fifteen years’ clinical experience, had spent a year training for this study, and were closely supervised by Dobson. And at the end of the twenty-week study, to everyone’s surprise, there was no difference between the groups. Everyone benefited equally, just as the dodo bird hypothesis would predict.

  Other studies, like one in which two cognitive therapists discovered that most improvement in cognitive therapy occurs in the first few sessions and before the introduction of cognitive restructuring techniques, strengthen the finding that to the extent that cognitive therapy works for depression, it is not because its specific ingredients act on specific pathologies. Instead, according to the meta-analysts, cognitive therapy’s success depends largely on the therapeutic alliance, therapist empathy, the allegiance of the therapist to his technique, and the expectations of the patient—the same nonspecific factors that Aaron Beck intended to eliminate in the first place. “How therapy is conducted is more important,” as one researcher put it, “than what therapy is conducted.” As it does in drug therapies for depression, the placebo effect deserves most of the prizes.

  But in real life, the prizes go to cognitive therapy, especially the prizes doled out by insurance companies. A therapist can’t get sued for not practicing cognitive therapy, at least not yet, but there are other, more direct ways to persuade us. According to the New York Times, insurers “often prefer their consumers” to go to cognitive therapists. Only a few health plans—the ones that employ their own counselors—can directly enforce this preference. But they can all require, as most companies do on the treatment reports they make me fill out as a condition of reimbursement, that therapists specify a “definition of successful treatment,” with “desired observable outcomes,” and deny coverage if those goals—themselves lifted from cognitive therapy manuals—don’t address dysfunctional thoughts and core beliefs. They can also limit therapy sessions on the grounds that it has been scientifically proven that depression can clear up in fifteen or twenty visits, and that if it doesn’t, the therapist must not be providing adequate treatment.

 
