The Sober Truth


by Lance Dodes


  It was a provocative result, but hardly definitive. After all, a good scientist could imagine any number of factors that might have confounded the numbers in this study. The nature of the “lay therapy” is never well defined, for instance, nor are any measures described to ensure that this option was provided in a uniform way. The “choice” group is never broken out into subsets that might allow us to see which treatments its members chose, if any. And, like almost all longitudinal studies, this one relied on self-reporting, a notoriously unreliable method.

  The results, however, mirrored the conclusions of later trials involving AA. A review of all such reports published between 1976 and 1989 was performed by C. D. Emrick of the University of Colorado School of Medicine and colleagues. The researchers concluded:

  The effectiveness of AA as compared to other treatments for “alcoholism” has yet to be demonstrated. Reliable guidelines have not been established for predicting who among AA members will be successful. . . . Caution was raised against rigidly referring every alcohol-troubled person to AA.9

  It took until 1991 for another randomized study to be completed. This one found essentially the same results as the Brandsma study. In a paper published in the New England Journal of Medicine, the oldest continuously published medical journal in the world and widely considered the world’s most prestigious, D. C. Walsh and his co-researchers “randomly assigned a series of 227 workers newly identified as abusing alcohol to one of three rehabilitation regimens: compulsory inpatient treatment, compulsory attendance at AA meetings, and a choice of options.” The findings were notable:

  On seven measures of drinking and drug use . . . we found significant differences at several follow-up assessments. The hospital group fared best and that assigned to AA the least well; those allowed to choose a program had intermediate outcomes. Additional inpatient treatment was required significantly more often . . . by the AA group (63 percent) and the choice group (38 percent) than by subjects assigned to initial treatment in the hospital (23 percent).10

  These results led the researchers to issue a warning in their final recommendations: “An initial referral to AA alone or a choice of programs, although less costly than inpatient care, involves more risk than compulsory inpatient treatment and should be accompanied by close monitoring for signs of incipient relapse.”

  THE MOST MEASURED REVIEW

  All scientists are aware of the dangers of non-controlled studies, of course, but often they have no choice. Randomizing individuals and controlling carefully for outside factors is extremely expensive, far more so than running an observational study. Controlled experiments can typically be conducted only with small sample sizes and with the help of deep pockets. As a result, proper clinical data is maddeningly hard to come by on many questions of public health.

  Yet one group exists solely to sort through the glut of studies and help caregivers tune out poorly designed or reported research: the Cochrane Collaboration, which comprises nearly thirty thousand researchers dedicated to pushing back against what medical pioneer David Sackett once called “the disastrous inadequacy of lesser evidence.”11 The Collaboration’s mission is quite simply to focus only on studies with proper protocols and minimal bias and to assemble the strongest data from a rigorously defined set of criteria. No purely observational or uncontrolled studies are permitted in a Cochrane Review; the organization’s goal, simply put, is to vet all the science out there and tell us what can actually be verified.

  In 2006, the Cochrane Collaboration undertook a characteristically careful and detailed look at studies of AA and 12-step recovery. First, the researchers recapped what had been determined to date:

  [A] meta-analysis [historic analysis of previous studies] by Kownacki (1999) identified severe selection bias in the available studies, with the randomised studies yielding worse results [for AA] than non-randomised studies. This meta-analysis is weakened by the heterogeneity of patients and interventions that are pooled together. Emrick 1989 performed a narrative review of studies about characteristics of alcohol-dependent individuals who affiliate with AA and concluded that the effectiveness of AA as compared to other treatments for alcoholism was not clear and therefore needed to be demonstrated.12

  The Collaboration then identified eight high-quality, controlled, randomized studies, with 3,417 subjects in all.13 Their conclusion was unambiguous: “No experimental studies unequivocally demonstrated the effectiveness of AA or TSF [Twelve Step Facilitation] approaches for reducing alcohol dependence or problems.”

  Despite the fact that the best-designed studies have all questioned AA’s effectiveness, there remains a body of academic articles frequently cited by supporters of the 12-step movement. To understand the arguments of 12-step proponents, we must give these studies an open hearing as well.

  In 1999, R. Fiorentine and colleagues ran a twenty-four-month longitudinal after-treatment study that “suggests the effectiveness of 12-step programs.” They concluded:

  the findings suggest that weekly or more frequent 12-step participation is associated with drug and alcohol abstinence. Less-than-weekly participation is not associated with favorable drug and alcohol use outcomes, and participation in 12-step programs seems to be equally useful in maintaining abstinence from both illicit drug and alcohol use. These findings point to the wisdom of a general policy that recommends weekly or more frequent participation in a 12-step program as a useful and inexpensive aftercare resource for many clients.14

  The authors of this paper based their recommendations on a clear correlation that has appeared many times in the literature, namely that the longer people attend AA meetings, the more likely they are to experience better outcomes for sobriety. Here is how they summarized their findings:

  In the 6-month period prior to the 24-month follow-up, approximately 27% of those participating in any 12-step meetings used an illicit drug compared to 44% of those not attending 12-step meetings. The results of the urinalysis support the same conclusion. About 28% of those attending any 12-step meetings tested positive for an illicit drug at the 24-month follow-up compared to 41% of those not attending 12-step meetings. Less than 4% of 12-step participants tested positive for alcohol at the 24-month follow-up compared to about 13% of nonattendees.

  In other words, the incidence of illicit drug use was roughly 50 to 60 percent higher among nonattendees than attendees at the first two measurements, and the gap for alcohol at the final data point was starker still: nonattendees tested positive at more than three times the rate of AA attenders.

  It’s tempting to look at correlations like this and conclude, as many have, that AA must be responsible for this improvement. Yet Fiorentine and his colleagues themselves noted that their results were at odds with other recent studies, such as the Walsh study cited above and another by B. S. McCrady, both of which found that “random assignment to AA or two other treatment condition groups did not reveal more-favorable drinking outcomes for AA participants.”15 The researchers were also mindful of the compliance effect:

  The findings suggest that 12-step programs may be an effective step in maintaining drug and alcohol abstinence. Unfortunately, the limitations of the design do not allow other variables, including the motivational confound, to be ruled out as possible influences on the drug and alcohol use outcomes of 12-step participants. (Emphasis added)

  Hence their highly qualified final recommendation, which is not often cited by 12-step proponents:

  More definitive answers to these questions may come from randomized trials involving 12-step programs and comparison groups of sufficient size that are followed over a relatively long posttreatment duration. . . . Randomized designs are the best method yet to disaggregate the effectiveness of treatment from other influences, including motivational differences. . . . [T]he findings indicate that both weekly and less-than-weekly 12-step participants had very high recovery motivation scores—scores that may be attributable, at least in part, to the sampling bias of the study.

  Caveats such as these are standard practice in peer-reviewed science, so they should be taken only as possibilities, not as an indictment of the research as a whole. Yet the significance of these warnings cannot be overstated: anyone who understands the inherent difficulties of observational science would recognize this list of concerns as grounds to consider the results provisional until a controlled study can be mounted.

  Fiorentine and his colleagues did attempt to minimize the effects of sampling bias by doing what researchers almost always do in epidemiological science: they applied multiple regression analysis (MRA), which involves developing a mathematical model to try to explain the data, and to account for and separate out all the known differences between the groups—disaggregate, in their language. MRA unquestionably has its uses, but it can no more overlay controls retroactively on an uncontrolled study than a camera can turn a single still image into a 360-degree panorama. In elegant understatement, Harvard Medical School professor and epidemiologist Jerry Avorn told the New York Times that MRA “doesn’t always work as well as we’d like it to.”16

  Indeed, what troubles many good scientists about research like the Fiorentine paper is that studying the people who choose to attend AA is an almost perfect recipe for generating the compliance effect error. AA members who frequently attend meetings may be demonstrating the same sort of self-care qualities that the placebo takers do. They may be, in effect, the Boy Scouts, or “eager patients,” of the addict population.17 Nobody who has looked at this data would dispute that people who attend AA most often and stay the longest are more likely to improve than the dropouts. The question is whether AA is driving this outcome or benefiting from a correlation instead.

  Is it possible that the kind of people who stay in 12-step programs are already more likely to improve? Would they be equally likely to do so in any treatment, or even no treatment at all? At heart, the dilemma facing AA research is whether people stay in AA because they’re the type of people who will stick with a program no matter what it is and who would have stuck with it even if it were of no help to them at all.

  THE MOOS DATA

  In 2005, husband-and-wife team Rudolph and Bernice Moos of Stanford University published the first of two papers that would become some of the most-cited data in support of Alcoholics Anonymous.18 Because these articles have become major sources of faith in the effectiveness of AA, they deserve an especially careful review.

  The authors conducted a longitudinal, observational study of 362 previously untreated people who chose to enter AA, professional treatment, or a combination of the two. Notably, the authors never defined what was meant by “professional treatment,” or the level of training or competence of the professionals performing the treatment, a point they conceded in the 2006 paper, “Participation in Treatment and Alcoholics Anonymous”: “[An] issue involves the lack of data on the content of treatment, which might have enabled us to examine whether aspects of psychological and social functioning changed less because they were not addressed adequately in treatment.”

  In truth, the word treatment could mean almost anything in this context, including the very real possibility that it was 12-step-based as well, or was “motivational enhancement therapy,” a brief encouragement-based approach that does not resemble serious psychotherapy. The paper’s definition of “long-term treatment” is also mistaken. The researchers defined it as anything more than six months, a duration that most well-trained and experienced professionals in psychology would consider a short-term treatment.

  Surveys were sent out at various checkpoints: one, three, eight, and sixteen years. In their first paper, the researchers concluded:

  Compared with individuals who participated only in professional treatment in the first year after they initiated help-seeking, individuals who participated in both [professional] treatment and AA were more likely to achieve remission. Individuals who entered treatment but delayed participation in AA did not appear to obtain any additional benefit from AA.19

  It was, in other words, a mixed bag. Visit a therapist and AA together, the data suggests, and you are likely to do better than you would with therapy alone. But visit a therapist for one year and then try AA, and you won’t do any better than if you had just stayed in therapy.

  Notably, the researchers went on to publish a far more strongly worded follow-up in 2006, drawing from the same data. This paper begins by demonstrating some similarities in compliance with treatment between the AA attendees and the “treatment” group:

  In the first year . . . 273 (59.2%) of the 461 individuals entered professional treatment and 269 (58.4%) entered AA. In the second and third years of follow-up, 167 individuals (36.2%) were in treatment and 176 (38.2%) participated in AA. In years 4 to 8, 144 individuals (31.2%) were in treatment and 166 (36.0%) participated in AA.20

  Unsurprisingly, they found that the people who stuck with either treatment—AA or professional treatment—did significantly better than those who did not. These were the compliers. The authors continue:

  Compared with individuals who remained untreated, individuals who obtained 27 weeks or more of treatment in the first year after seeking help had better 16-year alcohol-related outcomes. Similarly, individuals who participated in AA for 27 weeks or more had better 16-year outcomes. Subsequent AA involvement was also associated with better 16-year outcomes, but this was not true of subsequent treatment.21

  In other words, again unsurprisingly, the people who stuck with either approach did significantly better than those who did not. Yet the last sentence suggests that people continued to improve over time with AA, whereas they failed to continue improving with treatment. (The authors measured improvement via self-reports in answer to questions such as “Have you been sad the past month?” and “Have you participated in social activities?”) What their conclusion doesn’t address, however, is the possibility that the people in treatment were already doing better than the AA group, and that they therefore had less room to improve over those last eight years. We do not know, nor do the researchers say, which interpretation is right.

  More problematic is that the study elided some potentially telling fluctuations in the data. People who stayed in AA for fewer than six months had worse outcomes than people who never entered AA at all. This finding seems to mirror the Brandsma data: AA attendees seem to get worse before they get better. One theory is that the finding is nothing but noise—the standard statistical turbulence that can foul any short-term study. But if the data are real and repeatable, then they suggest something the Moos researchers perhaps did not consider: that AA might do more harm than good for the people who choose to attend but do not buy into the program.

  The Moos study also employs some objectionable statistical methods. In one critical omission, its conclusions ignore all the people who died and the large number of people who dropped out of the study altogether, despite conceding that these were the people who statistically consumed the most alcohol. As early as year eight, the number of subjects who were left in AA had already shrunk by nearly 40 percent (from 269 to 166), yet these people are erased from the conclusions as if they had never existed at all. Add up all the people who died and the dropouts, and the results for AA become far grimmer than the authors suggest.

  The stated size of this survey is also misleading. Although the researchers began with 628 people, the total number of people who remained through the sixteen-year follow-up and also stayed in AA for longer than six months—that is, the group on which the authors’ major findings are based—was just 107, or 17 percent of the original sample. And of the remaining 107, the researchers never revealed the actual number of people who improved, or even stayed sober. They told us only which group “had better outcomes.”

  Next, there is the question of the validity of the results. As I have mentioned, self-reporting is a tricky methodology, prone to the illusions of self-deception and imperfect memory. In most observational research, surveys are the standard currency—without surveys, there can be no data. Yet there are ways to corroborate the information people report about themselves, notably independent testing. The Moos study did not attempt to independently verify any of the surveys on which it was based. (The Fiorentine group, by contrast, supplemented their surveys with urine tests.)

  Finally, the punctuated nature of the study addressed only the six-month windows prior to each of the four check-ins. This meant that of the sixteen years covered by the study, the researchers’ surveys gathered information on only two of them. No questions were ever asked about the stretches of time in between follow-ups; 88 percent of the time was never studied. As the authors acknowledged in the 2006 paper, “Another limitation is that we obtained information only on 6-month windows of alcohol-related outcomes at each follow-up, and thus cannot trace the complete drinking status of respondents over the 16-year interval.”

  Ultimately none of these issues should be great enough to disqualify the Moos study on its own. But together they should give us pause. The study had no controls, so subjects were free to join and leave treatment as they wished. And for every slice of subjects that got better, the study omitted many about whom we are never told a thing. Possibly as a consequence of these limitations, the authors of the study readily acknowledged that they, too, struggled with the question of cause and effect:

  [I]ndividuals self-selected into treatment and AA and, based on their experiences, decided on the duration of participation. Thus, in part, the benefits we identified are due to the influence of self-selection and motivation to obtain help as well as that of longer participation per se. Although our findings probably reflect the real-world effectiveness of participation in treatment and AA for alcohol use disorders, the naturalistic design precludes firm inferences about the causal role of treatment or AA.

 
