The Sober Truth

by Lance Dodes


  And there is an even greater problem with the worship of evidence, regardless of its validity: it is very easy to find meaningless evidence. Setting up an experiment to study an irrelevant question is a bit like pointing a telescope at the wrong place—you may confirm that the sun is indeed hot, but if you’re looking for life on Mars, then you haven’t exactly advanced the dialogue. Experiments that are designed to answer facile or specious questions are doomed to irrelevance before they begin. Thus we have a parade of statisticians determined to figure out how many heroin addicts are likely to use cocaine, without bothering to ask whether this data is actionable or illuminating. It can easily be “proven” that environmental cues remind us to drink or that compulsive gamblers tend to do poorly in school. You could send out a survey tomorrow and collect solid evidence that drinkers like to smoke or that there is more alcohol consumption when people are “stressed.” You might even publish and advance in academia for having done so, while just out of sight, the state of addiction research remains in stasis.

  Nearly every addiction study is guilty of looking at the wrong things, and the reason is that most of these researchers have no training or interest in psychology. The false dogma that addiction is a biochemical disorder, or can be understood with superficial measures of behavior, has become self-perpetuating in the addiction literature. The gatekeepers who stand at the threshold of our science journals continue to reward trivial inquiries that shore up this woefully inadequate model of human behavior. If more researchers considered psychological explanations of addiction—and they should, given the preponderance of countervailing evidence that has left the “brain disease” concept in tatters (remember the veterans’ study discussed in chapter 5)—they might take an interest in more humanistic ideas about the people they study.

  If researchers wanted to study the psychology of addiction statistically (more on why this is not a great idea later), they could step away from the rats and examine what precipitates addictive actions in humans. In my second book, I raised this notion as a way to help people predict the next episode of addictive behavior.8 The same question could be studied in a large-scale way by asking people to keep a record of the events, feelings, and situations that precede addictive acts. Subjects could then be interviewed to see if a common emotional thread can be found behind each of these precipitants. We might gather a good amount of evidence and find statistically significant commonalities in that data, suggesting that addiction is a comprehensible psychological symptom. No one has yet tried.
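  As a purely illustrative sketch of how such diary records might be tabulated (the field names and sample entries below are my own assumptions, not a protocol from the book), one could simply count which reported feelings most often precede episodes:

```python
from collections import Counter

# Illustrative diary records: what each person reported feeling just before
# an addictive act. These entries are invented placeholders.
diary = [
    {"event": "argument with spouse", "feeling": "helplessness"},
    {"event": "criticized at work",   "feeling": "helplessness"},
    {"event": "stuck in traffic",     "feeling": "helplessness"},
    {"event": "unexpected bill",      "feeling": "anxiety"},
]

# Tally the feelings that precede episodes; a common thread recurring across
# many subjects would suggest a comprehensible psychological precipitant.
print(Counter(record["feeling"] for record in diary).most_common())
# [('helplessness', 3), ('anxiety', 1)]
```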

  There is one other serious problem with the term “evidence-based science,” and it was highlighted eloquently in a now-famous paper by John Ioannidis, a professor of medicine and director of the Stanford Prevention Research Center at Stanford University School of Medicine.9 Ioannidis showed that a research finding is less likely to be true when the studies conducted in a field are smaller, when effect sizes are smaller (when the measured difference between a positive and a negative finding is small), when researchers are prejudiced in favor of or against a certain result, and perhaps most importantly, when researchers begin with fundamentally inaccurate assumptions about how likely their hypotheses are to be true before the study is run (more on this below). I have seen all of these errors in addiction research: an inadequate number of people in studies, attempts to find statistical meaning in a small effect (an overall success rate of 12-step treatment of only 5 to 10 percent), and bias in presenting data (selection bias, compliance bias, omitting data that doesn’t fit the conclusion).

  The error of starting out with the belief that what you are looking for is likely to be meaningful was first formally recognized by Thomas Bayes, an eighteenth-century English minister and mathematician. He wrote that in experimental science, it is necessary to estimate the chance of each result prior to running an experiment. A good example of why this is important was given by the statistician Nate Silver (famous for accurately predicting virtually every state and national election result in the United States in both 2008 and 2012).10 Silver points out that, for many years, the winner of the Super Bowl was widely said to predict the rise or fall of the stock market for the rest of that year. This was because, starting with the first Super Bowl in 1967 and for the next thirty years until 1997, the stock market gained an average of 14 percent for the rest of the year when a team from the original NFL won the game, but fell almost 10 percent when a team from the original AFL won. Statistically, that correlation “showed” a definite connection between the two events. Indeed, there was just a one in five million probability that this connection was due to chance alone! Without a foundation in Bayesian thinking, one would believe this to be incontrovertible proof that some as-yet-unidentified factor really did tie these two outcomes together. Of course, any such belief was mistaken, as the next fourteen years showed exactly the opposite result.

  Bayes was intrigued by our tendency to seize upon absurd statistical conclusions like this and realized that relying on numbers alone was simply too shortsighted to make sense of statistics, or the world. Numbers contain precious little information about whether a correlation actually reflects a plausible reality or might instead be a statistical blip, hiccup, clump, or random anomaly.
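  The point can be made concrete with a small simulation. This is a minimal sketch, assuming 31 seasons and 5,000 candidate “predictors” (both numbers are my own, chosen for illustration, not taken from Silver’s account): scan enough meaningless indicators against random market movements, and one of them will compile an impressive-looking record by chance alone.

```python
import random

random.seed(1)
YEARS = 31          # roughly the 1967-1997 span discussed above
INDICATORS = 5000   # assumed number of meaningless "predictors" scanned

# Market direction each year: up or down purely at random.
market = [random.random() < 0.5 for _ in range(YEARS)]

# Each candidate indicator is itself a coin flip per year; keep the best.
best_hits = 0
for _ in range(INDICATORS):
    indicator = [random.random() < 0.5 for _ in range(YEARS)]
    hits = sum(i == m for i, m in zip(indicator, market))
    best_hits = max(best_hits, hits)

print(f"Best of {INDICATORS} random indicators: {best_hits}/{YEARS} years")
# Typically around 25-27 of 31: an "impressive" record from pure noise.
```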

  In the case of the Super Bowl, for instance, those who breathlessly repeated and studied the coincidence as if it were significant forgot to ask an important question in plain language first: How could the winner of the Super Bowl have anything to do with the stock market? Bayes said that if you don’t take into account the likelihood of something being true before you interpret the results, then you are stepping into never-never land; failing to consider in advance whether the outcome is realistic robs you of any chance to describe reality. For the Super Bowl correlation, a moment’s thought would have assigned a very low prior probability to its being meaningful. Applying Bayes’ theorem (a simple formula that takes into account the likelihood of an outcome’s being meaningful) would have shown a very low chance that this measured statistic had any validity for the real world. As Ioannidis put it: “The probability that a research finding is indeed true depends on the prior probability of it being true (before doing the study), the statistical power of the study, and the level of statistical significance.”11
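  Ioannidis’s sentence can be turned directly into arithmetic. The sketch below is a minimal illustration with numbers of my own choosing; it computes the positive predictive value of a “significant” finding (the probability the finding is true) from exactly the three quantities he names: the prior probability, the statistical power, and the significance level.

```python
def ppv(prior, power, alpha):
    """Probability that a statistically significant finding is actually true."""
    true_positives = power * prior          # true hypotheses that test positive
    false_positives = alpha * (1 - prior)   # false hypotheses that test positive
    return true_positives / (true_positives + false_positives)

# A long-shot hypothesis (1-in-1,000 prior), tested at p < .05 with 80% power:
print(round(ppv(prior=0.001, power=0.80, alpha=0.05), 3))  # 0.016
# The same test applied to a plausible hypothesis (1-in-2 prior):
print(round(ppv(prior=0.500, power=0.80, alpha=0.05), 3))  # 0.941
```

  On these assumed numbers, roughly 98 percent of “significant” results for the long-shot hypothesis would be false, which is precisely the trap the Super Bowl indicator fell into.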

  All the studies we have seen purporting to show the effectiveness of AA, for instance, begin with the assumption that the AA method is eminently reasonable. As proof, they offer references to each other. In investigating these papers, I found zero references to psychological views of addiction, which might have led the authors to decrease their estimate of the likelihood that their results were describing anything of value. In an insular field that has preemptively decided what it believes, meaningless findings are reinforced and consonant results are amplified without the counterbalance of skepticism. Ioannidis said it best:

  The greater the . . . interests and prejudices in a scientific field, the less likely the research findings are to be true. Conflicts of interest and prejudice may increase bias [and] are very common in biomedical research, and typically they are inadequately and sparsely reported. . . . Scientists in a given field may be prejudiced purely because of their belief in a scientific theory or commitment to their own findings. . . . Prestigious investigators may suppress via the peer review process the appearance and dissemination of findings that refute their findings, thus condemning their field to perpetuate false dogma. . . . Empirical evidence on expert opinion shows that it is extremely unreliable.12

  The root of this error goes beyond mutually supporting belief systems. The addiction field has been dominated by two colossal institutions, neither of which is trained or interested in looking beneath the surface of any behavior to its underlying causes. One of these forces is AA. The other is the titanic shift in psychiatry away from the exploration of human psychology toward more reductive and behavioral models, including the very popular notion that addiction is a disease. Both are riddled with biases that preclude their investigation of more plausible mechanisms behind addiction.

  The end goal of those who study human behavior for genetic markers and neurotransmitters is a seductive fallacy: the notion that someday, with perfect knowledge of our brain chemistry, we might somehow “unlock” the essence of human experience. It is a fallacy because it fails to recognize what more than thirty years of chaos and complex systems theory have already taught us: When networked pieces of anything come together, be they ants in a colony or neurons in a brain, the network exhibits emergent behaviors that are far more strange and complex than anyone could predict from looking at their constituent parts. Indeed, one of the tantalizing findings of this research is that often these behaviors have nothing to do with those constituent parts; they are, in a sense, platform agnostic. One of my favorite quotes by the Nobel laureate Philip Anderson encapsulates the point wonderfully: “Psychology is not applied biology, nor is biology applied chemistry.”13

  How do I know that my own bias toward a psychological perspective isn’t pushing me toward the same flawed and unfounded worldview? First, there is the commonsense fact that addiction looks just like known psychologically caused compulsions and can respond to purely psychological treatment; from a Bayesian standpoint, the idea that addictions and compulsions are intimately related is a sensible hypothesis. Those in favor of a biochemical model must contend with the fact that conditions that truly are biochemical in origin, such as schizophrenia and mania, are fundamentally different from human addiction—they can arise and persist without psychological precipitants and can be treated with medication. Although these biochemical diseases create enormous distress, they do not have a specific emotional meaning or purpose; when appropriately treated with medication, people with these diagnoses can return to their usual state.

  There are other objective factors supporting a psychological view of addiction. As we know from a large academic literature (see The Heart of Addiction for many references), as well as from common experience, addiction in humans follows psychological precipitants, which are idiosyncratic to each individual yet predictable for that person.14 Addictive behavior can shift to compulsive symptoms that are universally understood to be psychological in nature, such as compulsively cleaning the house. And addiction can be successfully understood and treated by understanding how it works psychologically in each person through a talking treatment (psychotherapy). If we had never started out with the misconception that addictions are somehow different from other compulsive symptoms, we would not have made the error of separating them from the rest of the human condition to begin with.

  WHEN NUMBERS MEAN LESS THAN WORDS

  These days, virtually every addiction journal assigns far more value to statistical studies than to clinical findings. The primary claim is that words are not rigorous; numbers are. Yet this perspective fails to account for the complexity of human beings, who are, let’s face it, not just more complex than rats, but more complex than any number could possibly capture. (If someone undergoes therapy and is now more comfortable in intimate situations, what number should we assign to that?)

  Serious psychology journals usually manage this problem by reporting case studies rather than numbers. While individual cases have the limitation that they may not be generalizable to everyone, the accumulated wisdom from many case reports allows increased understanding of the way human beings’ minds work. If you wanted to learn about how radios work, you could take a thousand of them and subject them to an experiment, say, by dropping them off a building, then study the statistical likelihood of their having transistors. Or, you could start with one radio and carefully take it apart. True, there might be other radios that work differently, but after examining this one, you would know in broad strokes how radios work.

  Case reports have tremendous value. They are, quite simply, the only way to describe treatment. They supply a level of detail, nuance, and narrative that doesn’t conform to statistical terms but contains more information. Therapy often yields external, observable consequences of internal changes, but the changes themselves may be impossible to measure except in the subjective experience of the patient. In the case of increased ability to tolerate intimacy, for instance, if a patient who has avoided closeness his entire life is now able to look someone in the eye and spend time talking instead of quickly hurrying away, that may be evidence of a life-changing alteration of his internal state that is deeply meaningful to the patient—yet ultimately immeasurable. Should we therefore discount it? Someone once said, “Not everything that is important can be measured, and not everything that can be measured is important.” This is nowhere more applicable than in the study of human emotions, behaviors, and experience. We don’t have a system of numbers for such things. But they couldn’t be more relevant to the question of addiction.

  The memorable phrase “Lies, damn lies, and statistics,” commonly attributed to Mark Twain, was probably invented out of a combination of humor and pique. But statistics are neither good nor bad. In this book, I have cited statistics when there is no reason to doubt their legitimacy and criticized them when they are applied with bias or other methodological flaws. Perhaps most important, there are places where statistics have no role.

  DESIGNING THE PERFECT STUDY

  So how could we arrive at a more encompassing and broadly applicable consensus about what “works” in addiction treatment? The gold standard in science is the randomized controlled study. (No psychotherapy study can be double-blind, the third common standard, since the clinicians administering the therapy inevitably know which treatment they are offering.)

  Let’s imagine what that study might look like. A large population (across multiple treatment centers around the country) would be randomly assigned to groups that would receive the standard of care in four different approaches, or modalities: cognitive behavioral therapy (CBT), psychodynamic therapy based on the modern understanding described in this book, a 12-step outpatient approach, and a control group given no treatment at all. All groups would be matched for relevant factors such as age, sex, race, income, and educational levels. Follow-up surveys and interviews would be conducted every month through the six-month mark, and then at one year, two years, three years, five years, ten years, and twenty years.
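  As a minimal sketch of the assignment step only (the arm names follow the design above, but the stratification variables and record fields are illustrative assumptions, not a published protocol), randomizing within strata is what would keep the four groups matched:

```python
import random
from collections import defaultdict

ARMS = ["CBT", "psychodynamic", "12-step", "no-treatment control"]

def assign(participants):
    """Randomize within strata (site, age band, sex) so arms stay matched."""
    strata = defaultdict(list)
    for p in participants:
        strata[(p["site"], p["age_band"], p["sex"])].append(p)
    assignment = {}
    for group in strata.values():
        random.shuffle(group)                          # random order per stratum
        for i, p in enumerate(group):
            assignment[p["id"]] = ARMS[i % len(ARMS)]  # balance across arms
    return assignment

# Example usage with invented participants:
people = [
    {"id": 1, "site": "Boston", "age_band": "30-39", "sex": "F"},
    {"id": 2, "site": "Boston", "age_band": "30-39", "sex": "F"},
]
print(assign(people))  # e.g., {1: 'CBT', 2: 'psychodynamic'}
```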

  Shockingly, nobody has ever conducted such a study. Besides a dismaying lack of interest, the other reason is almost certainly money. Major public studies such as these can run well into the millions of dollars. And the organizations with the deepest pockets in this area have the strongest reasons to leave the current paradigm alone. It must ultimately fall to public science or to a wealthy university to get this kind of research off the ground. A fraction of what Americans spend on rehab would cover the entire study, and then some.

  But the researchers would face some profound limitations as well. Psychodynamic work requires long-term follow-up, as well as assessment of outcomes beyond the symptom itself. One reason is that major, life-affecting improvement may occur during treatment but before the addictive behavior ends. If the behavior alone is measured, psychodynamic therapy may therefore appear slower (hence less “effective”) when what is actually happening is that the causes of the behavior are being worked out before the behavior stops (though the addiction may also end before the therapy is very far along, as I described in Breaking Addiction).

  The relative skill of the therapists would also have to be assessed, which is much harder than establishing baseline competence to administer questionnaires or deliver therapy from a workbook (commonplace for CBT); indeed, no universal standard of effectiveness for psychodynamic work has ever been established. In order to adequately test the theory of addiction I’ve described in my work, it would also be necessary to train already-sophisticated psychodynamic clinicians in this new perspective. The good news is that this would not be difficult, since the model is entirely based on already established and accepted psychodynamic understandings.

  The costs and logistics of doing a proper study would certainly be great, but it could be completed with governmental support. Unfortunately, the government’s own agency (the National Institute on Drug Abuse) is deeply invested in its own neurobiological (“brain disease”) idea. And pharmaceutical companies, a ready source of funding for research on drugs, would have nothing to gain by funding a study of psychodynamic treatment.

  For the time being, until a critical mass forms around pursuing the question of addiction treatment from a fuller perspective, the very best contribution individuals can make is to seek out therapists with good general psychological training (and without 12-step bias), and to apply pressure where it is needed by supporting a public campaign for more enlightened addiction research.

  My hope is that the website for this book will become a rallying point for readers to coalesce around the disillusionment so many Americans feel with the current system—and provide a tipping point that leads us toward a better approach to this solvable problem.

  ACKNOWLEDGMENTS

  WE ARE DEEPLY GRATEFUL to have had Helene Atwan as our editor. Her careful attention and perceptive eye made the book as good as it could be.

  We would like to thank our agent, Don Fehr of Trident Media Group, whose enthusiasm for producing a potentially controversial book was essential to its creation.

  Thanks also to Professor Richard Gelber, whose expertise in biostatistics was critical to our evaluation of scientific studies.

  We are grateful to the many people who offered to share, by interview or in writing, their personal experiences with AA and rehab centers.

 
