by Steven Hatch
Narrative and Uncertainty
Why do people—physician and patient alike—have such difficulty coping with concepts such as probability and uncertainty? The answers can be found in the disciplines of evolution and psychology and are largely beyond the scope of this book, but the power of stories, and the influence of narratives on our thinking, is critically important. We think of ourselves, and of the universe around us, in absolute terms of cause and effect. We don’t regard our lives as being subject to mere chance; we assume that the variables are within our control and that our successes can be attributed to our strengths and our failures to our weaknesses. Medicine, too, is a story of sorts, and we resist the notion that chance plays a key role in the endeavor.
But this just isn’t so. It is a trick of the mind, and it impedes us from understanding the modern world. Daniel Kahneman, a Nobel laureate in economics, refers to this as the “narrative fallacy,” writing that it inevitably arises “from our continuous attempt to make sense of the world,” adding that “the explanatory stories that people find compelling are simple; are concrete rather than abstract . . . and focus on a few striking events that happened rather than on the countless events that failed to happen.” In medicine—both at the personal and at the policy level—succumbing to the narrative fallacy can be disastrous.
Take a look at nearly any news story on medicine, and you will see this devotion to narrative in full view. Invariably, a story on a new diabetes drug or a fancy new surgical technique or an unfortunate reaction to a medication will begin with the saga of one (or more) patients. All too frequently statistics aren’t even mentioned: Is this patient’s story common or rare? Is the story applicable to the many or the few? When these rather important details are sidestepped, the misunderstandings can be profound, with the result that patients and families often feel betrayed when the state-of-the-art technology fails to deliver.
I think the reason people have so much difficulty coping with uncertainty is that these powerful narratives, from which the narrative fallacy arises, are both hidden and in plain sight. You can almost pluck these narratives out of the air as they swirl around us. They are found in the e-mails that circulate through cyberspace, where links to health news items are shared by colleagues and friends; they lurk in the television dramas that portray doctors working at the cutting edge; and they can be heard in the chit-chat of weekend dinner parties, with people exchanging concerns and fears, both real and imagined, about various public health scares. Nowhere in these exchanges will you find people stating these narratives explicitly, for spoken aloud they would seem laughable oversimplifications. Yet I would argue that they are there nonetheless, and they have a major influence on our thinking.
A variety of mutually reinforcing narratives are upended by a discussion about overdiagnosis. “Technology is beneficial” is one message (with the implication that it is always so), and this overlaps with “images reveal everything” and “expert doctors radiate confidence.” These are entries in the “medicine is good” category, and they explain in part the enduring goodwill of the vast majority of the public toward physicians and why medicine remains among the most respected of professions.
Then there is the dark side of these narratives, the ones that fall under the “medicine is bad” heading, and although they are often held by people completely hostile to the basic principles of modern medicine, they aren’t held only by antiscientific cranks. These messages include “technology is cold,” “the pharmaceutical industry tries to keep people sick for profit,” and “doctors are too often too sure of themselves.” Such narratives lead to the kind of distrust that has allowed, for instance, the spread of so-called alternative medicine, which is mostly harmless as long as patients are healthy but can sometimes lead to delays in treatment for people with serious illnesses.
At some level, the “medicine is good” and the “medicine is bad” narratives are both right. I am not implying that I believe in an anything-goes cultural relativism, but rather that, because our knowledge is based on uncertainty, medicine cannot be essentially only one thing or another. You can find specific stories to support both of these worldviews. Want to be appalled by doctors who suck at the teat of the pharmaceutical industry through bloated consulting fees and influence peddling that is corruption in all but name? Spend some time investigating some of the more unsavory aspects of modern psychiatry.* Want to see the triumph of modern medicine as it conquers death? Learn about the development of electrocardiography or the history of antibiotics and what each has done for patients in the past hundred or so years.
* Sorry, psychiatry: you’re far from the only specialty with such problems, but you may be the worst. And, to be clear, by “psychiatry” I am referring not (solely) to psychiatrists, i.e., the specialists trained in psychiatry, but to the practice, in whatever form, of psychiatry, which includes a hefty number of primary care physicians doling out atypical antipsychotics and the like.
Thus, any of these narratives can be “true” in particular situations. My point is that they’re so powerful that they can lead all of us, doctors and patients alike, to ignore evidence suggesting that in a given situation those narratives are wrong. The drug industry may be driven by profits (see: psychiatry, modern practice of), but the vaccines it makes really are safe and lifesaving—indeed, astonishingly so—and perform vastly better than virtually any other pharmaceutical product. Yet there is a persistent and irrational belief, even among some highly sophisticated and well-informed people, that vaccines are associated with a variety of harms, particularly autism. The tenacity with which people cling to such views, and their obliviousness to the strong evidence that such views are simply wrong, is due in large measure to the power of these narratives that color our thinking.
This book argues that, to find our way forward and extricate ourselves from being victims of these sometimes overly simplistic narratives, we must look past the slogans and instead directly wrestle with the data. I don’t offer an analysis of all the narratives that underlie each of the stories I will tell in the coming pages, but they are there nonetheless, coloring and shaping our attitudes about new medical developments in addition to long-standing medical practices. These narratives, like the science and medicine they try to make sense of, are not inherently good or bad; they are tools by which we can either improve our wellness, or butcher ourselves—sometimes to death—in the name of health.
1
PRIMUM NON NOCERE: THE MOTIVATIONS AND HAZARDS OF OVERDIAGNOSIS
Many people are overconfident, prone to place too much faith in their intuitions . . . when people believe a conclusion is true, they are also very likely to believe arguments that appear to support it, even when those arguments are unsound.
—DANIEL KAHNEMAN
Stripped of all its complexities, the process of medicine consists of two basic activities. The first activity concerns diagnosis: the identification of some condition, some malady that afflicts the body. The second activity—done with some hope and luck in addition to some science—is treatment: the attempted eradication of disease, or at least the relief of symptoms, through the prescription of medicines or the correction of anatomy via surgery. Doctors, and the folk healers who preceded them, have been engaged in this two-pronged approach for millennia.
The emphasis on careful diagnosis characterizes the vocation of medicine even from antiquity: it is why we still regard Hippocrates, who lived more than 2,000 years ago, as our profession’s paterfamilias. Although our medicine has changed radically since the time of the ancient Greeks, we can still see the faint outlines of their diagnostic approach to this day. For instance, medical students routinely learn to observe a characteristically odd shape to fingernails known as “clubbing”; its identification suggests a variety of chronic diseases ranging from emphysema to cancer, and Hippocrates himself is credited with the earliest known clinical descriptions of the phenomenon. To this day, some of the old-timers will still use the term “Hippocratic nails” to describe clubbing.
The process of diagnosis until very recently consisted mainly of talking to patients about their symptoms and examining them—that is, investigating their bodies for the physical signs associated with disease. But we have witnessed a quantum leap over the past generation or so, and the act of diagnosis today would have blown the mind of a doctor from one hundred years ago. What doctors possess now to not only identify but precisely locate a disease is astonishing. MRIs and CT scans allow us to peek under the hood, so to speak, and look all the way through the body as if we had sliced it up, sometimes finding small pockets of infection or cancer that would have been difficult bordering on impossible to find before their advent. Blood tests can distinguish among dozens of viruses and bacteria, identify whether some infections occurred in the past or are happening in the present, report our electrolyte levels, locate the exact spot of a genetic mutation out of 3 billion “letters” of DNA, and do all of this within days, if not hours. Cameras can now be used to visualize seemingly any part of the body on a television screen, even to the point where we can swallow a pill with a tiny camera and in less than a day drop off a cartridge with an image file of a colonoscopy.
As we launch into this first stop on our tour of uncertainty, one point requires some emphasis at the outset: this is amazing. We are living in a golden age of diagnosis. There are entire categories of disease of which we have only recently become aware—conditions that we had mistakenly lumped into others, lacking the technology to distinguish them. As we improve in our ability to discover diseases and find ever-better tests by which we can diagnose such diseases, the likelihood that doctors will make life better for their patients increases exponentially. These new technologies are absolutely indispensable to that process.
That said, the changes wrought by what we might call the great diagnostic shift are only beginning to be understood. There has been a slow recognition that not all of these changes have been beneficial. The precision with which we can make diagnoses is profound, but precision is not the same thing as certainty. One of the main problems with diagnosis today is that these new technologies are so sensitive, so able to find disease at such early stages of its development, that they leave us too confident that we’ve identified disease when we probably haven’t. This process of finding-disease-that-isn’t-disease is called overdiagnosis, and it is possibly the most important real-world consequence of our misguided faith in certainty.
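To see why a very accurate test can still mislead, consider a hypothetical illustration (the numbers here are mine, chosen for arithmetic convenience, and do not describe any particular test discussed in this book): suppose a scan catches 95 percent of people who truly have a rare condition and correctly clears 95 percent of those who don’t, and suppose the condition affects 1 in 100 of the people scanned. The short sketch below, in Python, works out what fraction of positive results would actually reflect real disease.

```python
# Hypothetical illustration only: even an accurate test yields mostly
# false alarms when the condition it looks for is rare in those tested.
prevalence = 0.01   # assumed: 1 in 100 people scanned truly have the condition
sensitivity = 0.95  # assumed: chance the test flags someone who has it
specificity = 0.95  # assumed: chance the test clears someone who doesn't

true_positives = prevalence * sensitivity            # sick and flagged
false_positives = (1 - prevalence) * (1 - specificity)  # healthy but flagged

# Positive predictive value: probability a positive result reflects real disease
ppv = true_positives / (true_positives + false_positives)
print(f"Chance a positive result is real disease: {ppv:.0%}")  # roughly 16%
```

Under these assumed numbers, only about one positive result in six corresponds to genuine disease; the rest are false alarms. Nothing in that arithmetic requires the scan to be sloppy; it is simply what happens when a sensitive instrument is pointed at a population in which the condition is uncommon, which is exactly the terrain where overdiagnosis thrives.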
The best place to start to illustrate this, ironically, is by investigating a profession that still barely uses all of this new diagnostic technology: psychiatry. I’ll begin by looking at one of the more infamous chapters in the history of that profession and considering how that might be related to CTs and MRIs and blood tests and all the rest.
Error Machines
Between 1969 and 1972, a group of eight patients presented to a variety of psychiatric hospitals across the United States with a remarkably similar complaint. They all said they were hearing voices. This is hardly uncommon in psychiatric institutions, but in each case the voices said the same three words: “empty,” “hollow,” and “thud.”
Many of these eight patients worked in the field of mental health: four were either psychologists or psychiatrists, and a fifth was a graduate student in psychology. The remaining three included another doctor (a pediatrician), a painter, and a housewife. None of them had any prior history of psychiatric problems, and none of them behaved strangely during their initial evaluations. Following the initial assessments, seven of these eight patients were admitted with the diagnosis of paranoid schizophrenia, while the eighth was diagnosed with manic-depressive psychosis.
What made this cohort of patients singularly interesting to the profession of psychiatry, and why they are known to posterity, is that in addition to having precisely the same auditory hallucination at different times and in different places, they shared one other feature: they were all completely sane. These eight “patients” were, in fact, volunteers for a brilliant and devious study conducted on the psychiatrists, and more broadly on the staff at the psychiatric facilities where the evaluations took place. The results were published in 1973 in one of the most unusual papers ever to grace the pages of Science, then as now one of the greatest scientific journals in the world.
The research question of the author, Dr. David Rosenhan, was elegant in its simplicity: Could highly trained mental health professionals detect a sane patient in an insane place? The answer was a resounding no. The details of what has since come to be called the Rosenhan experiment make for somewhat uncomfortable reading and can be found without difficulty on the Internet. The results, written in a fluid prose style easily comprehensible to nonscientists, should be humbling to the profession of psychiatry. More than forty years after the publication of this seminal experiment, I am still not sure that is the case.
The key to the experiment relied on the distinction between a psychiatric symptom in a sane person and the clinical definition of insanity. For the experiment, the participants were otherwise normal people presenting with one single, transient, unusual symptom. They all reported experiencing an isolated auditory hallucination, and although such a symptom is associated with mental illness, a lone hallucination is not equivalent to schizophrenia. Many people experience transitory hallucinations without having a total psychiatric breakdown. What is required for a diagnosis like schizophrenia is evidence of a person’s complete inability to function in the world as a consequence of the hallucinations, and an inability to stop or control those hallucinations to the point where the person is overwhelmed. The pseudopatients—who included Rosenhan himself—were all instructed to behave as they normally did in life, behavior that prior to their admission had raised no alarms with anyone. The only psychiatric diagnosis that could reasonably be made fell far short of schizophrenia or manic depression.
Their sanity made no difference. Despite the paucity of evidence that they suffered from deeper pathology, not only did all of the participants end up with the most global and profound labels of mental illness, but they were also held for astonishingly long times to manage these conditions. The average length of hospitalization was nineteen days, with the shortest lasting seven days and the longest a dumbfounding fifty-two. Moreover, it was hardly the psychiatrists alone who failed to notice the sanity of the participants, as the nurses and attendants treated the patients in the same manner. Indeed, a review of the nursing records showed that the staff interpreted specific behaviors as evidence of pathology, even though such behavior would raise no suspicions outside a mental hospital. “Patient engaged in writing behavior” was one of the observations made when a pseudopatient was simply taking notes on his surroundings for later review, a line that has since been quoted hundreds of times in the medical literature and by patient advocates skeptical of modern psychiatry’s benefits.
By contrast, the insane—that is, the true psychiatric patients—were surprisingly good at spotting sanity. During the first three hospitalizations, 35 of 118 patients voiced varying degrees of suspicion that the participants were either journalists, professors, or some other kind of sane person “checking up on the hospital.” Despite reassurances that they “had been sick before but were fine now,” some of the actual patients persisted in their belief that the pseudopatients had no history of psychiatric problems for the entire length of the hospitalizations in question. In some ways, the number is even more remarkable, because the 35 “diagnoses” offered by the psychiatric patients came unbidden, in contrast to those of the psychiatry staff who were charged with the responsibility of making a diagnosis. “The fact that the patients often recognized normality when staff did not raises important questions,” Rosenhan commented in what can only be regarded as a major understatement.
Yet the most fiendish portion of the Rosenhan experiment came after these pseudopatients were discharged. Rosenhan presented the results to the staff at a prestigious psychiatric research and teaching hospital and was greeted with widespread disbelief. Surely, the staff argued, such a miscarriage of medicine would not occur at our institution. So Rosenhan tipped his hand and informed them that over the course of the next three months, one or more pseudopatients would be admitted to their hospital. He provided the staff with questionnaires to assess their assessments, so to speak, of the level of sanity or insanity of the admitted patients. During the experimental period, 193 patients were admitted. Of these, 41 were considered to be pseudopatients by at least one staff member, and 23 were labeled pseudopatients by the psychiatrists. Nineteen patients in all were identified as pseudopatients by at least two mental health professionals. In reality, none of the 193 admissions were pseudopatients—or, as Rosenhan sardonically noted, if they were, they weren’t involved in his research.
Viewed in sum, the Rosenhan experiment appears to be a case study in the limitations of psychiatric diagnosis and the effects of institutionalization on the mentally ill. “It could be a mistake, and a very unfortunate one, to consider that what happened to us derived from malice or stupidity on the part of the staff,” he wrote in conclusion. “Where they failed, as they sometimes did painfully, it would be more accurate to attribute those failures to the environment in which they, too, found themselves than to personal callousness.”
The meaning of the Rosenhan experiment is still hotly debated to the present day. Does it demonstrate that psychiatry is hopelessly mired in subjective impressions, where once a label is applied, it sticks beyond all reason, to the point that prolonged sane behavior cannot even be recognized for what it is? Or was the experiment jury-rigged to arrive at this conclusion in the first place? Among the criticisms of the research was that, instead of testing the ability of psychiatrists to diagnose insanity, it was actually just testing the ability of patients to lie. One analysis offered the following counterfactual: if a person could steal a few pints of blood, swallow them, and later present to an emergency room vomiting up the blood without explaining what had happened, the subsequent diagnosis of a stomach ulcer wouldn’t mean that the staff didn’t know how to diagnose that condition.