The Laws of Medicine
The statistical problem that concerned Bayes requires a sophisticated piece of mathematical reasoning. Most of Bayes’s mathematical compatriots were concerned with problems of pure statistics: If you have a box of twenty-five white balls and seventy-five black balls, say, what is the chance of drawing two black balls in a row? Bayes, instead, was concerned with a converse conundrum—the problem of knowledge acquisition from observed realities. If you draw two black balls in a row from a box containing a mix of balls, he asked, what can you say about the composition of white versus black balls in the box? What if you draw two white and one black ball in a row? How does your assessment of the contents of the box change?
Perhaps the most striking illustration of Bayes’s theorem comes from a riddle that a mathematics teacher whom I knew would pose to his students on the first day of class. Suppose, he would ask, you go to a roadside fair and meet a man tossing coins. The first toss lands “heads.” So does the second. And the third, fourth . . . and so forth, for twelve straight tosses. What are the chances that the next toss will land “heads”? Most of the students in the class, trained in standard statistics and probability, would nod knowingly and say: 50 percent. But even a child knows the real answer: it’s the coin that is rigged. Pure statistical reasoning cannot tell you the answer to the question—but common sense does. The fact that the coin has landed “heads” twelve times tells you more about its future chances of landing “heads” than any abstract formula. If you fail to use prior information, you will inevitably make foolish judgments about the future. This is the way we intuit the world, Bayes argued. There is no absolute knowledge; there is only conditional knowledge. History repeats itself—and so do statistical patterns. The past is the best guide to the future.
It is easy to appreciate the theological import of this line of reasoning. Standard probability theory asks us to predict consequences from abstract knowledge: Knowing God’s vision, what can you predict about Man? But Bayes’s theorem takes the more pragmatic and humble approach to inference. It is based on real, observable knowledge: Knowing Man’s world, Bayes asks, what can you guess about the mind of God?
....
How might this apply to a medical test? The equations described by Bayes teach us how to interpret a test given our prior knowledge of risk and prevalence: If a man has a history of drug addiction, and if drug addicts have a higher prevalence of HIV infection, then what is the chance that a positive test is real? A test is not a Delphic oracle, Bayes reminds us; it is not a predictor of perfect truths. It is, rather, a machine that modifies probabilities. It takes information in and puts information out. We feed it an “input probability” and it gives us an “output probability.” If we feed it garbage, then it will inevitably spit garbage out.
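The “probability machine” can be written down in a few lines. What follows is a minimal sketch, not part of the original text; the sensitivity and false-positive figures in it are hypothetical, chosen only to show how the same positive result means different things for different priors.

```python
# A diagnostic test as a "probability machine": it takes an input
# probability (the prior, set by risk and prevalence) and returns an
# output probability (the chance that a positive result is real).
def posterior(prior, sensitivity, false_positive_rate):
    true_positives = prior * sensitivity
    false_positives = (1 - prior) * false_positive_rate
    return true_positives / (true_positives + false_positives)

# Hypothetical test: 99% sensitivity, 1% false-positive rate.
low_risk = posterior(prior=0.001, sensitivity=0.99, false_positive_rate=0.01)
high_risk = posterior(prior=0.10, sensitivity=0.99, false_positive_rate=0.01)

print(f"{low_risk:.0%}")   # 9%: in a low-risk group, most positives are false
print(f"{high_risk:.0%}")  # 92%: the same positive result is probably real
```

Garbage in, garbage out: the machine is only as good as the prior it is fed.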
The peculiar thing about the “garbage in, garbage out” rule is that we are quick to apply it to information or computers, but are reluctant to apply it to medical tests. Take PSA testing, for instance. Prostate cancer is an age-related cancer: the incidence climbs dramatically as a man ages. If you test every man over the age of forty with a PSA test, the number of false positives will doubtless overwhelm the number of true positives. Thousands of needless biopsies and confirmatory tests will be performed, each adding complications, frustration, and cost. If you use the same test on men above sixty, the yield might increase somewhat, but the false-positive and -negative rates might still be forbidding. Add more data—family history, risk factors, genetics, or a change in PSA value over time—and the probability of a truly useful test keeps getting refined. There is no getting away from this logic. Yet, demands for indiscriminate PSA testing to “screen” for prostate cancer keep erupting among patients and advocacy groups.
The force of Bayes’s logic has not diminished as medical information has expanded; it has only become more powerful. Should a woman with a mutant BRCA1 gene have a double mastectomy? “Yes” and “no” are both foolish answers. The presence of a BRCA1 mutation is well known to increase the risk of ovarian or breast cancer—but the actual risk varies vastly from person to person. One woman might develop a lethal, rapidly growing breast cancer at thirty; another woman might only develop an indolent variant in her eighties. A Bayesian analyst would ask you to seek more information: Did a woman’s mother or grandmother have breast cancer? At what age? What do we know about her previous risks—genes, exposures, environments? Are any of the risks modifiable?
If you scan the daily newspapers to identify the major “controversies” simmering through medicine, they inevitably concern Bayesian analysis, or a fundamental lack of understanding of Bayesian theory. Should a forty-year-old woman get a mammogram? Well, unless we can modify the prior probability of her having breast cancer, chances are that we will pick up more junk than real cases of cancer. What if we invented an incredibly sophisticated blood test to detect Ebola? Should we screen all travelers at the airport using such a test and thereby prevent the spread of a lethal virus into the United States? Suppose I told you, further, that every person who had Ebola tested positive with this test, and the only drawback was a modest 5 percent false-positive rate. On first glance, it seems like a no-brainer. But watch what happens with Bayesian analysis. Assume that 1 percent of travelers are actually infected with Ebola—a hefty fraction. If a man tests positive at the airport, what is the actual chance that he is infected? Most people guess some number between 50 and 90 percent. The actual answer is about 16 percent. If the actual prevalence of infection among travelers drops to 0.1 percent, a more realistic fraction, then the chance of a positive test being real drops to a staggering 2 percent. In other words, 98 percent of tests will be false, and our efforts will be overwhelmed trying to hunt for the two cases that are real out of a hundred.
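The Ebola figures above follow directly from Bayes’s rule. A minimal sketch of the arithmetic, assuming, as the passage states, that the test catches every true case and carries a 5 percent false-positive rate:

```python
# Chance that a positive airport screen is a real Ebola case,
# given the prevalence of infection among travelers.
def chance_positive_is_real(prevalence, sensitivity=1.0, false_positive_rate=0.05):
    true_positives = prevalence * sensitivity
    false_positives = (1 - prevalence) * false_positive_rate
    return true_positives / (true_positives + false_positives)

print(f"{chance_positive_is_real(0.01):.1%}")   # 16.8% at 1% prevalence
print(f"{chance_positive_is_real(0.001):.1%}")  # 2.0% at 0.1% prevalence
```

At realistic prevalences, the false positives swamp the true ones, exactly as the text describes.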
Can’t we devise tests of such accuracy and consistency that we can escape the dismal mathematical orbit of Bayes’s theorem? What if we could decrease the false-positive rate to such a low number that we would no longer need to bother with prior probabilities? The “screen everyone for everything” approach—Dr. McCoy’s handheld all-body scanner in Star Trek—works if we have infinite resources and absolutely perfect tests, but it begins to fail when resources and time are finite. Perhaps in the future we can imagine a doctor who doesn’t have to take a careful history, feel the contours of your pulse, ask questions about your ancestors, inquire about your recent trip to a new planetary system, or watch the rhythm of your gait as you walk out of the office. Perhaps all the uncertain, unquantifiable, inchoate priors—inferences, as I’ve called them loosely—will become obsolete. But by then, medicine will have changed. We will be orbiting some new world, and we’ll have to learn new laws for medicine.
....
LAW TWO
* * *
“Normals” teach us rules; “outliers” teach us laws.
Tycho Brahe was the most famous astronomer of his time. Born to a wealthy family in Scania, Denmark (now a part of Sweden), in 1546, Brahe became keenly interested in astronomy as a young man and soon began a systematic study of the motion of planets. His key discovery—that the stars were not “tailless comets” that had been pinned to an invisible canopy in the heavens, but were massive bodies radiating light from vast distances in space—shot him to instant fame. Granted an enormous, windswept island estate on the Øresund strait by the king, Brahe launched the construction of a gigantic observatory to understand the organization of the cosmos.
In Brahe’s time, the most widely accepted view of the universe was one that had been proposed centuries earlier by the Greek astronomer Ptolemy: the earth sat at the center of the solar system, and the planets, sun, and moon revolved around it. Ptolemy’s theory satisfied the ancient human desire to sit at the center of the solar system, but it could not explain the observed motion of the planets and the moon using simple orbits. To explain those movements, Ptolemy had to resort to bizarrely convoluted orbital paths, in which some planets circled the earth, but also moved in smaller “epicycles” around themselves, like spinning
dervishes that traced chains of rings around a central ring. The model was riddled with contradictions and exceptions—but there was nothing better. In 1512, an eccentric Prussian polymath named Nicolaus Copernicus published a rough pamphlet claiming—heretically—that the sun sat at the center of the planets, and the earth revolved around it. But even Copernicus’s model could not explain the movements of the planets. His orbits were strictly circular—and the predicted positions of the planets deviated so far from the observed positions that it was easy to write it all off as nonsense.
Brahe recognized the powerful features of Copernicus’s model—it simplified many of Ptolemy’s problems—but he still could not bring himself to believe it (the earth is a “hulking, lazy body, unfit for motion,” he wrote). Instead, in an attempt to make the best of both cosmological worlds, Brahe proposed a hybrid model of the universe, with the earth still at the center and the sun moving around it—but with the other planets revolving around the sun.
Brahe’s model was spectacular. His strength as a cosmologist was the exquisite accuracy of his measurements, and his model worked perfectly for nearly every measured orbit. The rules were beautiful, except for a pesky planet called Mars. Mars just would not fit. It was the outlier, the aberration, the grain of sand in the eye of Tychonian cosmology. If you carefully follow Mars on the horizon, it tracks a peculiar path—pitching forward at first and then tacking backward in space before resuming a forward motion again. This phenomenon—called the retrograde motion of Mars—did not make sense in either Ptolemy’s or Brahe’s model. Fed up with Mars’s path across the evening sky, Brahe assigned the problem to an indigent, if exceptionally ambitious, young assistant named Johannes Kepler, a mathematician from Germany with whom he had a stormy, on-again, off-again relationship. Brahe quite possibly threw Kepler the “Mars problem” to keep him distracted with an insoluble conundrum of little value. Perhaps Kepler, too, would be stuck cycling two steps forward and five steps back, leaving Brahe to ponder real questions of cosmological importance.
Kepler, however, did not consider Mars peripheral: if a planetary model was real, it had to explain the movements of all the planets, not just the convenient ones. He studied the motion of Mars obsessively. He managed to retain some of Brahe’s astronomical charts even after Brahe’s death, fending off rapacious heirs for nearly a decade while he pored carefully through the borrowed data. He tried no fewer than forty different models to explain the retrograde motion of Mars. The drunken “doubling back” of the planet would not fit. Then the answer came to him in an inspired flash: the orbits of all the planets, including Mars, were not circles but ellipses around the sun. Seen from the earth, Mars moves “backward” in the same sense that one train appears to pitch backward when another train overtakes it on a parallel track. What Brahe had dismissed as an aberration was the most important piece of information needed to understand the organization of the cosmos. The exception to the rule, it turned out, was crucial to the formulation of Kepler’s laws.
....
In 1908, when psychiatrists encountered children who were withdrawn, self-absorbed, emotionally uncommunicative, and often prone to repetitive behaviors, they classified the disease as a strange variant of schizophrenia. But the diagnosis of schizophrenia would not fit. As child psychiatrists studied these children over time, it became clear that this illness was quite distinct from schizophrenia, although certain features overlapped. Children with this disease seemed to be caught in a labyrinth of their own selves, unable to escape. In 1912, the Swiss psychiatrist Paul Eugen Bleuler coined a new word to describe the illness: autism—from the Greek word for “self.”
For a few decades, psychiatrists studied families and children with autism, trying to make sense of the disease. They noted that the disease ran in families, often coursing through multiple generations, and that children with autism tended to have older parents, especially older fathers. But no systematic model for the illness yet existed. Some scientists argued that the disease was related to abnormal neural development. But in the 1960s, from the throes of psychoanalytical and behavioral thinking, a powerful new theory took root and held fast: autism was the result of parents who were emotionally cold to their children.
Almost everything about the theory seemed to fit. Observed carefully, the parents of children with autism did seem remote and detached from their children. That children learn behaviors by mirroring the actions of their parents was well established—and it seemed perfectly plausible that they might imitate their emotional responses as well. Animals deprived of their parents in experimental situations develop maladaptive, repetitive behaviors—and so, children with such parents might also develop these symptoms. By the early 1970s, this theory had hardened into the “refrigerator mom” hypothesis. Refrigerator moms, unable to thaw their own selves, had created icy, withdrawn, socially awkward children, resulting ultimately in autism.
The refrigerator mom theory caught the imagination of psychiatry—could there be a more potent mix than sexism and a mysterious illness?—and unleashed a torrent of therapies for autism. Children with autism were treated with electrical shocks, with “attachment therapies,” with hallucinogenic drugs to “warm” them to the world, with behavioral counseling to correct their maladapted parenting. One psychiatrist proposed a radical “parent-ectomy”—akin to a surgical mastectomy for breast cancer, except here the diseased parent was to be excised from the child’s life.
Yet, the family history of autism would not fit the model. It was hard to imagine emotional refrigeration, whatever that was, running through multiple generations; no one had documented such an effect. Nor was it simple to explain away the striking incidence of autism in children of older male parents.
We now know that autism has little to do with “refrigerator moms.” When geneticists examined the risk of autism in identical twins, they found a striking rate of concordance—between 50 and 80 percent in most studies—strongly suggesting a genetic cause of the illness. In 2012, biologists began to analyze the genomes of children with so-called spontaneous autism. In these cases, the siblings and parents of the child do not have the disease, but a child develops it—allowing biologists to compare and contrast the genome of a child with that of his or her parents. These gene-sequencing studies uncovered dozens of genes that differed between parents without autism and children with autism, again strongly suggesting a genetic cause. Many of the mutations cluster around genes that have to do with the brain and neural development. Many of them result in altered neurodevelopmental anatomies—brain circuits that seem abnormally organized.
In retrospect, we now know that the behavior of the mothers of autistic children was not the cause of autism; it was the effect—an emotional response to a child who produces virtually no emotional response. There are, in short, no refrigerator moms. There are only neurodevelopmental pathways that, lacking appropriate signals and molecules, have gone cold.
....
The moral and medical lessons from this story are even more relevant today. Medicine is in the midst of a vast reorganization of fundamental principles. Most of our models of illness are hybrid models; past knowledge is mishmashed with present knowledge. These hybrid models produce the illusion of a systematic understanding of a disease—but the understanding is, in fact, incomplete. Everything seems to work spectacularly, until one planet begins to move backward on the horizon. We have invented many rules to understand normalcy—but we still lack a deeper, more unified understanding of physiology and pathology.
This is true even for the most common and extensively studied diseases—cancer, heart disease, and diabetes. If cancer is a disease in which genes that control cell division are mutated, thus causing unbridled cellular growth, then why do the most exquisitely targeted inhibitors of cell division fail to cure most cancers? If type 2 diabetes results from the insensitivity of tissues to insulin signaling, then why does adding extra insulin reverse many, but not all, features of the disease? Why do certain autoimmune diseases cluster together in some people, while others have only one variant? Why do patients with some neurological diseases, such as Parkinson’s disease, have a reduced risk of cancer? These “outlying” questions are the Mars problems of medicine: they point to systematic flaws in our understanding, and therefore to potentially new ways of organizing the cosmos.
Every outlier represents an opportunity to refine our understanding of illness. In 2009, a young cancer scientist named David Solit in New York set off on a research project that, at first glance, might seem like a young scientist’s folly. It is a long-established fact in the world of cancer pharmacology that nine out of ten drugs in clinical development are doomed to fail. In pharmaceutical lingo, this phenomenon is called the valley of death: a new drug moves smoothly along in its early phase of clinical development, seemingly achieving all its scientific milestones, yet it inevitably falters and dies during an actual clinical trial. In some cases, a trial has to be stopped because of unanticipated toxicities. In other cases, the medicine provokes no response, or the response is not durable. Occasionally, a trial shows a striking response, but it is unpredictable and fleetingly rare. Only 1 woman in a trial of 1,000 women might experience a near complete disappearance of all the metastatic lesions of breast cancer—while 999 women experience no response. One patient with widely spread melanoma might live for fifteen years, while the rest of the cohort has died by the seventh month of the trial.
The trouble with such “exceptional responders,” as Solit called them, was that they had traditionally been ignored, brushed off as random variations, attributed to errors in diagnosis, or ascribed, simply, to extraordinary good fortune. The catchphrase attached to these case histories carried the stamp of ultimate scientific damnation: single-patient anecdotes (of all words, scientists find the word anecdote particularly poisonous, since it refers to a subjective memory). Medical journals have long refused to publish these reports. At scientific conferences, when such cases were described, researchers generally rolled their eyes and avoided the topic. When the trials ended, these responders were formally annotated as “outliers,” and the drug was quietly binned.