by Steven Hatch
Cut it out, they have not. ILADS, the alternative society, continues to spread its message about the pervasiveness of chronic Lyme and the benefits of prolonged therapy as if it were 2007 and nothing had transpired since. When I completed the first draft of this chapter in the summer of 2014, its website, as well as other chronic Lyme discussion groups on the Internet, included a great deal of information about the attorney general’s investigation of IDSA. Yet there was effectively no mention of the review panel’s vindication of IDSA.
Attorney General Blumenthal has since become US senator Blumenthal. He was so chastened by the report of the review panel that almost as soon as he joined the Senate in 2011, he introduced legislation calling for the establishment of a “Tick-Borne Advisory Committee” that would “ensure a broad spectrum of scientific viewpoints” in formulating Lyme policy. There can be no doubt that what Blumenthal means by “broad spectrum” is that it should include the very chronic Lyme viewpoint the review panel had so forcefully rejected. Indeed, at the press conference announcing the bill, he prominently showcased members of the chronic Lyme advocacy groups, as clear a message as any that his thinking had not really changed. In short, he is trying to achieve through the government bureaucracy what he could not with an independent panel of scientists—a strategy that the alternative Lyme groups have pursued successfully at the state level for several years.
David Marsh, the young man who has come to my office, is aware of this controversy, although his interpretation of it would differ from my account. In the months preceding our visits he had increasingly read up on chronic Lyme via the large number of websites devoted to the subject. He is planning on attending an ILADS conference. Because of the almost complete repudiation of the chronic Lyme viewpoint among mainstream physicians—and it should be clear by now that I am, at least in this respect, very much a mainstream physician—most patients who strongly support the chronic Lyme theories do not visit university-based infectious disease consultants. To a great extent, these patients inhabit a hermetically sealed world and have very few discussions with people like me. Our visits represented a rare moment when those two worlds interfaced rather than collided. In part, this was because David stood on the threshold of that separate world and wanted to talk to me before deciding to enter it wholly.
What followed between us were conversations, ultimately, about the limits of certainty, and what action one should take in the face of uncertainty. I could not, for instance, say that he never had Lyme. He lived in a place where his risk of Lyme infection was high, and his outdoor activities increased his risk further. I could not even say that his symptoms weren’t due to some prior encounter with Lyme: although there is almost total consensus that Lyme cannot survive the course of antibiotics recommended in the guidelines, there is an equal recognition that a very small percentage of patients who have Lyme develop the kind of chronic fatigue from which David suffers, a condition referred to as “post-Lyme syndrome,” though really that’s just a label saying we know something is going wrong but have little idea what that something is. So perhaps this is all the consequence of a fully treated infection that nonetheless triggered his immune system to go haywire. I just don’t know.
Nor do I know what diagnosis, infectious or otherwise, to provide him in place of Lyme. Considerable intellectual firepower and scientific resources have been brought to bear on the subject of chronic fatigue syndrome. Decades ago, some researchers working on the Epstein-Barr virus (a common cause of mononucleosis) considered it to be a leading candidate as the cause of chronic fatigue. Subsequent population studies, however, made it clear that nearly everyone gets infected by EBV, so if it is indeed the cause, nobody has yet shown how it causes symptoms in only a small group of people.* For a brief period at just about the time the Lyme review panel was announcing its conclusions, scientists reported that a newly discovered pathogen with the nightmarish name of xenotropic murine leukemia virus-related virus looked like it was the cause, but no other labs were able to validate their findings, and the paper was eventually retracted, as it appears that the virus may have been an accidental laboratory contaminant. Many other candidates have been nominated, most of them viruses, bacteria, or industrial chemicals, but nothing has thus far panned out. On the subject of the cause of chronic fatigue, doctors remain in the dark.
It is worth emphasizing here that the discovery that EBV was widespread wasn’t, by itself, evidence that EBV didn’t cause chronic fatigue. The vast majority of people infected with tuberculosis live their entire lives without becoming sick from it—doctors refer to this as “latent tuberculosis infection.” The discovery of latent TB, however, doesn’t mean that Mycobacterium tuberculosis isn’t the cause of the TB that kills so many, as there is more than a century of accumulated evidence supporting that theory. So EBV remains a potential cause of chronic fatigue syndrome; there has simply been no convincing evidence to support that thus far.
That’s a lot of uncertainty to swallow, especially for someone who is understandably desperate for an answer and some help, but that isn’t the end of the story with respect to Lyme. As I wrote at the beginning of the chapter, we occupy a spot on the spectrum of certainty that moves away from the uncertainties of benefits and toward the increasing certainties of harm. For although we don’t know why some patients develop chronic fatigue, we do know that months and months of antibiotics don’t help. We know that because multiple studies were done, and all of them failed to show any meaningful benefit. By “know,” I don’t mean know with absolute, unimpeachable certainty, but rather I mean know within the limits of what can reasonably be expected at this point in time. It’s the kind of knowing that doctors require to do their diagnosing and treating every day of their careers. It’s the kind of knowing that people require to do their living and functioning every day of their lives.
Moreover, I know a few other things. I know that his negative Lyme Western blot is pretty strong evidence that, whatever happened in the past, he isn’t infected now. Furthermore, the Western blot was done despite a negative ELISA, and a negative ELISA likewise provides a strong degree of certainty that he isn’t currently infected. The ELISA’s parameters were set for high sensitivity, so that virtually no one who is infected tests negative; its shortcoming, like that of the ELISA test for HIV, lies in its inability to exclude uninfected patients in the positive range. So a negative ELISA, coupled with a negative Western blot, should at least provide the comfort—though granted, it isn’t much—that Lyme is not the source of his troubles. The greater value of this information is that it can protect him from useless treatments that won’t help him and could very possibly make his life worse. On these points, I am much more confident, and I tell him so.
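To make that logic concrete, here is a minimal sketch in Python of how a test’s sensitivity, specificity, and the background rate of infection combine to determine how much trust a negative or a positive result deserves. The numbers are invented for illustration only; they are not the actual performance characteristics of the Lyme ELISA.

```python
# Toy illustration: why a screening test tuned for high sensitivity makes
# a NEGATIVE result trustworthy while a POSITIVE result is not.
# All figures below are hypothetical.

def predictive_values(sensitivity, specificity, prevalence):
    """Return (positive predictive value, negative predictive value)."""
    true_pos = sensitivity * prevalence
    false_neg = (1 - sensitivity) * prevalence
    true_neg = specificity * (1 - prevalence)
    false_pos = (1 - specificity) * (1 - prevalence)
    ppv = true_pos / (true_pos + false_pos)   # chance a positive is real
    npv = true_neg / (true_neg + false_neg)   # chance a negative is real
    return ppv, npv

# Hypothetical screen: catches 99% of true infections (high sensitivity)
# at the cost of many false alarms (80% specificity), in a population
# where 2% of those tested are actually infected.
ppv, npv = predictive_values(sensitivity=0.99, specificity=0.80, prevalence=0.02)
print(f"Chance a positive result reflects infection:    {ppv:.1%}")  # ~9%
print(f"Chance a negative result reflects no infection: {npv:.1%}")  # ~100%
```

With parameters like these, a positive result is right less than one time in ten, which is exactly why it needs a confirmatory Western blot, while a negative result rules out infection with near certainty.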
But that is where we part ways, as he is bound for the ILADS conference where he will undoubtedly find a physician who will be more receptive to his request for the tetracycline, or perhaps encourage him to try a more toxic regimen. Part of what makes these visits painful is that I sense a degree of mutual respect. He is eloquent and thoughtful, reflective about his troubles. He is coping with his illness with tremendous grace. I think that he appreciates my candor as I emphasize what we do and don’t know about chronic fatigue. I hope he sees that I am, despite our different views about Lyme, trying to listen to him and validate him in what ways I can. Ultimately, however, he is disappointed in me.
I am disappointed that I don’t have answers for him.
6
THE ORIGINS OF KNOWLEDGE AND THE SEEDS OF UNCERTAINTY
There is nothing men will not do, there is nothing they have not done, to recover their health. They have submitted to be half-drowned in water, and half-choked with gases, to be buried up to their chins in earth, to be seared with hot irons like galley-slaves, to be crimped with knives, like codfish, to have needles thrust into their flesh, and bonfires kindled on their skin, to swallow all sorts of abominations, and to pay for all of this, as if to be singed and scalded were a costly privilege, as if blisters were a blessing and leeches were a luxury.
—OLIVER WENDELL HOLMES
In the previous chapters, we looked at various forms of medical uncertainty and the impact that such uncertainty has on patients, whether they are submitting blood for an insurance screen, or calling the local hospital to schedule a screening test, or considering starting medications to aim for a goal blood pressure, or seeking relief from unrelenting fatigue. But before we look further at uncertainty, it is best to consider how our modern medical knowledge differs from that of the past—that is, to look at the origins of our modern “certainty,” even while we keep our eyes fixed on the ever-present problem of uncertainty.
In this chapter, I will focus on how making the proper comparisons allows us to know the true value of medications. Shortly I will discuss some of the biggest, most successful drugs of our time and how their success is built on this very modern line of reasoning. As I briefly noted in the introduction, our modern medicines really do work in a way that medicines from two hundred years ago mostly didn’t. We have much more specific indications, coupled with a considerably deeper appreciation for physiology than premodern healers could have dreamed of. From a pharmacologic standpoint, it is a remarkable time to be alive—especially when stacked against the kind of “drugs” that were considered standard fare a few hundred years ago, when establishment physicians, Western or otherwise, routinely offered treatments that would now leave us in speechless recoil.
Yet if you peer beneath the surface of this notion that nearly all drugs before the twentieth century were worthless, you will find some odd and irreducible curiosities. Take a look today at any hospital formulary or drugstore shelf, and you will encounter several medications absolutely central to our basic armamentarium of healing. You might be surprised to find that their properties were understood not only before 1900 but long before that year arrived.
Patients who suffer from heart failure, especially in the setting of a condition known as atrial fibrillation, are still sometimes prescribed digoxin, whose precursor, digitalis, was the dominant cardiac drug for much of the twentieth century; it is derived from extracts of the foxglove plant, whose pharmacologic properties were first described in 1785. Senna, a treatment for constipation, is still commonly used today and sold over the counter; it was introduced by Arab physicians of the ninth century. Our most important painkillers remain the narcotic family of drugs known as opiates; the first modern drug in this class, morphine, was developed in 1804, but a working knowledge of the anesthetic properties of opium dates back thousands of years. And perhaps the single most important drug in the history of humanity, acetylsalicylic acid, better known as aspirin—a drug with a variety of uses and demonstrably lifesaving qualities, coupled with a reasonably low side-effect profile—was synthesized in its modern, pure form in 1899, but descriptions of the medicinal properties of willow and other plants rich in related salicylates can be found in Egyptian texts from the second millennium BCE. If these primitive healers didn’t know anything about anything, how did they know these plant extracts worked? And since they did appear to know at least something about something, what makes our way of knowing different from theirs, or have we deluded ourselves into thinking we know stuff when we’re no better than they were?
To consider this question, let’s pretend there’s a very bad disease. The symptoms start out in a nonspecific way: people feel fatigued and their muscles ache. But it gets worse. After a time, their skin becomes rough, and they bruise easily. Their bones start to hurt. Wounds don’t heal. Their mouths and eyes become dry, their gums swell, and their teeth fall out. Eventually their livers fail and they slip into a coma, dying a few hours or days later. The cause of this disease is completely unclear. Patients are dropping like flies.
Now let’s forget we know anything about modern medicine for a moment and imagine that someone proposes a series of treatments for this strange disease using some common, minimally toxic substances that we incorporate into our daily diet. They suggest that we take twelve people suffering from this malady, split them into six groups of two, and give each pair one of these substances to see what will happen. Two people will be given a quart of cider to drink each day; two will add a small amount of acid to their drink; two will have a few tablespoons of vinegar added to their food; two will be given oranges to eat and lemons to suck on; two will eat food flavored liberally with nutmeg; and the final pair will drink a few glasses of saltwater each day.
Sounds like nonsense, right? Perhaps such a study might send a frisson of excitement through the homeopathic crowd, but most people would look at this “drug trial” and assume that it is distinctly unscientific in its underlying philosophy as well as its execution. After all, why on earth would one choose these items to begin with, or was it just some random hodgepodge? This “trial,” insofar as it is a trial at all by modern standards, appears doomed. It’s a classic nonscientific muddle consisting of taking some arbitrary substances, giving them to some very sick people, and hoping for the best. One doesn’t have to be a research scientist to sense that this isn’t what modern scientific research is about.
But what happens when just one pair—say, the two who had oranges and lemons—starts to recover? In fact, with their oranges and lemons, they are back to their normal selves within days. None of the other pairs have recovered in the meantime.
It turns out that although this example may seem outlandish, it isn’t made up. What I’ve just described really did happen, and it is often regarded by historians as one of the key moments when contemporary Western medicine took shape. Sometimes referred to as the first modern drug trial, this is the experiment that the Scotsman James Lind performed on patients suffering from scurvy.
The research subjects, divided into pairs as I described above, were British sailors; the two who got oranges and lemons—Lind may have been inspired by the English nursery rhyme, which dates back at least a century before his experiment—were fit for duty inside a week.* Lind performed his experiment in 1747 and published his Treatise on Scurvy describing this Lazarus-like effect six years after that, but it would be nearly forty more years before the juice of citrus fruits was routinely added to the grog of sailors and scurvy virtually disappeared from the British navy, in part because Lind himself appears not to have fully understood the importance of his own finding. He continued to work with the navy for the next two decades, experimenting with all kinds of dietary supplements, but he never became the champion of oranges and lemons that we would have expected based on the trial results.
Lind isn’t the source of the phrase “oranges and lemons,” but he is given the lion’s share of credit for the origin of the term “limeys” because once the scurvy-preventing properties of citrus fruit were finally understood, limes became the main additive to the British navy’s diet. They were more abundant, apparently: in Roy Porter’s comprehensive history of medicine, limes are referred to as “less effective” than lemons, but from what I can find on Doctor Internet, they both appear to have more than the necessary daily amount of vitamin C.
What can we gain from this anecdote? That big, big changes to the scientific method sometimes start small—so small, in fact, that it isn’t obvious to the scientists in the moment what profound discovery they’ve stumbled upon.
The Organized Search for Dissimilarities
So why does James Lind get such credit for being a pioneer of science? It hardly seems like an auspicious achievement in retrospect. His theories as to the cause of scurvy were a typical eighteenth-century morass, consisting of concerns about putrefaction of poorly digested food and living in the damp environment of sea ships. Like all physicians of that time, he didn’t even possess the intellectual scaffolding to place an idea such as vitamin deficiency in his thinking. Scurvy is caused by a lack of vitamin C, and the initial discoveries of the chemical compounds we now call vitamins took place in the 1890s, one full century after Lind’s death. He was lucky rather than smart, having randomly picked an effective treatment for scurvy, finding a fruit rich in a chemical he knew nothing about—indeed, to say that he knew anything of chemicals themselves in the way we think of that term is going too far. So why do we solemnly invoke his name at medical school convocations and the like?
The simplest answer is, he compared things. He administered such-and-such to this group, a different thing to another group, and so on, and monitored the effects. That may seem like a pedestrian innovation, but it’s the basis for everything we know in clinical medicine. All clinical research, or at least good clinical research, is a variation on the theme of seeing what happens when X is administered to one group, and X is not administered to another. We call the second group a control group, and the research controlled trials. Lind didn’t actually have a control group, because each of his pairs of sailors was given a different treatment. He ended up having controls by default—most of the other treatments had no vitamin C—but of course he didn’t know that. Not all controlled trials today have only one intervention, although it’s unusual to see more than two or three interventions in a single trial.
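To see the bare bones of that compare-X-to-not-X logic in action, here is a toy simulation in Python. The recovery rates and group sizes are invented for illustration; they are not drawn from Lind’s experiment or from any real trial.

```python
import random

# Toy simulation of a controlled trial: give a treatment to one group,
# withhold it from another, and see whether the outcomes differ.
# All rates below are invented for illustration.

random.seed(1747)  # the year of Lind's experiment, used here only as a seed

def run_trial(n_per_arm, recovery_rate_treated, recovery_rate_control):
    """Count how many subjects recover in each arm of a two-arm trial."""
    treated = sum(random.random() < recovery_rate_treated for _ in range(n_per_arm))
    control = sum(random.random() < recovery_rate_control for _ in range(n_per_arm))
    return treated, control

# Suppose the treatment truly helps (60% recover) while untreated
# patients rarely recover on their own (10%).
treated, control = run_trial(n_per_arm=100,
                             recovery_rate_treated=0.60,
                             recovery_rate_control=0.10)
print(f"Recovered with treatment:    {treated}/100")
print(f"Recovered without treatment: {control}/100")
```

One design note worth making: with only two sailors per arm, as Lind had, chance alone could easily have produced a misleading difference; part of what modern trials add to his basic insight is enough subjects for the comparison to mean something.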
Doing this kind of research is not especially glamorous work: it requires intellectual rigor without intellectual creativity. But without that basic tool of comparing X to not-X, we’d have no greater insight than the Quacks of Old London into which drugs are helpful and which are a waste of money and resources—and which may well make us sicker rather than healthier. Modern clinical medicine really did start at the moment Lind doled out those odd treatments, even though he hadn’t really a clue what he was doing or how important that approach would later be.