Pandemic


by Sonia Shah


  * * *

  Is it possible to sever the connection between the amorphous fears inspired by epidemics and other social crises and the misplaced blame that follows? In the philosopher René Girard’s conception, ending the cycle of fear and blame requires establishing the scapegoat’s innocence, as in the New Testament story of Jesus’s persecution, which ends with his resurrection.

  Perhaps a modern form of accountability could perform a similar function today. In Haiti, human-rights lawyers are attempting to do just that, although not to establish innocence but to establish guilt. They are taking the United Nations to court to prove its complicity in the cholera epidemic. Soon after the epidemic exploded in Haiti, the Haitian human-rights lawyer Mario Joseph, in partnership with lawyers in the United States, collected fifteen thousand complaints from cholera-scarred Haitians. Since the Haitian government had granted the United Nations immunity from Haitian courts and the commission that the agency had said it would create in order to process claims against its troops had never been established, Joseph and his colleagues planned to sue the UN in courts in the United States and Europe, demanding an apology and reparations for the cholera epidemic.73

  Joseph’s office is in a grand house in Port-au-Prince, with a heavy door outfitted with brass adornments. Inside, it’s dark and oppressively hot, with just a few ceiling fans whirring so slowly that I wondered, when I visited in 2013, why they’d turned them on at all. The windows are not glassed in, and the sounds of the permanent traffic jam on the road wafted in. Joseph, dark and round in a light blue short-sleeved dress shirt, railed about imperialist, racist interventions in Haiti, sweat beading on his forehead and neck.

  “The Nepalese poured fecal matter in the river and a lot of people drink the river water,” he said. “This is a calamity from this force of occupation!

  “They need to pay people. They need to compensate people. And then they need to apologize for that! Because the United Nations never protected people from Nepali cholera!… Imagine if that happened in the United States, or in France, or Canada or England? What would happen?… I don’t know, is it because we are Haitians?… Is it because we are black? I don’t know! Because we are Haitian? I don’t know!”74

  Joseph’s contention is that the United Nations understood the sanitary conditions in Haiti and thus should have taken adequate precautions to avoid introducing a dangerous new pathogen. It should have more rigorously screened its troops for signs of infection, for instance, and ensured that waste disposal practices on its bases were sanitary. By not doing so, it threw a match onto a gas-soaked pyre. It was not unfairly scapegoated: it was actually culpable. And holding the UN accountable in a court of law would show that.

  There’s little doubt that the cholera in Haiti really did come from Nepal. When scientists compared the genomes of the cholera vibrio circulating in Haiti with a sample of cholera vibrio from Nepal, they found a near perfect match, with only one or two base pairs distinguishing the two.75 Still, I couldn’t help but feel troubled by the underlying idea that even if people are judged to have infected others with pathogens, they should be held legally responsible. It’s possible that neither sanitary waste management on the base nor rigorous health screening of the troops could have prevented cholera from Nepal from coming to Haiti. The person who introduced it was most likely a silent carrier, unaware of the undetectable infection within his body. Even if his waste had been treated on the UN base, it wouldn’t have been almost anywhere else in Haiti. In that way, the Nepalese soldiers who brought cholera to Haiti were like any one of us. With the global biota and all its pathogens on the move, we’re all potential carriers.

  Channeling victims’ fury in court is undoubtedly a lot more constructive than acting it out in the streets. But couldn’t the judgment Joseph sought endow scapegoating with the force of law? Depending on who did the adjudicating, that group of “responsible” people could have included health-care workers in Guinea in 2014, gay people in the United States in the 1980s, and Irish immigrants in New York City in the 1830s. Even if those who introduced new pathogens were accurately pinpointed, as in Haiti, it’s unclear how much of the blame they should be forced to shoulder. Epidemics are sparked by social conditions as much as they are by introductions. Whether it’s deforestation and civil war in West Africa, the lack of sanitation and modern infrastructure in Haiti, or the crowding and filth of nineteenth-century New York City, without the right social conditions, epidemics of cholera and Ebola would have never occurred. Should health-care workers in West Africa, UN soldiers in Haiti, or Irish immigrants in nineteenth-century New York be held responsible for those, too?

  As if to underline the selective nature of the blame game, in which UN soldiers are held accountable but local infrastructure weaknesses are not, in the middle of his rant, the dim fluorescent tubes in Joseph’s office flickered and then clicked off. The various machines in the room—the printer, the computers, the lazily ineffective ceiling fan—all came to a halt. Soon the room was plunged into darkness. I leaned forward in my chair and looked around, preparing to make a move. But Joseph was unfazed. He knew that electricity in Port-au-Prince is spotty. “It will come back,” he said calmly, and kept talking into the battery-powered recorder I’d set upon his massive desk.

  * * *

  The ways in which new pathogens conspire to weaken our social ties and exploit our political divisions are wide-ranging and varied. But there’s still one final way we can defang them. It is, perhaps, the most potent one of all. We can develop specific tools to destroy or arrest them with surgical precision, tools that can be effectively used by any individual with access to them, with no elaborate cooperative effort required.

  Those tools, of course, are medicines.

  The right cures make all the ways we spread pathogens among us moot. With the right cures, spillovers, filth, crowding, political corruption, and social conflict fail to spread pathogens. Epidemics and pandemics are stillborn, and their nonevents pass unnoticed. So long as there’s a drugstore on the corner or a doctor willing to write a prescription, individuals can tame pathogens on their own.

  First, though, those cures have to be developed.

  SEVEN

  THE CURE

  Finding out how cholera spread and how to stop it was a matter of great urgency for every part of nineteenth-century society affected by the disease, but no sector more so than the medical community. The arrival of the deadly new disease charged the medical world like a bolt of lightning. By all accounts doctors and scientists worked intensely to unravel cholera’s mysteries and save their stricken patients. They expounded on their ideas about the pathology and transmission of the disease in scores of papers, lectures, conferences, and essays. They developed a bevy of experimental treatments, theories as to how cholera spread, and interventions designed to arrest it.

  And yet for decades, effective cures for cholera eluded them.

  Their failure was not due to lack of technical capacity. The cure for cholera is almost comically simple. The vibrio does not destroy tissue, like, say, blood-cell-devouring malaria parasites or the lung-destroying tubercle bacilli that cause tuberculosis. It doesn’t hijack our cells and turn them against us, like HIV. As deadly as cholera is, its tenure in the body is really more like a visit from an unpleasantly demanding guest than a murderous assailant. What kills is the dehydration the vibrio causes while replicating in the gut. That means that surviving cholera requires solely that we replenish the fluids it sucks dry. The cure for cholera is clean water, plus a smattering of simple electrolytes like salts. This elementary treatment reduces cholera mortality from 50 percent to less than 1 percent. Preventing cholera by separating human waste from drinking-water supplies was similarly well within the reach of nineteenth-century technology. The aqueducts and reservoirs of the ancients could have done it.1

  The failure was not due to a lack of observations about the nature of cholera, either. Scientists and doctors had noted the association between cholera and dirty water since the earliest days of cholera’s emergence in Europe. In Moscow, the epidemic ravaged the banks of the Moskva; in Warsaw, along the banks of the Vistula; in London, along the banks of the Thames. The French surgeon Jacques Mathieu Delpech noted in 1832 that the cholera in England spread from a central point to the periphery, and “that central point was the bank of the river.” That same year, another French commentator observed that cholera spread from a fountain “full of putrid matter,” and once abandoned, “there were no further cases of cholera.”2 In 1833, a medical professor had even published a map of the city of Lexington, Kentucky, which correlated the location of cholera deaths to the local topography and its filth. The same is true for the saltwater cure. It had been first proposed—and supported with solid evidence—in the 1830s.3

  Nineteenth-century physicians had made the right observations and had the right technology to cure cholera. The problem was that the right observations and the right technology were beside the point.

  * * *

  In 1962, the physicist and philosopher of science Thomas Kuhn explained how the practice of science can paradoxically repress as much as it reveals. Scientists understand reality through the prism of what Kuhn called paradigms, theoretical constructs that explain why things function the way they do. Paradigms provide explanatory frameworks for scientific observations. They’re like elaborate line drawings that scientists fill in with color and detail, reinforcing and enriching the paradigms as they do so. Evolution forms such a paradigm for modern biology; plate tectonics, for modern geology.

  Hippocratic theory was nineteenth-century medicine’s paradigm. According to Hippocratic principles, health and disease were the result of complex, idiosyncratic interplays among large, amorphous external factors, like meteorological conditions and local topography, and unique internal factors. Maintaining and restoring health was a matter of correcting the balance among these various factors.

  These ideas had first been articulated by followers of the ancient Greek physician Hippocrates and had been handed down, essentially intact, for thousands of years. The fifth-century B.C. Hippocratic Corpus, a collection of sixty tomes on health and medicine, along with a ten-thousand-page elaboration on its ideas by the second-century A.D. physician Galen, had been standard issue in medical education since the sixth century. By A.D. 1200, professional licensure in Europe required the study of these works. Important English and French translations of Hippocratic and Galenic texts continued to appear throughout the nineteenth century.4

  Kuhn believed that without such paradigms, science couldn’t exist. The number of available facts and questions that can be asked is potentially infinite. Without a sense of why something might occur the way it does, he observed, there’s no way for scientists to know which questions to ask or which facts to collect. There’s no way to get to the “how” questions that underlie most scientific activity.

  But as useful as paradigms are, they also create subversive dilemmas for scientists. Paradigms create expectations, and expectations limit scientists’ perceptions. Psychologists have described two common cognitive snags that occur: “confirmation bias” and “change blindness.” The problem of confirmation bias is that people selectively notice and remember only the subset of evidence that supports their expectations. They see what they expect to see. They also fail to notice anomalies that contradict their expectations, which is “change blindness.” In one study of change blindness, experimenters purposely violated people’s expectations by covertly switching one interviewer with a different person while the person being interviewed was momentarily distracted. Subjects assimilated the perceptual violation to such an extent that they didn’t consciously register the change. It was as if it never happened at all.5

  Confirmation bias and change blindness are two ways observations that violate expectations—or what Kuhn called anomalies, facts that subvert paradigms—are ignored. These two cognitive biases undoubtedly played a role in Hippocratic doctors’ failure to notice when cholera didn’t proceed according to Hippocratic principles. But Kuhn noted another cognitive dilemma, too. Sometimes, when people are forced to recognize anomalies, they still reject them.

  Kuhn pointed to a 1949 study of cognitive dissonance in which subjects were asked to identify playing cards. Most were normal, but a few were anomalous, such as a red six of spades or a black four of hearts. When people were asked to identify such cards, “without apparent hesitation or puzzlement” they immediately identified them as normal cards, he noted. What the subjects saw was an anomalous red six of spades, but what they said they saw was an ordinary black six of spades or an ordinary red six of hearts. That’s a form of confirmation bias. But what’s interesting is what happened when they were shown the anomalous cards multiple times. Although they became increasingly aware that something was wrong with the cards, they were unclear what exactly it was. Some refused to accept the anomalies and became distressed. “I can’t make the suit out, whatever it is it didn’t even look like a card that time,” one subject said. “I don’t know what color it is now or whether it’s a spade or a heart. I’m not even sure now what a spade looks like. My God!”6

  The history of medicine is replete with examples of this phenomenon. When observations and treatments were unexpected or violated reigning paradigms—and no alternative explanation could be convincingly articulated—they were thrown out on theoretical grounds alone, no matter how well supported they were by evidence. In the seventeenth century, for example, a Dutch draper named Anton van Leeuwenhoek had handcrafted a microscope and discovered bacteria. He examined rainwater, lake water, canal water, and his own feces (among other things), and everywhere he looked he found microorganisms, which he called “animalcules.” Further inquiries could have revealed the role these microbes played in human disease, but instead the study of the body through microscopy went underground for two centuries. The idea that tiny things shaped health and the body in some mechanical fashion violated the Hippocratic paradigm of health as a holistic enterprise. The seventeenth-century physician Thomas Sydenham, known as the “English Hippocrates,” dismissed Leeuwenhoek’s microscopic observations as irrelevant. His student, the doctor and philosopher John Locke, wrote that attempting to learn about disease by examining the body through microscopy was like trying to tell time by peering into the interior of a clock.7

  Similarly, in the eighteenth century, the shipboard doctor James Lind had discovered that lemon juice cured scurvy, a condition of vitamin C deficiency, by the unorthodox method of comparing the outcomes of different groups of sailors who were given different treatments. He’s lauded today for conducting the first clinical trial. But at the time, because he couldn’t support his theory as to why lemon juice worked (that is, according to Lind, because acidic lemons supposedly broke through the pores blocked by damp air), his findings were dismissed. Medical experts recommended ineffectual vinegar rather than lemon.8

  This is just what happened to cholera’s cures in the nineteenth century. The scientists who discovered cholera’s cures were not as fully indoctrinated in the paradigms of Hippocratic medicine as the elite physicians atop the medical establishment. They were outsiders. William Stevens, for example, was a lowly physician who had practiced medicine in the Virgin Islands and was unknown among Britain’s medical elites in London. So was the Scottish physician William O’Shaughnessy. Both advocated for cholera’s lifesaving cure of salty water in the 1830s. Stevens thought it would help correct cholera patients’ dark-colored blood. (He had noticed that salt reddened the blood of his tropical fever patients.) O’Shaughnessy recommended “the injection into the veins of tepid water, holding a solution of the normal salts of the blood,” as The Lancet reported, not only to fix the color of the blood but also to restore the body’s lost fluids and salts.9 In one of the most convincing demonstrations of the therapy’s effectiveness, in 1832 Stevens administered salty fluids to more than two hundred cholera sufferers at a London prison and lost less than 4 percent of his patients.10

  But the logic of the cure—replenishing the fluids lost through vomiting and diarrhea—violated Hippocratic paradigms. According to Hippocratic principles, epidemic diseases like cholera spread through foul-smelling gases called “miasmas” that poisoned those who inhaled them. That’s why cholera patients experienced dramatic vomiting and diarrhea: their bodies were attempting to get rid of the miasmatic poison. Counteracting those symptoms with salt water or anything else was as philosophically wrongheaded as ripping off a scab is today.

  And so medical experts lambasted the salt water advocates. Experts who visited the prison to review Stevens’s results dismissed them out of hand, claiming that the patients he treated never had cholera to begin with. They defined a cholera victim as someone on his or her deathbed, in the throes of what they called “collapse.” As none of Stevens’s patients lay in such a state, by definition they could not have cholera. (The outlandish notion that they might actually have recovered was inconceivable.) “There certainly was not a single case which from any symptoms witnessed by me I could point out as a case of cholera,” asserted one investigator. One young woman, a “very troublesome perverse character,” another pointed out, was just “simulating” cholera.

  Journal editors who reviewed Stevens’s work concluded that he was a charlatan. “We turn from the whole affair with mingled sensations of pity and disgust,” the editors of The Medico-Chirurgical Review wrote. “The best that can be hoped for the ‘saline treatment’ and its authors is, that both may be speedily forgotten.”11 “However it might be with pigs and herrings,” punned one commentator in 1844, “salting the patients was not always the same as curing him.” Saline therapy, another agreed in 1874, had “proved ineffective.”12

  Evidence that showed that cholera spread in dirty water, not miasmas, was similarly dismissed and suppressed. The nineteenth-century London anesthetist John Snow was particularly well placed to grasp the shortcomings of the miasma theory as applied to cholera. Snow had been knocking himself unconscious with various gases—ether, chloroform, and benzene, among others—for years, studying their effects on his body in search of the perfect anesthetic to administer to his patients.13 As an expert on the behavior of gases, he knew that if victims contracted cholera by inhaling a gas, as the medical establishment held, the disease would affect the respiratory system, including the lungs, much like a deep breath of acrid smoke. And yet, it didn’t. Instead, cholera plagued the digestive system.14

 
