The Fever


by Sonia Shah


  Or say, as a retired public health inspector in Italy did in 1926, that the Anopheles mosquitoes that laid gray eggs carried malaria, while those that laid, say, dark eggs did not. The inspector, Domenico Falleroni, collected mosquito eggs as a hobby, and had noticed that individual female mosquitoes always laid eggs with the same markings. Finding the delicately designed eggs quite beautiful, Falleroni painstakingly described and categorized them, even naming two types messeae and labranchiae, after his friends from the health department Drs. Messea and Labranca.

  Some scientists apparently conducted dental examinations of local mosquitoes and, finding nothing, dropped Roubaud’s idea. Falleroni’s ideas about the markings of eggs they dismissed as a “mere eccentricity of nature,” as biologically insignificant as “spots on mongrel puppies,” as the Rockefeller Foundation’s Lewis Hackett put it.85

  It wasn’t until the late 1930s that a collaboration between Italian scientists and malariologists from the Rockefeller Foundation revealed that what had been inexpertly termed Anopheles maculipennis was in fact five different species of Anopheles mosquito, visually indistinguishable except for the delicate markings of their eggs. Of the five, only Falleroni’s Anopheles messeae and Anopheles labranchiae transmitted the scourge; the others were blameless.

  • • •

  This finding resolved the scientific impasse that had impeded widespread acceptance of the mosquito’s role in malaria for some forty years, decoding the mystery of millions of cases of European malaria.

  Even more than that, it ushered in a new paradigm for malariology. Scientists realized that deciphering the ecology of malaria transmission required much more than simply checking the Anopheles credentials of the local mosquitoes. In Europe, delicate mosquito eggs had to be gathered and studied. In the United States, the two tiny hairs that protrude from mosquito larvae’s heads had to be examined—on harmless Anopheles punctipennis, those hairs are close together at the base; on the killer Anopheles quadrimaculatus, they’re spread ever so slightly apart.86

  As the morphological differences between Anopheles species became clear, so did their unique habits, and the specificity with which each would have to be stalked within its own ecological niche. To stanch transmission, local entomologists had to work with engineers, who had to work with health officers and clinicians. For, as Lewis Hackett put it, “the best method in one place may be the worst possible thing to do only forty miles away.”87

  Hackett summarized the new thinking in 1937:

  A mosquito, harmless in Java, is found to be the chief vector in the interior of Sumatra. A method of treatment unusually successful in India is almost without effect in Sardinia. The half-mile radius, sufficient for larval control in Malaya, has to be quintupled in the Mediterranean basin. A village in Spain, in which half the population is in bed with chills and fevers in August, turns out to be less infected than a village in Africa where virtually no one has to abandon work on account of malaria at any time.88

  Malaria, he said, was “so moulded and altered by local conditions that it becomes a thousand different diseases and epidemiological puzzles.”89 Each would have to be unraveled on its own terms.

  • • •

  It took malariology four decades to grasp the futility of single-bullet solutions to malaria. And yet, the paradigm-shifting insight described by Hackett has mostly been lost.

  In part that’s because of the vagaries of malaria research funding. Local, ecologically driven malaria research is not particularly applicable to other areas of the economy, nor to other areas of the world. It must be funded locally for public health reasons alone, and political will or financial resources are lacking for it in most malaria-endemic regions. It’s challenging enough to fund proven treatment and prevention, let alone in-depth investigations into the entomology, ecology, and epidemiology of local malaria transmission. Even in wealthy countries, support for malariology ebbs and flows. When Italian authorities believed they’d solved their domestic malaria problem in the 1920s, for example, they dismantled their malaria research infrastructure altogether. When the United States and other international public health authorities believed DDT and chloroquine would end malaria, they similarly stopped funding research.

  It’s also because, by the time Hackett’s revelation emerged, much of the infrastructure for malaria science had already been built, and it suited locally grounded, ecologically minded malariology about as well as a shoe fits a hand. None of the malaria research centers established by Patrick Manson, authorities in the British Raj, or the Rockefeller Foundation were sited in malaria-endemic regions, where malariologists could study malaria up close and on the ground. With the financial support of colonial authorities, Manson helped found the London School of Hygiene and Tropical Medicine in the malaria-free city of London.90 The Raj built malaria research institutes in India’s cool hill towns, which Britishers found more comfortable but where malaria seldom arose.91 The Rockefeller Foundation poured dollars and the expertise of its malariologists into public health research at American universities, such as Johns Hopkins and Harvard.92 These centers form the backbone of global malaria research to this day.

  As a result, the thrust of the most high-profile and well-funded malaria research is devised by specialists oceans away from wild malaria, intended for use everywhere regardless of local malaria ecologies, and sponsored by funders intent on bold gestures, not idiosyncratic tinkering. It’s the very opposite of locally tailored.

  Take the boom in malaria vaccine research.

  There are now dozens of experimental malaria vaccines percolating in labs across the globe. Vaccine research is expensive, and progress has been limited. The vaccine at the most advanced stage of development, called Mosquirix, was first created by scientists at GlaxoSmithKline from a tiny piece of the falciparum parasite, specifically a subunit of a protein of the sporozoite. So-called subunit vaccines, while considered safer than vaccines made from whole pathogens, don’t generally trigger particularly vigorous or long-lasting immune responses. Mosquirix is no exception. Clinical trial results released at the end of 2008 showed that Mosquirix reduced the incidence of infection by 65 percent and of clinical malaria by nearly 60 percent, but for only six months.93

  Another malaria vaccine candidate, created from a whole sporozoite damaged by radiation, has triggered complete immunity, but only in nonimmune adults exposed to experimental malaria in the lab. That vaccine, under development by a new company called Sanaria, suffers from fierce manufacturing and distribution headaches. As of 2008, the sporozoites have to be raised in live mosquitoes, and vaccination would entail injecting people with up to ten thousand live and kicking P. falciparum sporozoites, an unpopular approach that could badly backfire.94 Most of the other vaccines are aimed at blood-stage parasites. Such vaccines can reduce illness, but they won’t prevent infection or interrupt transmission.

  These results are not surprising. Effective vaccines such as those against yellow fever and smallpox are based on the fact that the immune system can naturally create perfect immunity to those pathogens if exposed at a low level first. That’s why in places where yellow fever and smallpox have been endemic, many local people naturally acquire complete immunity. The immune response against malaria is neither as complete nor as long-lasting.95

  Plus, malaria vaccines must face the challenge of the malaria parasite’s multiple forms, each of which plays a different role in the business of malaria illness and transmission. A vaccine that helped the body fight off sporozoites could help prevent infection, but that same vaccine would not help the body battle merozoites, and so could not prevent illness, nor would it help fight gametocytes and thus prevent transmission (unless it provided extraordinary, 100 percent effective protection). Moreover, a vaccine that acted specifically against P. falciparum could not be expected to exert any action against P. vivax, or P. malariae, or P. ovale, not to mention other strains of P. falciparum.

  That’s why, in the 1960s, experts at WHO and USAID decided against launching malaria vaccine research, and in 2007 a World Bank–sponsored report on malaria vaccines pronounced that “failure will continue to be the norm rather than the exception.”96

  And yet, malaria vaccine research is one of the most lushly funded and high-profile areas of malaria research today. Why? Because a vaccine is the ultimate single-bullet solution: one shot that would bestow upon its recipient complete lifetime protection against Plasmodium. A vaccine would dispense with complex ecological conditions and convoluted malaria epidemiology. Distribution wouldn’t even require a health clinic. Thousands could be vaccinated in a matter of days, cheaply, via traveling makeshift camps. Since the late 1990s, the Bill and Melinda Gates Foundation, which promotes bold, technically difficult solutions, has devoted $150 million exclusively to the search for the malaria vaccine.

  Or consider the high-tech genomics-based drug development research that Dyann Wirth and her colleagues conduct at the Harvard Malaria Initiative. Wirth studies the genetic diversity of P. falciparum, charting its vast variability, in order to pinpoint parasite genes and proteins under siege by the human immune system. It is a “very basic, very fundamental” kind of work, Wirth says. If successful, the research will reveal potential weak spots inside the parasite.97 Then molecular biologists may be able to synthesize a drug to attack it.

  If Hackett’s vision of malariology was as an interdisciplinary, holistic science, the Harvard Malaria Initiative’s method of drug development is the very definition of reductionist. Our best antimalarials—quinine and artemisinin—are nothing like the synthetic, highly targeted chemicals that HMI’s model will eventually create. They’re diffuse-acting compounds created by plants and discovered by traditional healers, which is in part why quinine has yet to provoke much resistance, despite centuries of use. HMI’s drugs will pursue the parasite with intense, surgical precision, inevitably exerting pressure on it to evolve resistance. There are cheaper, more proven methods of finding new malaria medicines—for example, by tramping the world collecting traditional medicines, or screening thousands of compounds for antimalarial activity. That’s how we’ve found most other drugs and all other antimalarial drugs.

  But despite the fact that HMI’s drug development model is unproven and will render the kind of drugs most likely to trigger resistance in the parasite, it, too, is lavishly funded. Why? Because it uses cutting-edge, economy-building technology. In 2002, scientists decoded the genomes of P. falciparum and A. gambiae, and their results were splashed dramatically across the covers of top scientific journals Nature and Science simultaneously. Some of the most well-endowed institutions in the world turned their attention to malaria then, many for the first time ever.98

  It’s hard to predict how long the current funding wave for malaria research will last, or what may follow it. Perhaps the wave will gather strength and be followed by another one, and another—a flood spilling over the land. Or perhaps, when enthusiasm for genomics or bold gestures inevitably crashes, the wave will quietly recede.

  After my visit to Harvard, I descend to the underground parking lot to find my beat-up Honda amid the Audis and BMWs. I pass through a sun-filled atrium, where students and faculty lunch on green salads and beautifully ripened fruits, their backpacks and satchels slung over the backs of their chairs. Perhaps their aspirational, high-tech research will render just the kind of scientific solutions the malarious masses need. If the history of malariology is any guide, the next scientific breakthrough in malaria could come from anywhere at all. Tomorrow’s antimalarial superweapon may well be lurking in the lab notebooks and journal papers that peek out of these researchers’ bags.

  Before pulling open the door, I draw thick mittens over my hands. The winter air outside the humming building is cold indeed.99

  8. THE DISAPPEARED: HOW MALARIA VANISHED FROM THE WEST

  Life for a malarial mosquito isn’t easy inside London’s Houses of Parliament these days. It’s drafty. Sharp, clammy winds waft through the imposing building, which is cold and dimly lit. In the autumn of 2006, a few unfortunate mosquitoes, plucked from their adopted home in a local laboratory—where they were coddled in a specially heated and humidified insectary—braved the punishing conditions in service of a small and mostly ignored exhibit about malaria. The exhibitors positioned the mosquitoes inside an eighteen-inch glass cube placed on a table ringed by a few bulletin boards in a vast and empty hall tucked in some out-of-the-way corner of the building. The organizers had no doubt hoped the shivering insects would provide an edge of drama to the exhibit—real, live killer mosquitoes!—diluting the dispiriting eighth-grade-science-project effect of the Formica table and thumbtacked posters. But London being London, the heat had been turned off in the Parliament over the weekend, and the exhibit’s six-legged headliners froze to death.

  Now it was up to a cadre of mosquito specialists, summoned from the nearby London School of Hygiene and Tropical Medicine, to replenish the cube. Inside their sultry insectary, they lured a few captive Anopheles gambiae into a Dixie cup, placed a piece of gauze over the top, and secured it with a rubber band. Professor Chris Curtis tucked the cup and a few other tools into his handbag and, with malariologist Jo Lines, two research fellows, and a technician in tow, boarded the London Tube bound for the Parliament.

  One of the sponsors of the exhibit—though the very picture of modesty, the exhibit had been sponsored by a raft of multinational outfits, including GlaxoSmithKline, Novartis, Royal Dutch Shell, and UNICEF—would meet the mosquito-laden scientists at the entrance to the building. Curtis called the benefactor, simply, a “very rich lady.” Whisked through security (“What is that?” a guard exclaimed with some alarm, encountering Curtis’s two-foot-long mosquito-sucking tube), the group headed straight for the expiring insects. A pile of mosquito corpses lay on a piece of filter paper at the bottom of the forlorn cube. (“So malaria has been eradicated!” joked Lines. Pause. “In Parliament.”)

  The research technician got to work. She pulled out a little sachet and dispatched one of the research fellows to the men’s room to soak it in water. Meanwhile, Curtis rolled up his sleeve and fished out the Dixie cup, which he turned upside down and placed snugly over his forearm, so the gauze pressed against his skin.

  Passersby murmured in wonder as Curtis encouraged the mosquitoes to feast on his blood. The mosquitoes rooted around on Curtis’s arm frenetically. The journey and the strange new environment had chilled and rattled them, the technician, Shahida Begum, told me. They might refuse the blood meal altogether. As finely attuned to their needs as Begum was, there wasn’t much she could do. They might not like the taste of Curtis, she whispered.

  In the insectary, Begum could keep captive Anopheles alive for around thirty days. In the stiff and proper environs of the Houses of Parliament, despite the ministrations of a crack team of world-class experts, they’d be lucky if they lasted the night.

  • • •

  How the mighty have fallen! Earlier generations of Anopheles in London did not shrink in forgotten corners. They ruled the city, unleashing outbreaks of malaria that held its inhabitants in fevered thrall. Once, the powerful men in the Houses of Parliament quaked in their boots at the thought of the mosquito’s wrath.

  Granted, that was around four hundred years ago. By then, malaria had long prospered in England. P. vivax first arrived in Britain after the Roman Empire declined and the Roman technology that had held malaria at bay fell into ruin. By the end of the first millennium, P. vivax had sunk its tentacles deep into the low-lying marshlands around the Thames estuary. It didn’t leave until the end of the nineteenth century.

  Kent is today known as the Garden of England for its lovely orchards, and Essex, among other things, as a popular suburban area convenient to London, but as long as malaria ruled these two counties, nobody wanted to live there. Those who did were called marsh dwellers, and regularly suffered the “marsh ague.” Although ague literally means “acute,” and at the time could refer to any fever or malaise, the marsh ague was a very particular disease: a “rigor and horror which is succeeded by heat and that afterwards by a sweat,” as the seventeenth-century English physician Thomas Sydenham described it, leaving little question as to its malarial nature. The marsh dwellers were “very rarely without” it, wrote the eighteenth-century historian Edward Hasted, “and if they survive, are generally afflicted with them till summer, and often for several years.”

  Horrified visitors noted the swollen spleens of the local children—which they dubbed “ague cake”—and the sallow complexions of the adults who lived with them. In the marsh counties “it is not unusual to see,” wrote Hasted, “a poor man, his wife, and whole family of five or six children, hovering over their fire in their hovel, shaking with an ague all at the same time.” It was “the moory soil, the watry atmosphere,” an anonymous poet wrote, “With damp, unhealthy moisture . . . [and] thick, stinking fogs, and noxious vapours,” that were to blame. “Agues and coughs are epidemicall; / Hence every face presented to our view / Looks of a pallid or a sallow hue.”

  The highland women whom the marsh men often married and brought back to their malarial homes succumbed with terrible regularity. “When the young lasses . . . came out of their native air into the marshes among the fogs and damps, there they presently changed their complexion, got an ague or two, and seldom held it above half a year or a year at most,” wrote the seventeenth-century English novelist Daniel Defoe. Defoe claimed to have met men in Kent and Essex who’d lost more than a dozen wives to the marsh ague.1

 
