The Fever


by Sonia Shah


  The retired WHO scientist Socrates Litsios, a hunched, white-haired New Yorker, takes obvious pleasure in describing the antics of his solemn and ponderous former employer. He describes WHO’s statistical methods this way: Different WHO programs devote themselves to different diseases, from flu to tuberculosis to malaria. Jockeying for public interest, influence, and funding, and working in relative isolation, each tends to exaggerate the burden of its assigned disease. Finally, someone added up the mortality figures for all the diseases, which resulted in an impossible, implausible sum. Embarrassed, WHO held a meeting and literally doled out the numbers, Litsios says. Eyes gleaming, he imagines the scene: “Okay, measles, you get one million; malaria, you get a million; tuberculosis, a million.”41 He roars with delight.

  For outsiders, of course, malaria is not some vague, mild, ignorable illness. It’s a killer disease, a scourge of the poor, a travesty in the modern world. That’s our outsider’s perspective, and we stick to it, disregarding, just as we have for centuries, the actual social experience of those who live with the disease.42

  In the same way we’d dismiss the justifications of an alcoholic, we dismiss malaria patients’ apathy as a symptom of their disease. After all, malarious communities are isolated—for malaria repels outsiders—and their chronic disease burden leaves them weakened and debilitated. The more malaria they have, the more remote and impoverished they become—and they adapt to this reality. They accept malaria, in other words, because malaria itself has lowered their expectations. That’s no reason for us, we figure, to do the same.43

  We portray malaria in our media as a ferocious disease preying on powerless people. A photograph in The New York Times illustrating a story on a new antimalarial drug hatched in Western labs, for example, pictures a Mozambican boy lying on a rough wooden bench and gazing mournfully at the camera. The caption explains his obvious sorrow and lassitude by noting that the child has just learned he has malaria and that the disease kills three thousand African children a day. The boy, the reader is led to understand, has just received a death sentence. In fact, in endemic countries such as Mozambique, people get tested for malaria not because they are worried that they have it, but in the hopes that they do, for that would mean they don’t have anything worse. The positive malaria diagnosis the boy received would have been, in fact, a solace.44

  We attribute the underlying conditions that create the social experience of malaria to a simple lack of money and the things it can buy. Malaria in Africa “is just a cash question, basically,” said Martin Hayman, a London lawyer and consultant for malaria-control organizations.45 Money buys better drugs, for example, so we ship the drugs to Africa, and the problem is solved. And yet, even if the quality of antimalarial drugs were to be improved from 85 percent to 100 percent, the overall effectiveness of malaria treatment could improve by only a single percentage point.46 That’s because, as two German epidemiologists found when they posted observers in local clinics and pharmacies, only 21 percent of people with malaria actually visit health centers. Of these, nearly 70 percent don’t have a sufficient history taken, and more than 30 percent don’t have their temperature taken. Twenty percent are prescribed the wrong drugs at the wrong doses. Ten percent don’t bother buying the drugs, and more than 30 percent don’t take the drugs as prescribed. The fact that the drugs are only 85 percent effective accounts for a very small portion of the failure in effective treatment. Even with 85 percent effective drugs, only 3 percent of local people were being effectively treated for malaria. If the drugs were 100 percent effective, the epidemiologists reckoned, the percentage of people effectively treated for malaria would rise only from 3 percent to 4 percent.47

  We send reporters to the malaria-plagued to demand testimony on their need for Western rescue from the malarial wolf. I witnessed one such exchange, between a BBC reporter and a Cameroonian woman holding her deathly ill child. How would she pay for the hospital visit, the reporter demanded. It was an impossibly rude question, delivered sans preamble, but, both parties knew, it was critical for the central premise of the BBC story. The African mother must be captured on record describing her need for money. The woman’s face crumpled. The predicament she found herself in, of course, was much more complicated than cash. Whether she was about to cry or laugh was impossible to tell.

  Our outsider’s perspective on malaria strikes those we seek to help as incomprehensible. Across the malarious world, medical anthropologist H. Kristian Heggenhougen writes, people profess “puzzlement over the focus on malaria.” People who live in poverty and who face myriad life-and-death issues wonder “why outsiders pay such attention and resources on what they see as a minor concern within the range of problems they face every day.”48 They “cannot understand why malaria should be selected for elimination,” says Thai social epidemiologist Wijitr Fungladda, “rather than their poor living conditions or any other disease.”49 (So what do they want? The New York Times’s Tina Rosenberg cites a survey that asked rural poor people just that. “The first three items,” Rosenberg notes, “were a radio, a bicycle and, heartbreakingly, a plastic bucket.”50)

  This is nothing new. For centuries, outsiders’ sense of malaria as a killer disease has collided with the actual social experience of those who live with it. When missionary doctor David Livingstone steamed down the Shire River to Chikwawa in 1859, he came to help save the Africans from the “kingdom of darkness” in which they lived. Although his explorations in Central Africa were not explicitly for the purpose of disease alleviation—Livingstone hoped to “make an open path for commerce and Christianity”—the notion that Africa required such moral and economic uplift rested upon his conception of the continent as backward and diseased, under siege and in need of external rescue. Livingstone, like other Brits of the time, equated climate with health, and good health with good morals, which led him to believe that the heavy toll of African pathogens on British explorers indicated a malignancy in the land and moral turpitude in the people. They care for “no god except being bewitched,” Livingstone complained, and were “inured to bloodshed and murder.”51 By establishing missions across Central Africa, Livingstone would, he believed, light the interior and banish this moral darkness.

  Livingstone’s long-term survival in Central Africa probably rested on the quinine therapy he pioneered, and the fact that he regularly used a mosquito net and wore heavy boots.52 (Anopheles gambiae are especially attracted to the smell of human feet.) But in keeping with the guiding principles of his work, he chalked it up to his own moral strength and respect for good clean living. “It is our conviction that we owe our escape from the disease . . . to the good diet provided for us by H.M. Government,” he wrote to The Medical Times and Gazette in 1859. He avoided “imprudent . . . exposure to the sun,” and partook of “regular and active exercise.”53 Livingstone’s project of enlightening Africa proved wildly popular throughout the English-speaking world. His book, Missionary Travels and Researches in South Africa, sold a staggering seventy thousand copies. He was the “hero of the hour,” enthused Harper’s magazine in 1857, “a man whose travels, adventures, and discoveries in the interior of Africa are only excelled by the heroism, philanthropy, and self-sacrifice which he has displayed.”54

  But the central premise of Livingstone’s project, by his own experience, was deeply flawed. While nineteenth-century British society projected a dark, diseased continent in need of Christianity’s spiritual uplift, Livingstone discovered instead that while African diseases regularly felled his European compatriots, the local peoples who joined his expeditions remained healthful.55 In Chikwawa, he found abundance and good health: luxuriant stands of cassava, beans, tobacco, pumpkins, okra, and millet tended by vigorously singing villagers. Chikwawa’s chief did not plead for help or make threatening or depraved gestures, but warmly welcomed the explorer. “We were not to be alarmed,” Livingstone remembers the chief telling him, “of the singing of his people.”56

  Still, the celebrated notion of Western benefaction of civilization, culture, and development upon the malarious African masses continued for decades. “The peoples of Africa south of the Sahara are still in an underdeveloped state so far as degree of civilization and culture,” noted WHO’s deputy director-general at a 1950 meeting on malaria in equatorial Africa. “With untiring generosity and an unflagging desire for progress,” he went on, the “very highly developed countries” would contribute their “cultural and scientific resources,” to alleviate Africa’s malarial burden.57

  These attitudes derived from not just a different social experience of malaria and other diseases, of course, but also powerful political and economic interests. The British aimed to stamp out the African slave trade, which, besides being morally repugnant, posed an unwanted competitive threat to underemployed British workers.58 Britain wanted improved access to Africa’s natural resources, and hoped to establish political control, too. When the British denigrated Africans’ leaders, healers, and faith as chiefs, witch doctors, and devil worship, respectively, and touted Christian morals as the cure for a diseased continent, they had more than Africans’ spiritual uplift and public health in mind.

  Today, the economic and political context in which Western philanthropists and aid organizations offer help to the malarious masses has changed dramatically. The West’s modern fight against African malaria is aimed not at undermining African governments but at collaborating with them. Our economies still rely upon Africa’s natural resources, but our public health offerings are not based on speculative conjecture. Clinical trials have proven that antimalarial drugs, bed nets, and insecticides—unlike, say, the Ten Commandments—effectively alleviate malaria.

  And yet, muffled echoes of that earlier dissonance reverberate. The Western clinicians staffing the malaria research ward in Blantyre don’t seriously consider what the Malawian women all around them think about any of the proceedings. They can’t. The mother of a patient in a mysterious coma, according to Taylor, thinks that the problem with the child is the horrible antituberculosis drugs he was given. The mother of a spaced-out and seriously ill girl thinks her daughter has a bad headache. The clinicians don’t make much of this. They do what they think their patients need, despite their charges’ palpable skepticism. Their achievement, in lives saved, is orders of magnitude greater than Livingstone’s—it took five years for his mission in southern Malawi to convert just a single African59—and yet, the one-hundred-fifty-year-old gap between the world of the Western clinicians and that of the rural Africans they seek to help remains.

  We want to think of Africans as battling an enemy, malaria, so that we can help them fight this enemy. We come—like Livingstone, with his moral righteousness—bearing the best our society has to offer: our riches and our technology. But the fight outsiders would like to wage against malaria isn’t always the same one fought by those who live with the disease.

  In 2005, the international financing institution the Global Fund to Fight AIDS, Tuberculosis and Malaria agreed to provide $170 million to African governments to buy artemisinin combination drugs. Novartis had knocked down the price considerably and, expecting a flood of orders, kicked up production. By 2006, the company had manufactured thirty million treatments. But few African governments placed orders.60 “Everything is on the table!” exclaimed one frustrated Novartis rep. “Everything is there! The nets, the drugs, the money—but the orders aren’t coming in! I don’t know why!”61 In the end, despite the available funding, African governments ordered less than half of Novartis’s supply,62 and the company had to destroy millions of the arduously produced tablets, for the heat-sensitive lifesaving drug wouldn’t keep for long. It was a “waste,” one malariologist said sadly, a “tragedy.”63

  7. SCIENTIFIC SOLUTIONS

  Everything about the Harvard Malaria Initiative, housed deep inside Harvard University’s School of Public Health, conveys a single, resounding message: this is where very important, very well-funded activities occur. The building is towering, majestic, especially in contrast with the narrow, rutted Boston streets that stream traffic around it. Security is thick. To broach the building’s cavernous underground parking center, your name has to be on a guest list. And to exit the garage, you have to take the elevator, whose green Up button will remain impassive until it receives a signal from a special ID you must swipe through a sensor. There’s more security upstairs, and more IDs, and more swiping of barcodes, to pass through heavy glass doors in order to reach the Harvard Malaria Initiative’s labyrinthine realm.

  HMI is not just a center of malaria research, but an “epicenter” (as its website boasts), with funding support and corporate partnerships ranging from ExxonMobil to Genzyme. The floors gleam, the walls are lined with elegant blond-wood lockers and doors, and the labs buzz with purpose. Researchers here don’t need to budget, and in fact have no idea how much their work costs. “If we did the calculations, we’d probably all be flabbergasted,” one says.

  The two dozen or so graduate students and researchers who work here meet weekly to share their results, in a conference room warmed by an Oriental rug and stately glass-doored bookcases. A buffet table offers them neatly trimmed sandwiches and fruit salad. The meeting’s presentations are graceful and articulate, laced with insider jokes about a Harvard education, and received by colleagues with thoughtful, imaginative questions. The only thing that seems to rattle them is the scrutiny of their mentor, Dyann Wirth, the gray-haired, forbidding molecular biologist who presides over HMI, who subjects them to slow, careful, monotone questioning. The day I was there, a few technical queries from Wirth pushed one young presenter over the edge. She misspoke, caught herself, paused, said something, retracted it, and then looked at her audience and laughed nervously.

  This is a happy and well-fed gang, exuding optimism and ambition, the very picture of scientific leadership that Harvard self-consciously cultivates. No doubt each participant hopes to produce the kinds of data that will result in the uncorking of one of the champagne bottles poised at the top of one of the conference room bookcases.1

  HMI, like a handful of other similarly well-endowed malaria labs scattered across the globe, may seem like the venerable product of centuries of unremitting investment in malaria research, the way that, say, the Human Genome Project or the National Cancer Institute can be seen as the results of long-term investments in research on technology and cancer. Surely, the relentless burden of malaria requires an equally relentless scientific response, and one of the top universities in one of the world’s wealthiest countries would, as a matter of course, devote a generous portion of its public health research to a global health priority of malaria’s magnitude.

  Not so. Political and financial commitment to malaria research has been cyclical, sometimes spiking, often falling. Most of the malaria research centers I’ve visited look a lot more like the one at the Gorgas Institute in Panama, where malariologists toil in a cramped, dingy, and dimly lit corner of the building, mostly using slides, microscopes, and some glassware, the same tools scientists have been using for over a hundred years. High-tech malaria research centers like HMI are not high points on an upward slanting line; they’re crests on a wave, leading a wake of deep troughs.

  From its founding, malariology has been a fragile, wayward field, vulnerable to the enthusiasms and disregard of a fickle public. Overzealous researchers announce ballyhooed discoveries that turn out to be mistaken. Obscure, underfinanced scientists make breakthroughs that go all but ignored. Important insights, ones that could establish lasting and fruitful scientific paradigms for the field, are met with public skepticism, disinterest, or both.

  Malariology’s founding question revolved around etiology. What precisely caused malaria? Folk wisdom held that swamps and miasmas were the culprit, but in the late nineteenth century, the new science of bacteriology emerged, exposing for the first time the tiny world of disease-causing microbes. In 1882, the German bacteriologist Robert Koch found the microbe responsible for tuberculosis, and in 1884, the microbe for cholera. In rapid succession over the coming years, scientists fingered the culprits for a range of pestilences: typhoid, tetanus, plague. Similarly, the thinking went, there must be some microbial pest responsible for the age-old scourge of malaria.2

  Given the peculiar nature of malaria transmission, discerning the strange series of events leading to illness required interdisciplinary collaboration between naturalists, experimentalists, and clinicians. But an insecure scientific establishment, as status-conscious as a pack of wolves, made such collaborations difficult to sustain. Instead, prestige, resources, and influence flowed to the top dog, whether his story rang true or not. Not surprisingly, there were a few costly dead ends.

  The economic impediment imposed by malaria couldn’t have been clearer to the leaders of the new republic of Italy, founded in 1871. The “Roman fever” shaped the making of the Italian state just as it had the fall of the empire. First, Plasmodium claimed the beloved wife of the nationalist revolutionary Giuseppe Garibaldi. According to the subsequent legend, Garibaldi carried her in his arms across the Roman Campagna as she died of malaria, an act of romantic heroism lovingly recaptured by writers and painters.3 Then malaria killed the first prime minister of the United Kingdom of Italy, just three months after the new state was declared. With 1,500 of 2,200 railroad workers in Sicily sick with malaria, 10,000 of the standing army of 180,000 in hospital with fever,4 and the new Italian state hemorrhaging millions of its strongest and most hardworking young men to the Americas, Italian railroads, mining companies, and philanthropists begged Italian scientists to find a solution to the problem.5

  And so they did. One day in the late 1870s, two pathologists, Corrado Tommasi-Crudeli and Edwin Klebs, collected air and mud samples from the Roman Campagna. From the samples, they isolated ten-micromillimeter-long rods, which from the vantage point of their crude microscopes, seemed to develop into long threads. When injected into lab rabbits, the long threads soon had the bunnies heaving with chills and fever. Inside their slaughtered bodies, the pathologists found the ten-micromillimeter-long rods, once again.
