The Coming Plague

by Laurie Garrett


  “You can’t expect physicians to be concerned about public health,” Mark Lappé had opined one sunny spring afternoon in his Berkeley office at the University of California. It was 1981 and Lappé’s book Germs That Won’t Die had just been released. No one had yet heard of AIDS or drug-resistant clinical viruses or chlorine-resistant Legionella.

  “It’s hard to put the large view into day-to-day medicine. And it’s a real tragedy. And you can’t sue a doctor for violating an ecosphere, but you can sue for failure to give an antibiotic that you think would have enhanced the possibility of patient survival. It’s a real dilemma,” Lappé had said.

  A decade before the resistance crisis was acknowledged by mainstream science, he said that medicine and public health were locked in a conflict over drug-induced emergence of new microbes—a conflict that couldn’t easily be resolved. It was the physicians’ job, Lappé said, to individuate decisions on a patient-by-patient basis. The mission of the doctor was to cure individual cases of disease.125 In contrast, public health’s mission required an ecological perspective on disease: individuals got lost in the tally of microbial versus human populations.

  When Lappé looked at American hospitals in 1980 he didn’t see the miracles of modern medicine—heart transplants, artificial knees, CT scans. Lappé saw disease, and microbes, and mutations.

  “It’s incredible,” Lappé said. “You can go into a hospital and you will have a four in a hundred chance of getting an infection you’ve never had before, while in that hospital. In some hospitals the odds are one in ten. What you will get in that hospital will be much worse than what you would have been contaminated with at home. They are the most tenacious organisms you can imagine. They can survive in the detergent. They can actually live on a bar of soap. These are organisms that are part of our endgame.”

  Decrying improper use of antibiotics as “experiments going on all the time in people, creating genuinely pathogenically new organisms,” Lappé occasionally lapsed into a grim global ecological description of the crisis—a perspective that critics charged in 1981 grossly exaggerated the scope of the problem:

  Unfortunately, we played a trick on the natural world by seizing control of these [natural] chemicals, making them more perfect in a way that has changed the whole microbial constitution of the developing countries. We have organisms now proliferating that never existed before in nature. We have selected them. We have organisms that probably caused a tenth of a percent of human disease in the past that now cause twenty, thirty percent of the disease that we’re seeing. We have changed the whole face of the earth by the use of antibiotics.

  By the 1990s, when public health authorities and physicians were nervously watching their antimicrobial tools become obsolete, Lappé’s book was out of print. But everything he had predicted in 1981 had, by 1991, transpired.

  III

  For developing countries, access to still-reliable antibiotics for treatment of everything from routine staph infections to tuberculosis and cholera had reached crisis proportions by the 1990s. In 1993 the World Bank estimated that the barest minimum health care package for poor countries required an annual per capita expenditure of $8.00. Yet most of the least developed countries couldn’t afford to spend more than $2.00 to $3.00 per person each year on total health care.126 With over 100,000 medicinal drugs marketed in the world (5,000 active ingredients), it was possible for government planners to lose sight of their highest-priority needs, purchasing nonessential agents rather than those necessary for their populations’ survival. And the scale of global disparity in drug access was staggering: the average Japanese citizen spent $412 in 1990 on pharmaceutical drugs; the typical American spent $191; in Mexico just $28 per year was spent; Kenyans spent less than $4.00 per year; and Bangladeshis and Mozambicans just $2.00 per year, on average.

  It was in the wealthy and medium-income countries where billions of dollars’ worth of antibiotics and antivirals were used and misused. And it was in the wealthy nations that resistant strains most commonly emerged. But it was the poor nations, unable to afford alternative drugs, that paid the highest price.

  “The development of new antibiotics is very costly,” wrote Burroughs-Wellcome researcher A. J. Slater, “and their provision to Third World countries alone can never be financially rewarding; furthermore, only about 20% of world-wide pharmaceutical sales are to Third World countries. The industry’s interest in developing drugs for exclusive or major use in such countries is declining.”127

  Some poor countries sought to offset rising drug costs and microbial resistance by developing their own pharmaceutical manufacturing and distribution capabilities. In the best-planned situations, the respective governments drew up a list of the hundred or so most essential drugs, decided which could (by virtue of unpatented status and ease of manufacture) be made in their countries, and then set out to make the products. Local manufacture might be carried out by a government-owned parastatal company, a private firm, or—most commonly—a local establishment that was in partnership with or a subsidiary of a major pharmaceutical multinational.

  Though such drug policies were strongly supported by all the relevant UN organizations and, eventually, the World Bank, they were considered direct threats to the stranglehold a relative handful of corporations had on the world’s drug market. The U.S.-based Pharmaceutical Manufacturers Association, which represented some sixty-five U.S.-headquartered drug and biotechnology companies and about thirty foreign-based multinationals, strongly opposed such policies. In general, the companies—all of which were North American, European, or Japanese—felt that local regulation, manufacturing, marketing restrictions, or advertising limitations infringed on their free market rights.128

  Given that these companies controlled the bulk of the raw materials required for drug manufacture, and purchase of such materials required hard currency (foreign exchange), most of the world’s poor nations were unable to actuate policies of local antibiotic production.129 At a time when all forms of bacteremia were on the rise in the poorest nations on earth—notably in sub-Saharan Africa130—the governments were least equipped to purchase already manufactured drugs or make their own.

  Not all the blame for the lack of effective, affordable antibiotics could be justifiably leveled at the multinational drug manufacturers: domestic problems in many poor nations were also at fault. Distribution of drugs inside many countries was nothing short of abominable. In developing countries, most of the essential pharmaceuticals never made their way out of the capital and the largest urban centers to the communities in need. On average, 60 to 70 percent of a poor country’s population made do with less than a third of the nation’s medicinal drug supply, according to the World Bank.

  Perhaps the classic case of the distribution crisis involved not an antibiotic but an antiparasite drug. During the early 1980s the U.S.-based multinational Merck & Company invented a drug called ivermectin that could cure the river blindness disease caused by a waterborne parasite, Onchocerca volvulus. About 120 million people lived in onchocerciasis-plagued areas, most of them in West Africa. And WHO estimated that at least 350,000 people were blind in 1988 as a result of the parasite’s damage to their eyes.

  It was, therefore, an extraordinary boon to the governments of the afflicted region and WHO when Merck issued its unprecedented announcement in 1987 that it would donate—free—ivermectin to WHO for distribution in the needy countries. No drug company had ever exhibited such generosity, and WHO immediately hailed Merck’s actions as a model for the entire pharmaceutical industry.

  But five years after the free ivermectin program began, fewer than 3 million of the estimated 120 million at risk for the disease had received the drug. Cost was not the issue. Infrastructural problems in transportation and distribution, military coups, local corruption,131 lack of primary health care infrastructures in rural areas, and other organizational obstacles forced WHO and Merck to privately admit in 1992 that the program to cure the world of river blindness might fail.132

  The World Bank and many independent economists argued that such problems would persist until developing countries instituted national health care financing policies133—a daunting vision given that the wealthiest nation in the world, the United States, only embarked on a course toward implementation of such a policy in 1994. The pharmaceutical industry argued that developing countries had proven woefully unable to produce quality medicinal drugs on an affordable, high-volume basis. Lack of skilled local personnel, overregulation and bureaucratization, corruption, and lack of hard currency for bulk purchase of supplies and raw materials were all given as reasons for developing country inadequacies. Restrictions on multinational access to local markets were doomed, the industry asserted, to exacerbate the situation by denying the populace needed drugs.134

  From the perspective of developing countries, the pharmaceutical industry and Western governments that acted in its support were solely concerned with the pursuit of profits, and would conduct any practice they saw fit to maintain their monopoly on the global medicinal drug market. Such practices, it was charged, included bribing local doctors and health officials, manipulating pricing structures to undermine local competitors, advertising nonessential drugs aggressively in urban areas, dumping poor-quality or banned drugs onto Third World markets, withholding raw materials and drugs during local epidemics, and declining foreign aid to countries whose drug policies were considered overly restrictive.135

  While charges and countercharges flew, the crisis in many parts of the world deepened. According to the World Bank, the world spent $330 billion in 1990 on pharmaceuticals, $44 billion of which went to developing countries. The majority of the world’s population in 1990 lacked access to effective, affordable antibiotics.

  In 1991, with the world facing a tuberculosis crisis, it was suddenly noted that the global supply of streptomycin was tapped out. The second-oldest antibiotic in commercial use was no longer manufactured by any company. Unpatented, cheap, and needed solely in developing countries, it offered no significant profit margin to potential manufacturers. When drug-resistant TB surfaced in major U.S. cities that year, the Food and Drug Administration would find itself in a mad scramble to entice drug companies back into the streptomycin-manufacturing business.

  IV

  It wasn’t just the bacteria and viruses that gained newfound powers of resistance during the last decades of the twentieth century.

  “It seems we have a much greater enemy in malaria now than we did just a few years ago,” Dr. Wen Kilama said. The director-general of Tanzania’s National Institute for Medical Research was frustrated and angry in 1986. He, and his predecessors, had meticulously followed all the malaria control advice meted out by experts who lived in wealthy, cold countries. But after decades of spending upward of 70 percent of its entire health budget annually on malaria control, Kilama had a worse problem on his hands in 1986 than had his predecessors in 1956.

  “More than ten percent of all hospital admissions are malaria,” Kilama said. “As are ten percent of all our outpatient visits. In terms of death, it is quite high, and it is apparent that malaria is much more severe now than before.”

  Ten years earlier the first cases of chloroquine-resistant Plasmodium falciparum parasites had emerged in Tanzania; by 1986 most of the nation’s malaria was resistant to the world’s most effective treatment. Like nearly every other adult in the nation, Kilama had suffered a childhood bout with malaria, fortunately in the days before chloroquine resistance surfaced. Natural immunity to malaria among survivors like Kilama was weak, and whenever he was under stress he would be laid up with malarial fevers.

  “It is a very unusual individual in this country who doesn’t have chronic malaria,” Kilama said.

  Though he was speaking of Tanzania, Kilama might as well have said the same of most of the nations of Africa, Indochina, the Indian subcontinent, the Amazon region of Latin America, much of Oceania, and southern China. Most of the world’s population in 1986 lived in or near areas of endemic malaria.

  Since the days when optimists had set out to defeat malaria, hoping to drive the parasites off the face of the earth, the global situation had worsened significantly. Indeed, far more people would die of malaria-associated ailments in 1990 than did in 1960.

  For example, the Pan American Health Organization and the Brazilian government had succeeded in bringing malaria cases in that country down to near-zero levels by 1960. In 1983 the country suffered 297,000 malaria hospitalizations; that figure had doubled by 1988. Despite widespread use of DDT and other pesticides, the Anopheles darlingi mosquitoes thrived in the Amazon, feeding on the hundreds of thousands of nonimmune city dwellers who were flooding the region in search of gold and precious gems. The situation was completely out of control.136 By 1989 Brazil accounted for 11 percent of the world’s non-African malaria cases.137

  A 1987 survey of malaria parasites extracted from the blood of nearly 200 Brazilian patients revealed that 84 percent of the Amazon isolates were chloroquine-resistant; 73 percent were resistant to amodiaquine; nearly all the isolates showed some level of resistance to Fansidar (sulfadoxine/pyrimethamine). Only one then-available drug remained effective against malaria in Brazil: mefloquine.138

  By 1990 more than 80 percent of the world’s malaria cases were African; 95 percent of all malarial deaths occurred on the African continent. Up to half a billion Africans suffered at least one serious malarial episode each year, and typically an individual received some 200–300 infective mosquito bites annually. Up to one million African children died each year of the disease.139 And all over the continent the key drugs were failing.

  The first reported cases of chloroquine-resistant malaria in Africa were among Caucasian tourists on safari in Tanzania and Kenya during 1978–79.140 As early as 1981 chloroquine’s efficacy was waning among Kenyan children living in highly malaria-endemic areas, and higher doses of the drug were necessary to reverse disease symptoms.141 Within two years, truly resistant parasites had emerged in Kenya, and laboratory tests showed that 65 percent of the P. falciparum parasites had some degree of chloroquine resistance.142

  By 1984 reports of people dying of malaria while on chloroquine, or failing to improve when taking the drug, were cropping up all over the African continent: from Malawi,143 Namibia,144 Zambia,145 Angola,146 South Africa,147 Mozambique, 148 and locations scattered in between. Public health planners watched nervously, wondering how long chloroquine—the best and most affordable of the antimalarials—would remain a useful drug.

  Kilama and his counterparts in other African nations tried mosquito control measures, but the insects quickly acquired their own resistance to the pesticides. They tried eliminating watery breeding sites for the mosquitoes, but, as Kilama put it, “what can you do when these creatures can breed thousands of offspring in a puddle the size of a hippo’s foot? During the rainy season there is absolutely nothing.”

  Kilama’s staff regularly tested children living in northern equatorial districts of Tanzania for chloroquine resistance, and watched in horror as the parasites’ sensitivity to the drug declined logarithmically between 1980 and 1986. Furthermore, isolated cases of mefloquine and pyrimethamine resistance were reported in the country.149

  The CDC developed a simple field test kit for drug resistance that was widely distributed in Africa in 1985. Immediately a picture of the resistance emergence patterns developed. The problem began along coastal areas of East Africa, particularly Zanzibar, Mombasa, and Dar es Salaam. In these areas two factors may have played a role: a highly mobile Asian population that traveled frequently to India and other regions of resistant malaria, and relatively high availability of chloroquine through both legal and black-market venues. From there, resistance spread along the equatorial travel routes connecting traders from Kenya, Tanzania, Malawi, Zambia, Zaire, Burundi, Rwanda, and Uganda—the same trade routes implicated in the spread of the region’s AIDS epidemic. The problem eventually spread outward, from Addis Ababa to Cape Town, from Senegal to Madagascar.150

  Studies of the newly emerging P. falciparum strains showed that the mutations involved in resistance, once present, were permanent features in the parasitic line. The resistance mechanisms involved several different genes: partial insensitivity could result from a single mutation, total resistance from two or more. Wherever the single-mutation somewhat insensitive strains emerged, fully resistant mutants soon followed.

  The mutants seemed to grow faster in laboratory cultures than did normal P. falciparum, indicating that they might have acquired some type of virulence advantage.

  And finally, resistance was cropping up not only in regions where chloroquine was heavily used but also among people who rarely took the drug. That implied that the mutation and emergence didn’t require heavy selection pressure. And it also posed serious questions about what policies governments should pursue to preserve the utility of the precious drug.151

  By 1990 chloroquine resistance was the rule rather than the exception in most malarial regions of Africa. In addition, physicians noticed that chloroquine-resistant strains of P. falciparum seemed somewhat insensitive to treatment with quinine or quinidine, probably because of the chemical similarities of the three drugs.152

 
