The Coming Plague

by Laurie Garrett


  “And each time they sweep that broom,” Johnson realized, “they’re sending mouse urine-infected dust and crumbs drifting all about in the air.” Every time the families of San Joaquin assembled for breakfast, they shared virus-contaminated air. Johnson also decided that he, Ron, and Angel got sick as a result of eating contaminated food at the San Joaquin party.

  Researchers from the Rockefeller Foundation Laboratories in New York City and the University of Buenos Aires eventually reached a similar set of conclusions about the Argentine Junin virus. An Argentine team led by Dr. A. S. Parodi concluded that another species of wild Calomys mouse had been flushed out of its pampas habitat by post-World War II changes in local agricultural practices. Farmers had long had difficulty growing profitable crops of corn because short broad-leafed weeds had invaded the fields. After World War II, herbicides effectively eliminated these broad-leafed weeds and dramatically increased crop yields.6

  As harvest time approached, however, taller grasses that were not affected by herbicides would grow in the corn fields, thriving just when humans entered the fields to reap the corn. As it turned out, a fairly rare species of field mouse naturally subsisted on the seeds of these tall grasses. As the grasses proliferated, so did the mice, until the once-rare species became the dominant rodent of the region.

  The mouse, of course, carried Junin virus, the cause of Argentine hemorrhagic fever.7

  MacKenzie thought another Bolivian factor also played a role in the San Joaquin epidemic. On all his trips to Magdalena, Orobayaya, and San Joaquin, he was struck by the remarkable absence of cats in the villages. When he asked the people what had happened to the cats, he was told they all died.

  The feline die-off coincided with the rise in the mouse population, allowing the Calomys to take over the town without battling predatory cats. There were two theories about why the cats died: they were also killed by the virus, or the felines were DDT victims. Bolivia was in the midst of a massive DDT-spraying campaign to eliminate malaria, and the quantities used in the homes of these remote areas were often so great that all furniture and walls were coated with a thin white film of insecticide powder.

  MacKenzie did a simple experiment. He injected six cats with the virus and force-fed DDT to six others. The virus-treated cats were completely unaffected by the experiment, but the DDT-poisoned animals all died of symptoms identical to those seen among the domestic pets of San Joaquin.
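
  Crude as it was, the all-or-nothing outcome of that twelve-cat experiment was statistically persuasive. As a modern illustration (not anything the MARU team actually ran, and assuming Python with the scipy library is available), a Fisher exact test on the reported result shows how unlikely so clean a split would be by chance:

# Illustrative sketch only: Fisher exact test on MacKenzie's cat experiment,
# using the outcome described in the text (all 6 DDT-fed cats died;
# all 6 virus-injected cats survived). Assumes scipy is installed.
from scipy.stats import fisher_exact

#                 died  survived
table = [[6, 0],   # DDT-fed cats
         [0, 6]]   # virus-injected cats

odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
print(f"p = {p_value:.4f}")  # ~0.0022

  In other words, the odds that so lopsided a result would arise by chance are roughly two in a thousand, which is why even so small an experiment could be convincing.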

  Valverde was so impressed with MacKenzie’s hypothesis that he went on national radio to issue a call for donated cats. In June 1964 hundreds of cats were airlifted into San Joaquin, and the epidemic’s halt soon followed.

  Kuns, the ecologist of the group, wasn’t ready to buy either the cat die-off idea or Johnson’s notion of a mouse invasion.

  “It’s stupid! Absolutely stupid,” Kuns said of the cat/DDT link, noting that felines killed only the weak and sick members of mouse colonies and rarely had an enormous impact on the overall size of a rodent population. Nor was he at all convinced that people could get infected by eating food contaminated with mouse urine. To Johnson’s theory that three of the scientists got the disease as a result of feasting at the fiesta, he said, with a wink, “You ought to think about what you did after the fiesta, boys, not what you ate.”8

  Kuns believed the epidemic began when the virus itself changed and became more virulent. And he felt the exact mode of transmission of the disease from mouse to human was still unresolved.

  Kuns told the NIH that he wanted to put fluorescent chemicals in mouse food and then use ultraviolet lights to follow the animals’ urine trails in San Joaquin. That, he hoped, would answer the question by showing where the animals’ urine came into intimate contact with human noses and mouths, and where it might be inhaled. His hunch was that the rodents scampered around the villages while people slept, directly infecting the slumbering men, women, and children.

  But once the epidemic abated, the NIH withdrew all research funds and Kuns’s efforts came to a full stop.

  “Hardly anything has ever disappointed me more in all my thirty-nine years than having to pull out of here without finishing the job,” Kuns mumbled to himself as he packed up his microscope and thousands of animal samples. When a few cases of the disease appeared in San Joaquin a year later, Kuns told The Saturday Evening Post: “You might compare us to firemen. We’ve discovered the location of the blaze and we’ve put it out. But we still don’t know where or when it could start again.”9

  For his part, however, Johnson was quite pleased.

  By the end of 1964 Johnson was able to look over his recent accomplishments with great pride. Together with MacKenzie, Kuns, and Webb, he had solved an intriguing mystery, stopped an epidemic, published in prominent scientific journals, and organized a crackerjack virology laboratory in Panama that was prepared to tackle anything that might surface in the Americas. Furthermore, the NIH promoted him to director of the whole MARU facility.

  He had also found love, honor, and a mission in life.

  He married Patricia Ann Webb, was decorated with the Order of the Condor by the Bolivian government, and discovered his personal call of the wild. He had survived a near-death experience and then gone on to defeat the enemy on its own turf.

  “That’s enough for some people for a lifetime,” thought the thirty-five-year-old. “But that’s just the start for me.”

  His life dreams now changed forever, Johnson sought ways to combine science, clinical medicine, and good old-fashioned detective work. Wherever he went, whatever epidemics might come his way, he knew these were the skills he would use, and the challenges he would relish.

  From then on, Johnson stressed the need for calm in the face of epidemics, for reason, science, sound clinical training, and the ability to work with a team of diverse expertise. These were lessons passed on first to those who worked with Johnson in Panama and later to a whole generation of infectious disease “cowboys.” Over the next two decades Johnson and his “cowboys” would fight dozens of skirmishes and a few pitched battles with the microbes, always maintaining a healthy respect for both their microscopic enemies and the human bureaucracies, governments, and institutions whose rules Johnson would regularly defy.

  The war was on, and the battlefield was the entire planet.

  2

  Health Transition

  THE AGE OF OPTIMISM: SETTING OUT TO ERADICATE DISEASE

  Germs come by stealth

  And ruin health,

  So listen, pard,

  Just drop a card

  To a man who’ll clean up your yard

  And that will hit the old germs hard.

  —Dr. Almus Pickerbaugh,

  in Arrowsmith, Sinclair Lewis

  I

  For Western physicians, the 1950s and 1960s were a time of tremendous optimism. Nearly every week the medical establishment declared another “miracle breakthrough” in humanity’s war with infectious disease. Antibiotics, which had come into widespread use only in the early 1940s, were growing in number and potency, so much so that clinicians and scientists shrugged off bacterial diseases; in the industrialized world, former scourges such as Staphylococcus and tuberculosis had been deftly moved from the “extremely dangerous” column to that of “easily managed minor infections.” Medicine was viewed as a huge chart depicting disease incidence over time: by the twenty-first century every infectious disease on the chart would have hit zero. Few scientists or physicians of the day doubted that humanity would continue on its linear course of triumphs over the microbes.

  Dr. Jonas Salk’s 1955 mass experimental polio vaccination campaign was so successful that cases of the disease in Western Europe and North America plummeted from 76,000 in 1955 to fewer than 1,000 in 1967.1 The excitement engendered by that drama prompted optimistic declarations that the disease would soon be eradicated from the planet.

  Similar optimism enveloped discussion of nearly every infectious disease affecting human beings. In 1948, U.S. Secretary of State George C. Marshall declared at the Washington, D.C., gathering of the Fourth International Congress on Tropical Medicine and Malaria that the conquest of all infectious diseases was imminent.2 Through a combination of enhanced crop yields to provide adequate food for humanity and scientific breakthroughs in microbe control, Marshall predicted, all the earth’s microscopic scourges would be eliminated.

  By 1951 the World Health Organization was so optimistic that it declared that, through careful local management, Asian malaria could soon reach a stage wherein “malaria is no longer of major importance.”3 A key reason for the excitement was the discovery of DDT and the class of chemicals known as organochlorines, all of which possessed the remarkable capacity to kill mosquitoes and other insect pests on contact, and to go on killing, for months or perhaps years, any insects that might alight on pesticide-treated surfaces.

  In 1954 the Fourteenth Pan American Sanitary Conference resolved in its Santiago, Chile, meeting to eliminate malaria completely from the Western Hemisphere, and PAHO (Pan American Health Organization) was instructed to draw up an ambitious eradication plan. The following year the World Health Organization decided to eliminate all malaria on the planet. Few doubted that such a lofty goal was possible: nobody at the time could imagine a trend of worsening disease conditions; the arrow of history always pointed toward progress.

  Every problem seemed conquerable in the decade following World War II: humanity could reach the moon, bombs too terrifying to ever be used could create a balance of terror that would prevent all further worldwide wars, American and European agriculturalists could “green” the poor nations of the world and eliminate starvation, civil rights legislation could erase the scars of slavery and bring racial justice, democracy could shine in startling contrast to communism and provide a beacon to which the nations of the world would quickly flock, huge, gasoline-hungry cars cruised freshly paved highways, and their passengers dreamed of a New Tomorrow.

  From the capitalist world came thousands of zealous public health activists who rolled up their sleeves and dived like budding Dr. Pickerbaughs into amazingly complex health crises. Sinclair Lewis lambasted such zealous health optimism in Arrowsmith, creating the character of Almus Pickerbaugh, physician-congressman-poet, whose gems included:

  You can’t get health

  By a pussyfoot stealth,

  So let’s every health-booster

  Crow just like a rooster.

  Never mind the seemingly daunting obstacles presented by, for example, cholera control in India; all was possible in the Age of Boosterism.

  The notion developed of a Health Transition, as it was called. The concept was simple: as nations moved out of poverty and the basic food and housing needs of the populations were met, scientists could use the pharmaceutical and chemical tools at hand to wipe out parasites, bacteria, and viruses. What would remain were the slower chronic diseases that primarily struck in old age, particularly cancer and heart disease. Everybody would live longer, disease-free lives.

  Such glowing forecasts were not limited to the capitalist world. Soviet and Eastern bloc health officials presented ever-rosier medical statistics each year, suggesting that their societies were also well on their way to conquering infectious diseases. And Mao Zedong, leader of the nearly one-billion-strong Chinese nation, declared in 1963:

  The Four Seas are rising, clouds and waters raging,

  The Five Continents are rocking, wind and thunder roaring.

  Away with all pests!

  Our force is irresistible.4

  Throughout the 1950s and 1960s, the Chinese Communist Party waged a peasant-based war on infectious diseases, mobilizing millions of peasants to walk through irrigation ditches and pluck schistosome-carrying snails from the banks.5 According to British physician Joshua Horn, who fully embraced the campaign and Maoism, in 1965–66 virtually no new cases of schistosomiasis, a serious liver parasitic disease, occurred in China—a result, he claimed, of the Communist Party campaign.6

  Though the ideological frameworks differed markedly, both the capitalist and communist worlds were forecasting brighter futures in which there would be a chicken in every pot, a car in every garage, and a long, infectious-disease-free life ahead for every child. Both sides of the Iron Curtain agreed that mass mobilization of the global populace to fight disease would inevitably result in victory. Never mind in what rhetoric public health campaigns might be wrapped, humanity would triumph over the microbes.

  In September 1966 the U.S. Centers for Disease Control assessed the status of American health:

  The status of diseases may be classified as follows:

  1. Diseases eradicated within the United States (bubonic plague, malaria, smallpox, etc.)

  2. Diseases almost eradicated (typhoid, infantile paralysis, diphtheria, etc.)

  3. Diseases that still are health problems, although technology exists for effective control (syphilis, tuberculosis, uterine cervix cancer, injury, arthritis, breast cancer, gonorrhea, etc.)

  4. Diseases where technology is in early developmental stages or nonexistent—and where little capability exists for alleviating or preventing health impairment (leukemia and some other neoplasms, some respiratory diseases and strokes)7

  As the 1960s opened, the U.S. Department of Health, Education, and Welfare convened a team of medical experts to decide the future mission of the entire government public health effort. Praising the accomplishments of the 1950s, the advisory team declared that “science and technology have completely transformed man’s concepts of the universe, of his place in it, and of his own physiological and psychological systems. Man’s mastery over nature has been vastly extended, including his capacity to cope with diseases and other threats to human life and health.”8

  By 1967 U.S. Surgeon General William H. Stewart would be so utterly convinced of imminent success that he would tell a White House gathering of state and territorial health officers that it was time to close the book on infectious diseases and shift all national attention (and dollars) to what he termed “the New Dimensions” of health: chronic diseases.9

  “In the words of the song, ‘The fundamental things go by,’ polio and measles can be eradicated and should be eradicated,” Stewart would tell his exuberant audience. “Venereal disease and tuberculosis can be sharply reduced and should be sharply reduced. These are tasks that no one will perform for us. So long as a preventable disease remains, it must be prevented, and public health must be the primary force for prevention.”

  Not content to stop with the predicted eradication of all known infectious diseases, the optimists set out in search of rare and remote disease agents. Biology research stations were established throughout the Southern Hemisphere, staffed largely by scientists from the Northern Hemisphere. All sorts of agencies funded and administered these outposts, including the Rockefeller Foundation, agencies of the governments of France, the United States, Germany, and the United Kingdom, as well as a variety of small private interests.

  Johnson’s Panama Canal Zone laboratory was just such an outpost. The U.S. government alone operated twenty-eight overseas laboratories, and the Rockefeller Foundation’s Virus Program operated facilities in eight countries through which over sixty viruses would be discovered between 1951 and 1971.10

  But much of what these searching scientists were to find proved terrifying. As officials prepared to uncork celebratory champagne, Johnson and his colleagues were unlocking some of nature’s nastiest secrets.

  Boosters of the 1950s and early 1960s had some basis, born of ignorance, for their optimism: they knew comparatively little about genetics, microbial evolution, the human immune system, or disease ecology. Given the state of knowledge in the public health world of that day, it may have seemed appropriate to view infectious diseases in simple cause-and-effect terms. Seen in such a reductionist manner, problems and solutions appeared obvious and readily conquerable, bravado warranted.

  As early as the 1930s scientists guessed that the genetic traits of large creatures, such as plants, animals, and humans, were carried in packages called chromosomes. These structures, which, when examined through a microscope, resembled dark, squiggly worms, were inside the central core, or nucleus, of every cell in a plant or animal. By manipulating chromosomes in test tubes, scientists could change the ways cells looked or grew; exposing chromosomes to radiation, for example, could transform healthy tissue into cancer colonies.

  True, Gregor Mendel had shown in 1865 that some characteristics were passed on as dominant traits from one generation to another, while other genetic characteristics were recessive. But nobody knew exactly how all this worked: why blue-eyed parents had blue-eyed children, or how a bacterium could seem suddenly to develop the ability to withstand higher temperatures than those normally tolerated by its species.

  Until 1944 nobody knew what was responsible for this neat passage of genetic information, from the tiniest virus to the largest elephant. That year, Oswald Avery and his colleagues at the Rockefeller Institute in New York showed that if they destroyed a specific molecule found inside all living cells, the organisms could no longer pass on their genes.

  The molecule was called deoxyribonucleic acid, or DNA.

  In 1953 researcher Rosalind Franklin, working at King’s College in London, made the first X-ray photographs of DNA, showing that the molecules had a unique helical structure composed of various combinations of the same five key chemicals.

 
