
The Coming Plague


by Laurie Garrett


  Multiply resistant organisms that carried plasmids with dozens of advantageous genes were also considered the result of chance mixtures of DNA pieces that combined and recombined over microbial generations.97 If randomness was at the root of microbial evolution, humanity needn’t fear unexpected changes in the rates of emergence of new mutants.

  Well before scientists appreciated the extreme mobility of discrete pieces of DNA, seminal laboratory experiments were done with Escherichia coli proving that mutant abilities to withstand attacks from either viruses or antibiotics preceded the appearances of those threats in the bacteria’s environment.98 Roughly one out of every 10 million E. coli in a petri dish might randomly mutate to be resistant to, say, penicillin. Then, if the drug were poured into the petri dish, 9,999,999 bacteria would die, but that one resistant E. coli would survive, and divide and multiply, passing its genes for resistance on to its progeny.
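
  The arithmetic behind that classical picture, mutation first and selection afterward, can be sketched in a few lines of code. The one-in-ten-million figure comes from the passage above; the little Python program itself is purely illustrative.

    import random

    MUTATION_RATE = 1e-7        # roughly one resistant mutant per ten million cells
    POPULATION = 10_000_000     # E. coli growing in the petri dish

    # Each cell independently carries, or lacks, a pre-existing resistance mutation
    # before any drug is present.
    resistant_before_drug = sum(
        1 for _ in range(POPULATION) if random.random() < MUTATION_RATE
    )

    # Adding the antibiotic kills every sensitive cell; only the rare resistant
    # cells remain, and all of their descendants inherit the resistance gene.
    survivors = resistant_before_drug
    print(f"{POPULATION - survivors:,} bacteria die; {survivors} resistant cell(s) survive and regrow")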

  In 1988, however, John Cairns of the Harvard School of Public Health challenged that central dogma of biology.99 Using recombinant DNA techniques, his laboratory made a set of specific E. coli mutants that had unusual nutritional needs. They then altered the bacteria’s environments, making them deficient in chemicals the mutants couldn’t manufacture on their own. And they showed that the E. coli would specifically change two separate sets of genes to adapt to the situation and survive, doing so in far less time than random mutation would permit.

  “That such events ever occur seems almost unbelievable,” Cairns wrote, “but we have also to realize that what we are seeing probably gives us only a minimum estimate of the efficiency of the process, since in these cases the stimulus for change must fairly quickly disappear once a few mutant clones have been formed … . It is difficult to imagine how bacteria are able to solve complex problems like these—and do so without, at the same time, accumulating a large number of neutral and deleterious mutations—unless they have access to some reversible process of trial and error.”

  Cairns used computer metaphors to describe what he believed was going on in the microbial world. The essential genetic material that made an E. coli an E. coli was the organism’s hard disk. The bacteria had an almost endless number of ways to scan that basic disk, turning off and on various genetic programs and data bases. Plasmids and transposons were “drifting floppy disks,” carrying additional bits of genetic data and programming.

  There was a limit, Cairns argued, to how large any given organism’s hard disk could be. Furthermore, energy needs placed restrictions on how many genes could be expressed, or turned on, at any given time. Some genetic programs would remain silent most of the time, stored against emergencies in the bacteria’s data bank. Among those, he felt, were programs that actually ordered mutations, or changes, in elements of the basic hard disk. Since the bacteria couldn’t afford to contain enough DNA to carry programs in anticipation of every possible crisis, a direct mutation command was, Cairns argued, the next-best alternative.100
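
  Cairns’s computer metaphor maps naturally onto a small data structure. The sketch below, in Python and with invented gene and field names, restates only the metaphor, not any actual genetic mechanism.

    class Bacterium:
        """Toy model of Cairns's 'hard disk / floppy disk' metaphor."""

        def __init__(self, chromosome, plasmids=None, max_active=50):
            self.chromosome = set(chromosome)     # the fixed "hard disk" of core genes
            self.plasmids = list(plasmids or [])  # drifting "floppy disks" of extra genes
            self.max_active = max_active          # energy budget: programs running at once
            self.active = set()                   # genetic programs currently switched on

        def express(self, gene):
            """Switch a stored program on, if it exists and the energy budget allows."""
            if gene in self.chromosome or any(gene in p for p in self.plasmids):
                if len(self.active) < self.max_active:
                    self.active.add(gene)

        def silence(self, gene):
            """Switch a program back off, returning it to the silent data bank."""
            self.active.discard(gene)

        def acquire_plasmid(self, plasmid):
            """Pick up a drifting floppy disk carrying extra data and programming."""
            self.plasmids.append(set(plasmid))

    cell = Bacterium(chromosome={"core_metabolism", "lactose_program", "sos_repair"})
    cell.acquire_plasmid({"tetracycline_resistance"})
    cell.express("tetracycline_resistance")   # turned on only when it is needed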

  The British-born biologist was convinced that such mechanisms were at play in some cases of drug resistance, as well as microbial evasion of the human immune system. In laboratory experiments it was possible to induce lysis—or rupture—of bacterial cells and see the microbes’ DNA “hard disk” flood into the fluid petri dish. There, other healthy bacteria would absorb the roaming DNA. And if antibodies were added to the mixture the scavenger bacteria would use the newly absorbed DNA to make new proteins to coat their membranes. In this way, the bacteria would disguise themselves from the antibodies, successfully evading immune system attack.

  Even the scavenging activity was less random than it seemed. Studies by Rockefeller University’s Alexander Tomasz of Neisseria gonorrhoeae and Hemophilus influenzae showed that these organisms had special proteins on the outer surface of their cell walls. The proteins scanned passing DNA, looking for useful genetic sequences. When something good drifted past, the protein grabbed it and pulled the DNA into the bacterium. And the pneumococci, which absorbed any “promiscuous DNA,” as Tomasz called it, had a special internal enzyme system that scanned the scavenged genetic material and rejected useless chunks of DNA.

  There were, by 1992, several identified “mutator alleles” along the E. coli genome—sites in the hard disk that ordered neighboring programs to alter themselves. And under experimental conditions it was possible to see a sort of “trial and error” mechanism in play, in which the microbe rejected useless or harmful mutations, but placed beneficial mutations in its permanent bacterial hard disk.101
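
  The trial-and-error idea described above (propose a change, keep it only if it helps, otherwise reverse it) can be caricatured in a few lines of code. The target sequence and scoring rule below are invented solely to make the loop concrete; they stand in for whatever the environment happens to demand.

    import random

    def trial_and_error(genome, fitness, trials=1000):
        """Propose single-letter changes; commit a change only if it raises fitness.

        Useless or harmful mutations are simply not kept (the 'reversible' step);
        beneficial ones are written back to the permanent 'hard disk'.
        """
        alphabet = "ACGT"
        best = list(genome)
        for _ in range(trials):
            candidate = best[:]
            site = random.randrange(len(candidate))     # a mutator site picks a position
            candidate[site] = random.choice(alphabet)   # propose a point mutation
            if fitness("".join(candidate)) > fitness("".join(best)):
                best = candidate                        # keep the beneficial change
        return "".join(best)

    # Example: fitness is similarity to a sequence the environment "demands."
    target = "GATTACA"
    score = lambda genome: sum(a == b for a, b in zip(genome, target))
    print(trial_and_error("AAAAAAA", score))            # converges toward GATTACA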

  A number of stress proteins were discovered in microbes—proteins (or the genes that coded for them) that were activated when the cell was challenged by a range of threats: heat, fevers, some human hormones, arachidonic acid (an immune system activator), and a variety of human disease states. When activated, these proteins acted rapidly to protect vital biochemical functions inside the microbe. Termed “molecular chaperones,” the proteins guided fragile compounds through their duties. The stress proteins could be turned on and off experimentally by inflicting definable changes upon their environments. There was no clearer example of a microbe’s adaptation to its environment—adaptation that required genetic as well as chemical change.102

  Studies of vancomycin resistance in Staphylococcus aureus strains found in a handful of European clinical settings revealed that seven separate genes were required to render the bacteria invulnerable to the drug. The seven genes prompted one simple alteration in the chemistry of the microbe’s cell wall, replacing an amide bond in one of the wall’s structural building blocks with an ester one. The amide bond was the target for vancomycin.

  Here was the amazing thing: those seven resistance genes were switched on only when vancomycin was in the bacteria’s environment. How the bacteria knew of the threat’s presence was an utter mystery.103

  Researchers noted that many extremely divergent microbial species shared genetic signaling sites, called operons, that with very minor mutation conferred multiple antibiotic resistances on the organisms. For example, seven very different microbes (E. coli, Salmonella, Shigella, Klebsiella, Citrobacter, Hafnia, and Enterobacter) naturally shared an operon which, with a single point mutation, made the organisms resistant to tetracycline, chloramphenicol, norfloxacin, ampicillin, and quinolones.104 In Cairns’s terms, this implied that all seven bacterial species shared a few bytes of hard disk space that was specifically designed to undergo a single data-bit alteration when necessary to respond to an antibiotic threat.
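
  In code, that “single data-bit alteration” amounts to a stored program guarded by a one-character switch. The sequences below are invented for illustration; only the drug list is taken from the text.

    # One base change at a shared regulatory site flips a whole stored
    # multiple-resistance program on. Sequences are invented placeholders.
    WILD_TYPE_SITE = "ATGCCGTTA"
    MUTANT_SITE    = "ATGCCATTA"     # differs from wild type at a single position

    RESISTANCE_PROGRAM = ["tetracycline", "chloramphenicol", "norfloxacin",
                          "ampicillin", "quinolones"]

    def point_mutations(site):
        return sum(a != b for a, b in zip(site, WILD_TYPE_SITE))

    def active_resistances(site):
        # a single changed "bit" is enough to switch the stored program on
        return RESISTANCE_PROGRAM if point_mutations(site) == 1 else []

    print(active_resistances(WILD_TYPE_SITE))   # [] -- still drug-sensitive
    print(active_resistances(MUTANT_SITE))      # the full resistance list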

  Studies of various pathogenic E. coli strains showed that there was often a trade-off between genes for extreme virulence and those for antibiotic resistance. Rarely could the organisms carry enough genetic baggage to render them both highly lethal and resistant. Highly virulent strains didn’t usually need resistance genes, however, because they could produce disease—and reproduce themselves—so rapidly that Homo sapiens didn’t have the opportunity to make antibodies before the bacteria had accomplished their essential tasks of invasion, reproduction, and spread.105

  While infectious disease biologists debated questions of random versus directed mutations among the microbes, the overall evolutionary role of jumping genes was the subject of great debate among biologists of all stripes. Some scientists had, by the 1990s, come to believe that transposons and plasmids were a driving force—perhaps the driving force—of evolution, even in plants and animals. The grand biological soup of shifting genes, it was suggested, was constantly giving one creature the capabilities normally carried by another. Human beings, in fact, might be nothing more than four billion years of gene jumping.106

  But pure random chaos in such a mutation soup seemed terrifying. How could any species survive if its cells absorbed any chunk of DNA that came their way, no matter how dangerous it might be? Most random mutations were lethal, or at least deleterious, to the altered organism.

  A series of startling experiments performed in a variety of laboratories during the early 1990s significantly raised the stakes of that debate. Amber Beaudry and Gerald Joyce, of the Scripps Research Institute in southern California, succeeded in forcing a catalytic RNA molecule, called a ribozyme, to evolve in a test tube. Normally the ribozyme’s job was to make specific cuts and slices in the organism’s RNA. But Beaudry and Joyce showed that after ten generations of reproduction the ribozyme could mutate, becoming capable of chopping DNA as well.107

  Critics of Cairns’s experiments on bacteria and yeast grown under starvation conditions, which gave rise to directed mutations, charged that the British scientist’s conclusions were unjustified: even in the Cairns model, they said, the mutations could have been due to random events.108 The arguments heated up as researchers found evidence of seemingly strange behaviors in microbes. For example, some transposons seemed to be able to sense when it was a good time to pop out of bacterial DNA and go their separate ways in search of a safer genome. How did they “know” that the bacterium was under fatal attack? Or was it possible the transposons didn’t “know” anything and scientists were simply witnessing the results of successful, though utterly random, gene jumping?109 In fungi, it was noted, environmental stress could induce a process called “ripping,” in which a massive number of single point mutations were suddenly made. Again, was the fungus responding to a stress by mutating in a specific, directed manner, or was it simply randomly mutating at a feverish pace?110

  On an even more basic level, many scientists argued that utterly random mutation and absorption and use of mobile DNA would be prohibitively expensive for microbes. It cost chemical energy to scavenge plasmids and transposons, to sexually conjugate, or to move pieces of DNA around inside cells. It seemed inconceivable that stressed organisms, in particular, would waste energy soaking up all sorts of DNA completely at random. Several genes had to be switched on and membrane changes had to be made in order, for example, for E. coli to absorb useful antibiotic-resistance factors from another species, Bacteroides fragilis.111 And though such horizontal transfers of genes between entirely different species of organisms were costly, they clearly occurred, spreading advantageous traits for resistance and virulence among microbes.112 In some cases the plasmids themselves seemed to improve as they moved about between species, recombining and adding new pieces of DNA as they went.113 Chemicals such as anesthetics, detergents, and environmental carcinogens seemed, for example, to influence bacterial sexual conjugation.114

  In 1994 the Cairnsian view of directed mutation got a boost from experiments performed at Rockefeller University and the University of Alberta, Canada. Researchers first confirmed Cairns’s initial experiments, showing that there was a specialized pathway of mutations that was switched on during E. coli starvation. Further, they showed that genetic recombination and resultant adaptive mutation occurred in the absence of bacterial reproduction. In other words, bacteria altered themselves not just through a process of random, error-prone reproduction that eventually yielded a surviving strain—the classic Darwinian view. In addition, they changed themselves, in some concerted manner, without reproducing.115

  The differences in the Darwinian and Cairnsian views were not trivial. If, for example, an E. coli bacterium residing in the human gut were suddenly exposed to a flood of tetracycline, would it occasionally mutate and perhaps become resistant after generations of bacterial reproduction? Or could it acquire instant resistance via some directed recombination or transposon mechanism?
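
  The practical gap between the two pictures is easy to dramatize with a toy calculation. Every rate below is an invented placeholder, chosen only to show how differently the two mechanisms would be expected to scale, not to reflect measured values.

    import random

    def darwinian_wait(per_division_rate=1e-9, population=1_000_000):
        """Classic view: resistance arises only as a rare copying error during reproduction."""
        generations = 0
        while True:
            generations += 1
            # chance that at least one division this generation yields a resistant mutant
            if random.random() < 1 - (1 - per_division_rate) ** population:
                return generations

    def cairnsian_wait(switch_probability_per_hour=0.5):
        """Cairnsian view: exposure itself switches on a mutation or recombination
        pathway, so a resistant variant can appear without rounds of reproduction."""
        hours = 0
        while True:
            hours += 1
            if random.random() < switch_probability_per_hour:
                return hours

    print("generations of reproduction needed (random mutation):", darwinian_wait())
    print("hours of exposure needed (directed change):", cairnsian_wait())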

  As issues of emerging diseases drew greater attention within the scientific community, theoretical debates centered on key questions: How likely was it that a previously unknown microbe would suddenly appear out of some stressed ecosphere? What were the odds that a fundamentally new pathogenic organism would emerge, the result either of recombination among other microbes or of large-scale mutation? Was it likely that old, well-understood microbes might successfully mutate into more dangerous forms? The first two questions were the subjects of mathematical models and extensive theoretical discussion, though the number of unknowns involved in such computations was enormous and significantly impeded conclusive analysis. Most scientists involved in such exercises felt that further basic research on microbial ecology and human behavior was needed in order to obtain enough data points to solve these quandaries.116

  As to the question of virulence, it was considered axiomatic that all pathogenic microbes would seek a state of moderate virulence in which they didn’t kill off their unwitting hosts too rapidly, giving themselves plenty of time to reproduce many times over and spread to other would-be hosts.117 Over time, even a rapid killer such as the 1918–19 Swine Flu would evolve toward lower virulence. Or so it was thought.

  But in the 1990s the world saw two viral cousins take off on very different virulence pathways. HIV-2 in West Africa became markedly less virulent between 1981 and 1993, infecting fewer people (despite the lack of safe sex practices) and possibly causing less severe disease in those it did infect.118 In contrast, over the same period there emerged strains of HIV-1 that seemed to be more transmissible and to cause more rapid disease. Thus, tendencies toward both less and greater virulence seemed to be occurring simultaneously in the global AIDS epidemic.

  Max Essex, Phyllis Kanki, and Souleymane MBoup studied HIV-2 closely and felt that there were inherent differences in the two species of AIDS viruses that could explain their opposite tendencies in virulence. Kevin DeCock felt, on the basis of his studies in Côte d’Ivoire, that HIV-2 was less transmissible than HIV-1, and probably always had been.

  Biology theorist Paul Ewald of Amherst College in Massachusetts believed HIV-1 was also becoming less virulent. He argued that Kaposi’s sarcoma, which was primarily seen among gay men with AIDS, was caused by a more virulent form of the virus that existed during early years of the epidemic. In Australia, Kaposi’s sarcoma and AIDS deaths had declined markedly over the course of the epidemic, due, Ewald thought, to a shift toward less virulent HIV-1 strains.119 But Australia’s situation was not mirrored in the rest of the world in 1994: globally HIV-1 was spreading at an extraordinary pace, and strains of the virus had recently emerged that seemed to be especially adapted to rapid heterosexual or intravenous transmission. A Ugandan strain surfaced sometime in 1992 that appeared to cause full disease within less than twelve months after the time of infection.120

  On the basis of mathematical models, British researchers predicted that HIV-1 would continue its trend toward greater virulence so long as the rates of multiple partner sexual activity remained high in a given area. As sexual activity declined, or as it became more monogamous, the rates of successful mutation, the number of quasispecies, and the virulence of HIV-1 would decrease.121 And on that one point Ewald agreed: namely, that multiple partner sex was the key to virulence for sexually transmissible microbes.122

  At the root of much of the new thinking about virulence lay a key assumption: that microbes would be extremely virulent if long-term survival of the host wasn’t important for the spread and survival of the microbial species.123 If host population density increased, the microbes could afford to become more virulent, as they were guaranteed greater exposure to secondary and tertiary victims.

  That theoretical view received some experimental support in 1993 when Allen Herre, of the Smithsonian Tropical Research Institute in Panama, made a startling observation on the relationship between fig tree wasps and the minute roundworms that parasitized the insects. After ten years of observation and manipulation, Herre concluded that the worms became more virulent when the size of the wasp population, and the number of broods occupying any given fig tree niche, grew. When population size was low, the parasites were of low virulence and were passed from female wasps to their offspring via infected eggs laid in the figs. When the wasp population size swelled, and various broods intermingled, the parasites spread horizontally, from wasp to wasp. This allowed the parasites to become more virulent and, among other things, to destroy the insects’ eggs. The difference could be seen in the paradoxical observation that the figs might be healthier, and suffer less wasp larvae infestation, at times when the adult wasp population was at its peak.124
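
  One crude way to capture both the density argument and Herre’s vertical-versus-horizontal contrast is a toy fitness calculation. The functional forms and every number in it are assumptions chosen for illustration, not figures from the studies cited.

    import math

    # Payoff from vertical transmission (through the host's own offspring) falls as
    # virulence rises, while payoff from horizontal spread grows with host density
    # but collapses if the host is killed too quickly.

    def fitness(virulence, host_density, vertical_weight=3.0, half_saturation=0.5,
                background_host_death=0.2):
        vertical = vertical_weight * math.exp(-virulence)    # needs a living, breeding host
        horizontal = (host_density * virulence / (virulence + half_saturation)
                      / (background_host_death + virulence)) # spread vs. killing the host
        return vertical + horizontal

    def favored_virulence(host_density):
        grid = [i / 100 for i in range(0, 201)]              # scan virulence from 0.00 to 2.00
        return max(grid, key=lambda v: fitness(v, host_density))

    for density in (0.5, 5.0, 50.0):
        print(f"host density {density:5.1f} -> favored virulence {favored_virulence(density):.2f}")

  In this sketch the favored virulence stays near zero while spread depends on the host’s own offspring, and climbs as crowding makes direct host-to-host spread the better bet, which is the pattern Herre saw in the fig wasps.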

  At Harvard Medical School, John Mekalanos studied a host of known virulence factors and developed a technique for teasing out unknown bacterial virulence genes. He concluded that many microbes stored virulence factors, just as they did resistance genes, on plasmids and transposons, snapping them up when conditions were ripe for all-out activity, and discarding them as excess baggage when the time was right. Such virulence factors could be shared across microbial species.

  Things that seemed to turn on known virulence factors included calcium fluxes, warmer temperatures (98.6°F inside a human body versus an external 60°F), the presence of iron, and a number of key chemicals.125 But Mekalanos also showed that for every known virulence factor in a given microbe there were dozens awaiting discovery. What mechanisms might switch those genes on, or cause them to mutate, weren’t known.

  Mekalanos disagreed with Ewald’s theory that virulence was tightly linked to transmissibility. There were exceptions. For example, a huge dose of cholera vibrio was needed to cause a human infection—on the order of one million. In contrast, Shigella could cause infection and disease with fewer than a hundred bacteria. Nevertheless, cholera was far more lethal than shigellosis.

  “It’s more complicated than mere transmissibility,” Mekalanos said. “Microorganisms respond to a more complex array of pressures that decide levels of virulence.”

 
