The Viral Storm
The finding provided both a sense of elation and fear. Most virologists would be lying if they said they didn’t enjoy finding something completely new. It had taken us years of hard work to line up the funding, find the local collaborating scientists who knew how to accomplish the work, set up a lab in central Africa, establish village outreach, collect specimens, store and ship them through the complexities of international agreements, and conduct the complicated laboratory work necessary to find an actual virus. The results showed that our system worked and that we were right in guessing that high levels of exposure to animals would lead to infection with novel viruses. Yet, the first evidence that new retroviruses were moving into humans also suggested that people’s faith in the existing public health structures—that they would inform us when novel viruses were moving into humans—was misguided. We were only beginning to see just how misguided it was.
In the following year, we went on to study yet another group of retroviruses, the T-lymphotropic viruses (TLVs). SFV was a virus with no real human precedents. Before our work, only a handful of laboratory workers had been infected, so how much the virus was likely to spread and cause disease—and its potential to become a pandemic—was unclear. Not so for the TLVs. It’s long been known that humans around the world are infected with two different varieties of TLV—HTLV-1 and HTLV-2; in fact, some twenty million people have these viruses. While some individuals can be infected without disease, many get sick from illnesses ranging from leukemia to paralysis. These viruses have pandemic potential. Clearly, if completely novel TLVs were moving from animals to humans, public health officials should know about it. Our results from SFV suggested this was a real possibility.
Going into the study, Bill and I knew that each of the two varieties of human TLV came from primates—just as HIV had. We also knew that another group of TLVs existed in primates that hadn’t yet been found in humans—the Simian T-lymphotropic Virus 3, known as STLV-3—so we began there. We screened the samples carefully, and as predicted, we found it—a virus infecting human hunters that was clearly unlike HTLV-1 and HTLV-2 and fell squarely within the STLV-3 group. This was an important scientific finding for us. STLV-3 had the potential to cross into humans and was on the move. Even more surprising was an entirely new human TLV found in a single individual from eastern Cameroon—a virus we called HTLV-4.
The combination of finding a number of new SFVs in people exposed to primate bushmeat in central Africa and two entirely novel TLVs in the same population changed the way that we thought about our work. While it was theoretically clear that people exposed to a wide range of wildlife would acquire microbes from these animals, we didn’t know at the start whether monitoring those populations was practical or what such a system would look like. As we began the long and plodding work to determine the extent to which the new SFVs and TLVs were spreading and causing disease (work that continues to this day), our thinking opened up. We began to seriously consider that monitoring people highly exposed to wildlife could be a globally deployed system to capture viral chatter.
* * *
In 2005 I took a long shot. I applied to an unusual program sponsored by the National Institutes of Health (NIH), the largest government funder of biomedical research in the world. The NIH had supported my work in the past, but the world-class institution didn’t perfectly match the work I hoped to do in the future. While the NIH has a broad-ranging program, it does not distribute its resources evenly. The NIH specializes in funding laboratory research rather than field research. It focuses its energy largely on research in more reductionist cell biology—work that focuses on very clear hypotheses that provide very clear yes or no answers. A program to spearhead a brand-new global monitoring effort to chart viral chatter and control pandemics was not something that would normally be supported. Yet in 2004 the NIH began a completely new program—the NIH Director’s Pioneer Award Program—aimed at sponsoring innovative research not normally supported by the NIH mission. The program gave grantees $2.5 million and five years to do largely whatever they felt was necessary to advance their scientific objectives. In the fall of 2005 I was among the fortunate individuals to get the award.
At this point, the pieces were beginning to fall into place. Certainly $2.5 million was nowhere near what would be needed to roll out a global monitoring system, but it was a good start. It allowed me to begin truly thinking about which key viral hotspots around the world needed the most urgent monitoring. Some key regions came to mind right away. The work with Jared Diamond and Claire Panosian had shown that Africa and Asia provided the lion’s share of our major infectious diseases. Those would be the places to start.
In the coming years, along with my team and a stunning range of local collaborators, I would take the model we’d developed in Cameroon and begin to deploy it in a number of other countries in central Africa. With the help of dedicated field scientists like Corina Monagin, who has become expert at making field sites in sensitive and difficult areas function, I’d renew collaborations from my years in Malaysia and begin to work with new sets of colleagues to establish programs in China and Southeast Asia. We’d set up the beginnings of a system to capture global viral chatter. Along with a growing number of colleagues worldwide, we would push ourselves to ask how we could best find new viruses. How could we capture a much higher percentage of the new viruses killing humans and infecting animals?
In the coming chapters, we’ll explore the results of this work. I’ll also discuss some of the cutting-edge tools employed to improve our ability to detect pandemics before they spread. While the threats associated with pandemics are large and growing, so too are the approaches and technologies to address them.
10
MICROBE FORECASTING
It was a large city. And it was hit hard. The first cases emerged in late August, and the victims suffered terribly. The earliest symptoms were profuse diarrhea and vomiting. They experienced severe dehydration, increased heart rate, muscle cramps, restlessness, severe thirst, and the loss of skin elasticity. Some of the cases progressed to kidney failure, while others led to coma or shock. Many of those who came down with the disease died.
Then on the night of August 31, the outbreak truly broke. Over the next three days, 127 people in a single neighborhood died. And by September 10 the number of fatalities would reach 500. The epidemic seemed to spare no one. Children and adults alike were killed. Few families did not have at least one member who came down with the disease.
The epidemic led to intense panic. Within a week, three-quarters of the neighborhood’s residents fled. Stores closed. Homes were locked. And you could walk down a formerly bustling urban street without seeing a single person.
Early in the outbreak, a forty-year-old epidemiologist began an investigation to determine its source. He consulted community leaders, methodically interviewed families of the victims, and made careful maps of every single case. Following his hunch about a waterborne disease, he studied the sources of the community’s water and determined that it came from only one of two urban water utilities. He conducted microscopic and chemical analyses of specimens from the water system, which proved inconclusive.
In his report to the responsible officials, he presented his analysis and concluded that contaminated water was to blame. Despite the lack of definitive results from the analyses, the mapping of cases strongly supported his conclusion that one particular water outlet was the source of the outbreak. He recommended shutting down the water supply, and the officials agreed. And while the outbreak may have already been in decline because of the mass exodus, that investigation and water closure proved pivotal.
* * *
What was unusual about this outbreak was not the procedural investigation that followed. Modern epidemiologists in countries throughout the world conduct exactly this kind of investigation regularly. They enlist the help of local leaders, study the distribution of cases, conduct analyses on potential sources, and then often argue with officials as to the best course of action. What was unusual was that the outbreak was in 1854—before the field of epidemiology existed.
As you may have guessed, the investigator responsible for cracking the outbreak was none other than John Snow, the now famous London physician considered one of the founders of contemporary epidemiology. The culprit was, of course, the bacterium Vibrio cholerae, the cause of cholera. By finding that water was the source rather than “foul air,” Snow contributed to the modern germ theory of infectious diseases—that communicable diseases are caused by microbes. To this day, you can see a replica of the famous Broad Street pump that Snow identified as the source of the 1854 outbreak, in Soho, London.
It seems intuitive to us today, but the way that Snow used interviews, case identification, and mapping to chart the origin of the Broad Street cholera outbreak of 1854 was revolutionary in its time. While maps had certainly been used extensively prior to 1854, the map he made of Soho is considered the first of its kind, not only in epidemiology but also in cartography. He was the first to utilize maps to analyze geographically related events to draw a conclusion about causality—namely, that the Broad Street pump was the source of the outbreak. By doing so he has been credited with using the first geographic information system, or GIS, a now commonly used cartographic system for capturing and analyzing geographic information.
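The core of Snow’s reasoning—plot each case, ask which water source it sits closest to, and see where the cases pile up—is simple enough to sketch in a few lines of code. The coordinates and pump names below are invented for illustration; the real 1854 data were street addresses in Soho.

```python
from math import hypot

# Hypothetical locations (arbitrary map units) for two pumps and a few cases.
pumps = {"Broad Street": (0.0, 0.0), "Rupert Street": (5.0, 5.0)}
cases = [(0.5, 0.2), (1.0, -0.3), (0.2, 0.8), (4.5, 5.1), (0.1, 0.4)]

def nearest_pump(case):
    """Return the name of the pump closest to a case location."""
    return min(pumps, key=lambda name: hypot(case[0] - pumps[name][0],
                                             case[1] - pumps[name][1]))

# Tally cases by their nearest water source, as Snow's map did visually.
counts = {}
for case in cases:
    pump = nearest_pump(case)
    counts[pump] = counts.get(pump, 0) + 1

print(counts)  # cases cluster around one pump
```

A modern GIS does the same thing at scale, layering case locations, water networks, and other data onto a shared map rather than tallying by hand.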
* * *
In contemporary GIS, layers of information are added to maps like Snow’s to provide depth of geographic information and to suggest patterns of causality. While Snow’s map included streets, homes, locations of illness and water sources, a contemporary version could include many more layers—genetic information from cholera specimens collected in different locations, dimensions of time that track changes spatially with an added weather layer or social connections between the individuals in the various homes.
Modern GIS is among a range of contemporary tools that are radically changing the way that we investigate outbreaks and understand the transmission of diseases. When used in a coordinated and comprehensive way, these tools have the potential to fundamentally change the way that we monitor for outbreaks and stop them in their tracks.
We now have multiple scientific and technical advantages that Snow lacked in the mid-nineteenth century. Among the most profound is that we have significantly improved our capacity to catch the bugs we’re chasing and to document their diversity. The revolution in molecular biology, in particular the techniques for capturing and sequencing genetic information, has profoundly changed our ability to identify the microbes that surround us.
The map of London used by John Snow to find the source of the cholera outbreak.
Miraculous but now standard techniques like the polymerase chain reaction (PCR), which resulted in the Nobel Prize for its discoverer Kary Mullis, allow us to snip out tiny pieces of genetic information from microbes and create billions of identical copies, whose sequences can then be read and sorted out according to the family of microbes to which they belong. Yet standard PCR requires that you know what you’re looking for. If, for example, we want to find an unknown malaria parasite, we can use PCR designed to identify malaria-specific sequence, since all malaria parasites have genetic regions that look similar enough to each other. But what if we don’t know what we’re looking for?
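The limitation that PCR only finds what you already know to look for can be illustrated with a toy model. The sequences below are invented strings, not real genes, and this sketch glosses over strand complementarity and the amplification cycles themselves—it only captures the priming step: no matching primer sites, no product.

```python
# Toy model of PCR primer specificity: amplification happens only if the
# primers you designed actually occur in the template. Sequences are made up.

def find_amplicon(template, forward_primer, reverse_primer):
    """Return the region bounded by the primers, or None if either is absent."""
    start = template.find(forward_primer)
    end = template.find(reverse_primer)
    if start == -1 or end == -1 or end <= start:
        return None  # no priming sites: the reaction yields nothing
    return template[start:end + len(reverse_primer)]

template = "GGCATTACGTTAGCCGGATCCAATGCA"
print(find_amplicon(template, "ATTAC", "GGATCC"))   # known target: amplified
print(find_amplicon(template, "TTTTTT", "GGATCC"))  # unknown virus: None
```

A truly novel virus is like that second call: its sequence matches no primer we thought to design, so targeted PCR simply returns nothing.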
Dr. John Snow, 1856. (Wellcome Library, London)
In the early 2000s, intent on finding unknown microbes, a bright young molecular biologist, Joe DeRisi, and his colleagues adapted an interesting technique developed by DeRisi’s doctoral adviser, Pat Brown, a Stanford biochemist. The DNA microarray chip consisted of thousands of tiny bits of distinct artificial genetic sequence distributed in an orderly fashion across a small glass slide. Since genetic information sticks to its mirror image sequence, if you flush solution from a specimen containing genetic information across a slide like this, the bits that match the designed sequences on the slide will fuse. You can then determine what was in the specimen by determining which of the sequences on the slide trapped their natural siblings. The technique had already provided thousands of scientists with a new way of characterizing the bits of genetic information that flow through living systems by the time DeRisi got his hands on it.
Prior to DeRisi’s innovation, the microarray chips had been used primarily to help determine the internal workings of the genes of humans and animals, but DeRisi and his colleagues realized that the technique could be modified to create a powerful viral detection system. Instead of designing the chips with bits of artificial human genetic information, he and his colleagues designed chips with bits of viral genetic information. By carefully reviewing the scientific databases for genetic information on all of the viruses known to science, they crafted chips that had bits of genetic information from a whole range of viral families lined up in neat rows. If they introduced genetic information from a sick patient, and it contained a virus with a sequence similar to one on the chip, the sequence would be trapped and—bingo!—we’d know the bug we were dealing with.
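The hybridization trick at the heart of the chip—genetic information sticking to its mirror-image sequence—can be sketched as follows. The probe sequences, the fragment, and the family labels are all invented for illustration; real probes are much longer and a real chip carries thousands of them.

```python
# Sketch of viral-microarray hybridization: a sample fragment is "trapped"
# by a probe when it contains the probe's reverse complement.

COMPLEMENT = {"A": "T", "T": "A", "C": "G", "G": "C"}

def reverse_complement(seq):
    """The mirror-image sequence that base-pairs with seq."""
    return "".join(COMPLEMENT[b] for b in reversed(seq))

# Probes indexed by the viral family they were designed from (hypothetical).
probes = {
    "coronavirus-like": "ACGTTGCA",
    "arenavirus-like": "GGATACCA",
}

def hybridize(sample_fragment):
    """Return the families whose probes trap this fragment."""
    return [family for family, probe in probes.items()
            if reverse_complement(probe) in sample_fragment]

print(hybridize("TTTGCAACGTTTT"))  # fragment pairs with one probe
```

The flip side, discussed below, falls out of the same code: a fragment from a completely unknown viral family pairs with none of the probes and slides through undetected.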
Viral microarrays, as these specialized chips became known, have proliferated and spread to labs throughout the world. They’ve helped quickly identify the microbial villain responsible for new pandemics, like the coronavirus that causes SARS. Yet they are not perfect. These chips can only be made to capture viruses from families of viruses already known to science. If there are groups of viruses out there whose sequences we are completely unaware of, and there certainly are, then we have nothing with which to engineer the chips. Truly unknown viruses would slide right by.
* * *
Within the past few years, viral microarrays have been supplemented with a series of bold new genetic sequencing approaches. New machines churn out mammoth amounts of sequence data from specimens—amounts of sequence that previously would have been prohibitively expensive or time consuming. These machines are permitting an entirely new form of viral discovery.
Rather than look for particular bits of information, the approach is to take a specimen—say a drop of blood—and sequence every bit of genetic information it contains. Technically, it’s more complicated than that, but the result is similar to what you’d expect. We are approaching a moment when we will be able to read every single sequence within a given biological specimen. Every bit of DNA or RNA from the host specimen, and critically, every bit from the microbes that are riding along with them.
One of the central problems becomes the bioinformatics—how to sort through all of the billions of bits of information that are produced by these incredible technologies. Fortunately, in an enlightened move, scientists at the NIH picked up and nurtured an electronic repository of sequencing information developed at the famed Los Alamos National Laboratory and now called GenBank. Since scientists are required by funding sources and journals to submit sequences to GenBank prior to submitting academic papers, we collectively contribute billions of bits of genetic information each year. GenBank right now holds over a hundred billion bits of sequence information. And it’s growing rapidly. When a new sequence is identified from a sequencing run, it can be rapidly compared electronically to what’s in GenBank to see if there’s a match.
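The matching step—comparing a new sequence against a reference database—can be sketched with a shared-subsequence score, loosely in the spirit of how BLAST-style tools seed their alignments. The database entries, names, and read below are invented; real comparisons run against GenBank’s billions of bases with far more sophisticated scoring.

```python
# Toy database lookup: score each reference by the number of short
# subsequences (k-mers) it shares with a newly sequenced read.

def kmers(seq, k=4):
    """All length-k subsequences of seq."""
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

database = {
    "known arenavirus": "ATGGCTAGCTAGGATCGA",
    "known coronavirus": "TTGACCGGTTAACCGGTA",
}

def best_match(read, k=4):
    """Return (name, shared k-mer count) of the closest database entry."""
    read_kmers = kmers(read, k)
    scores = {name: len(read_kmers & kmers(seq, k))
              for name, seq in database.items()}
    name = max(scores, key=scores.get)
    return name, scores[name]

print(best_match("GCTAGCTAGGAT"))  # read overlaps the arenavirus entry
```

This is what makes the needle-in-a-haystack search tractable: each of the thousands of sequences from a run can be scored against the database electronically, and the handful with strong matches to a known viral family stand out.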
In late 2006 and early 2007 these techniques were used to good effect. In early December 2006 the organs of a patient who had died of a brain hemorrhage in Dandenong hospital in Australia were harvested for transplantation. A sixty-three-year-old grandmother received one of the kidneys, another unnamed recipient received the other kidney, and a sixty-four-year-old lecturer in a local university received the man’s liver. By early January all three had died.
The local hospital and collaborating labs looked for all of the usual suspects. They utilized PCR and tried to grow up the microbe on culture media. They even tried one of the viral microarrays, to no avail. A virus was only found when the specimen was subjected to massive sequencing. The team that found it, led by Ian Lipkin, a world-class laboratory virologist at Columbia University, had to sort through over a hundred thousand sequences to find the fourteen sequences belonging to the mystery virus. Truly a needle in a haystack! The mystery virus ended up being in a group of viruses called arenaviruses that often live in rodents. Without massive sequencing, the virus would not likely have been found.
* * *
But while identifying what’s actually in a small new outbreak is vital, it’s only the beginning. As we get better and better at understanding what’s out there, we will have to start asking a tougher question: where is it going? Will it become a pandemic?
There are three primary objectives to the emerging science of pandemic prevention:
1. We need to identify epidemics early.
2. We need to assess the probability that they will grow into pandemics.
3. We need to stop the deadly ones before they grow into pandemics.
The viral microarray and sequencing techniques give us a snapshot of what is causing an epidemic, but more is needed to assess the possibility that a new agent in a limited outbreak has the right stuff to go pandemic. This is exactly the objective of a new program being developed by DARPA, the U.S. Defense Advanced Research Projects Agency. DARPA has had a stunning impact on the contemporary world of technology, including sponsoring early research that has contributed in substantive ways to the development of modern computing, virtual reality, and the Internet itself.