
The Signal and the Noise


by Nate Silver


  The press portrayed the mass vaccination program as a gamble.9 But Ford thought of it as a gamble between money and lives, and one that he was on the right side of. Overwhelming majorities in both houses of Congress approved his plans at a cost of $180 million.10

  By summer, however, there were serious doubts about the government’s plans. Although summer is the natural low season for the flu in the United States,11 it was winter in the Southern Hemisphere, when flu is normally at its peak. And nowhere, from Auckland to Argentina, were there any signs of H1N1; instead, the mild and common A/Victoria was the dominant strain again. Indeed, the roughly two hundred cases at Fort Dix remained the only confirmed cases of H1N1 anywhere in the world, and Private Lewis’s the only death. Criticism started to pour in from all quarters: from the assistant director of the CDC,12 the World Health Organization,13 the prestigious British medical journal The Lancet,14 and the editorial pages of the New York Times, which was already characterizing the H1N1 threat a “false alarm.”15 No other Western country had called for such drastic measures.

  Instead of admitting that they had overestimated the threat, the Ford administration doubled down, preparing a series of frightening public service announcements that ran in regular rotation on the nation’s television screens that fall.16 One mocked the naïveté of those who refused flu shots—“I’m the healthiest fifty-five-year-old you’ve ever seen—I play golf every weekend!” the balding everyman says, only to be shown on his deathbed moments later. Another featured a female narrator tracing the spread of the virus from one person to the next, dishing about it in breathy tones as though it were an STD—“Betty’s mother gave it to the cabdriver . . . and to one of the charming stewardesses . . . and then she gave it to her friend Dottie, who had a heart condition and died.”

  The campy commercials were intended to send a very serious message: Be afraid, be very afraid. Americans took the hint. Their fear, however, manifested itself as much toward the vaccine as toward the disease itself. Throughout American history, the notion of the government poking needles into everyone’s arm has always provoked more than its fair share of anxiety. But this time there was a more tangible basis for public doubt. In August of that year, under pressure from the drug companies, Congress and the White House had agreed to indemnify them from legal liability in the event of manufacturing defects. This was widely read as a vote of no-confidence; the vaccine looked as though it was being rushed out without adequate time for testing. Polls that summer showed that only about 50 percent of Americans planned to get vaccinated, far short of the government’s 80 percent goal.17

  The uproar did not hit a fever pitch until October, when the vaccination program began. On October 11, a report surfaced from Pittsburgh that three senior citizens had died shortly after receiving their flu shots; so had two elderly persons in Oklahoma City; so had another in Fort Lauderdale.18 There was no evidence that any of the deaths were linked to the vaccinations—elderly people die every day, after all.19 But between the anxiety about the government’s vaccination program and the media’s dubious understanding of statistics,20 every death of someone who’d gotten a flu shot became a cause for alarm. Even Walter Cronkite, the most trusted man in America—who had broken from his trademark austerity to admonish the media for its sensational handling of the story—could not calm the public down. Pittsburgh and many other cities shuttered their clinics.21

  By late fall, another problem had emerged, this one far more serious. About five hundred patients, after receiving their shots, had begun to exhibit the symptoms of a rare neurological condition known as Guillain–Barré syndrome, an autoimmune disorder that can cause paralysis. This time, the statistical evidence was far more convincing: the usual incidence of Guillain–Barré in the general population is only about one case per million persons.22 In contrast, the rate in the vaccinated population had been ten times that—five hundred cases out of the roughly fifty million people who had been administered the vaccine. Although scientists weren’t positive why the vaccines were causing Guillain–Barré, manufacturing defects triggered by the rush production schedule were a plausible culprit,23 and the consensus of the medical community24 was that the vaccine program should be shut down for good, which the government finally did on December 16.
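
  As a quick back-of-the-envelope check of that comparison, here is a minimal sketch in Python using the figures quoted above; the numbers are the text's own, and the code is purely illustrative.

```python
# Back-of-the-envelope check of the Guillain-Barre comparison in the text:
# a background rate of roughly one case per million versus five hundred cases
# among the roughly fifty million people who received the vaccine.
background_rate = 1 / 1_000_000      # ~1 case per million (as stated in the text)
vaccinated_cases = 500
vaccinated_people = 50_000_000

vaccinated_rate = vaccinated_cases / vaccinated_people
print(f"Rate among the vaccinated: {vaccinated_rate * 1_000_000:.0f} per million")
print(f"Relative to background: {vaccinated_rate / background_rate:.0f}x")
# -> about 10 cases per million, roughly ten times the background rate
```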

  In the end, the outbreak of H1N1 at Fort Dix had been completely isolated; there was never another confirmed case anywhere in the country.25 Meanwhile, flu deaths from the ordinary A/Victoria strain were slightly below average in the winter of 1976–77.26 It had been much ado about nothing.

  The swine flu fiasco—as it was soon dubbed—was a disaster on every level for President Ford, who lost his bid for another term to the Democrat Jimmy Carter that November.27 The drug makers had been absolved of any legal responsibility, leaving more than $2.6 billion in liability claims28 against the United States government. It seemed like every local paper had run a story about the poor waitress or schoolteacher who had done her duty and gotten the vaccine, only to have contracted Guillain–Barré. Within a couple of years, the number of Americans willing to take flu shots dwindled to only about one million,29 potentially putting the nation in grave danger had a severe strain hit in 1978 or 1979.30

  Ford’s handling of H1N1 was irresponsible on a number of levels. By invoking the likelihood of a 1918-type pandemic, he had gone against the advice of medical experts, who believed at the time that the chance of such a worst-case outcome was no higher than 35 percent and perhaps as low as 2 percent.31

  Still, it was not clear what had caused H1N1 to disappear just as suddenly as it had emerged. And predictions about H1N1 would fare little better when it came back some thirty-three years later. Scientists at first missed H1N1 when it reappeared in 2009. Then they substantially overestimated the threat it might pose once they detected it.

  A Sequel to the Swine Flu Fiasco?

  The influenza virus is perpetuated by birds—particularly wild seafaring birds like albatrosses, seagulls, ducks, swans, and geese, which carry its genes from one continent to another but rarely become sick from the disease. They pass it along to other species, especially pigs and domesticated fowl like chickens,32 which live in closer proximity to humans. Chickens can become ill from the flu, but they can usually cope with it well enough to survive and pass it along to their human keepers. Pigs are even better at this, because they are receptive to both human and avian viruses as well as their own, providing a vessel for different strains of the virus to mix and mutate together.33

  The perfect incubator for the swine flu, then, would be a region in which each of three conditions held:

  It would be a place where humans and pigs lived in close proximity—that is, somewhere where pork was a staple of the diet.

  It would be a place near the ocean where pigs and seafaring birds might intermingle.

  And it would probably be somewhere in the developing world, where poverty produced lower levels of hygiene and sanitation, allowing animal viruses to be transmitted to humans more easily.

  This mix almost perfectly describes the conditions found in East and Southeast Asian countries like China, Indonesia, Thailand, and Vietnam (China alone is home to about half the world’s pigs34). These countries are very often the source for the flu, both the annual strains and the more unusual varieties that can potentially become global pandemics.* So they have been the subject of most of the medical community’s attention, especially in recent years because of the fear over another strain of the virus. H5N1, better known as bird flu or avian flu, has been simmering for some years in East Asia and could be extremely deadly if it mutated in the wrong way.

  These circumstances are not exclusive to Asia, however. The Mexican state of Veracruz, for instance, provides similarly fertile conditions for the flu. Veracruz has a coastline on the Gulf of Mexico, and Mexico is a developing country with a culinary tradition that heavily features pork.35 It was in Veracruz—where very few scientists were looking for the flu36—that the 2009 outbreak of H1N1 began.37

  By the end of April 2009, scientists were bombarded with alarming statistics about the swine flu in Veracruz and other parts of Mexico. There were reports of about 1,900 cases of H1N1 in Mexico and some 150 deaths. The ratio of these two quantities is known as the case fatality rate and it was seemingly very high—about 8 percent of the people who had acquired the flu had apparently died from it, which exceeded the rate during the Spanish flu epidemic.38 Many of the dead, moreover, were relatively young and healthy adults, another characteristic of severe outbreaks. And the virus was clearly quite good at reproducing itself; cases had already been detected in Canada, Spain, Peru, the United Kingdom, Israel, New Zealand, Germany, the Netherlands, Switzerland, and Ireland, in addition to Mexico and the United States.39
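
  The arithmetic behind that alarming figure is simple enough; the following sketch, using the reported counts above, is purely illustrative.

```python
# A quick check of the case fatality rate described in the text, using the
# early figures reported out of Mexico in April 2009.
reported_cases = 1900
reported_deaths = 150

case_fatality_rate = reported_deaths / reported_cases
print(f"Apparent case fatality rate: {case_fatality_rate:.1%}")  # ~7.9%, i.e. about 8%
# As the chapter goes on to explain, the data available early in an outbreak
# is often dubious, which is exactly what makes a ratio like this misleading.
```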

  It suddenly appeared that H1N1—not H5N1—was the superbug that scientists had feared all along. Mexico City was essentially shut down; European countries warned their citizens against travel to either Mexico or the United States. Hong Kong and Singapore, notoriously jittery about flu pandemics, saw their stock markets plunge.40

  The initial worries soon subsided. Swine flu had indeed spread extremely rapidly in the United States—from twenty confirmed cases on April 26 to 2,618 some fifteen days later.41 But most cases were surprisingly mild, with just three deaths confirmed in the United States, a fatality rate comparable to the seasonal flu. Just a week after the swine flu had seemed to have boundless destructive potential, the CDC recommended that closed schools be reopened.

  The disease had continued to spread across the globe, however, and by June 2009 the WHO had declared it a level 6 pandemic, its highest classification. Scientists feared the disease might follow the progression of the Spanish flu of 1918, which had initially been fairly mild, but which came back in much deadlier second and third waves (figure 7-1). By August, the mood had again grown more pessimistic, with U.S. authorities describing a “plausible scenario” in which as much as half the population might be infected by swine flu and as many as 90,000 Americans might die.42

  FIGURE 7-1: DEATH RATE FROM 1918–19 H1N1 OUTBREAK

  Those predictions also proved to be unwarranted, however. Eventually, the government reported that a total of about fifty-five million Americans had become infected with H1N1 in 2009—about one sixth of the U.S. population rather than one half—and 11,000 had died from it.43 Rather than being an unusually severe strain of the virus, H1N1 had in fact been exceptionally mild, with a fatality rate of just 0.02 percent. Indeed, there were slightly fewer deaths from the flu in 2009–10 than in a typical season.44 It hadn’t quite been the epic embarrassment of 1976, but there had been failures of prediction from start to finish.

  There are no guarantees that flu predictions will do better the next time around. In fact, the flu and other infectious diseases have several properties that make them intrinsically very challenging to predict.

  The Dangers of Extrapolation

  Extrapolation is a very basic method of prediction—usually, much too basic. It simply involves the assumption that the current trend will continue indefinitely, into the future. Some of the best-known failures of prediction have resulted from applying this assumption too liberally.

  At the turn of the twentieth century, for instance, many city planners were concerned about the increasing use of horse-drawn carriages and their main pollutant: horse manure. Knee-deep in the issue in 1894, one writer in the Times of London predicted that by the 1940s, every street in London would be buried under nine feet of the stuff.45 About ten years later, fortunately, Henry Ford began producing his prototypes of the Model T and the crisis was averted.

  Extrapolation was also the culprit in several failed predictions related to population growth. Perhaps the first serious effort to predict the growth of the global population was made by an English economist, Sir William Petty, in 1682.46 Population statistics were not widely available at the time and Petty did a lot of rather innovative work to infer, quite correctly, that the growth rate in the human population was fairly slow in the seventeenth century. Incorrectly, however, he assumed that things would always remain that way, and his predictions implied that global population might be just over 700 million people in 2012.47 A century later, the Industrial Revolution began, and the population began to increase at a much faster rate. The actual world population, which surpassed seven billion in late 2011,48 is about ten times higher than Petty’s prediction.
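
  A rough illustration of how such an extrapolation compounds is sketched below. The starting population and the growth rate are illustrative stand-in values, not Petty's own figures, chosen only to show how a slow-growth assumption lands near 700 million by 2012.

```python
# Rough illustration of a Petty-style extrapolation. The starting population
# (~600 million in 1682) and the 0.05 percent annual growth rate are
# illustrative assumptions, not figures given in the text.
start_year, end_year = 1682, 2012
start_population = 600e6
annual_growth = 0.0005   # 0.05 percent per year, i.e. "fairly slow" growth

years = end_year - start_year
projection = start_population * (1 + annual_growth) ** years
print(f"Projected {end_year} population: {projection / 1e6:.0f} million")
# -> roughly 700 million, versus the ~7 billion people actually alive in 2012
```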

  The controversial 1968 book The Population Bomb, by the Stanford biologist Paul R. Ehrlich and his wife, Anne Ehrlich, made the opposite mistake, quite wrongly predicting that hundreds of millions of people would die from starvation in the 1970s.49 The reasons for this failure of prediction were myriad, including the Ehrlichs’ tendency to focus on doomsday scenarios to draw attention to their cause. But one major problem was that they had assumed the record-high fertility rates in the free-love era of the 1960s would continue on indefinitely, meaning that there would be more and more hungry mouths to feed.* “When I wrote The Population Bomb I thought our interests in sex and children were so strong that it would be hard to change family size,” Paul Ehrlich told me in a brief interview. “We found out that if you treat women decently and give them job opportunities, the fertility rate goes down.” Other scholars who had not made such simplistic assumptions realized this at the time; population projections issued by the United Nations in the 1960s and 1970s generally did a good job of predicting what the population would look like thirty or forty years later.50

  Extrapolation tends to cause its greatest problems in fields—including population growth and disease—where the quantity that you want to study is growing exponentially. In the early 1980s, the cumulative number of AIDS cases diagnosed in the United States was increasing in this exponential fashion:51 there were 99 cases through 1980, then 434 through 1981, and eventually 11,148 through 1984. You can put these figures into a chart, as some scholars did at the time,52 and seek to extrapolate the pattern forward. Doing so would have yielded a prediction that the number of AIDS cases diagnosed in the United States would rise to about 270,000 by 1995. This would not have been a very good prediction; unfortunately it was too low. The actual number of AIDS cases was about 560,000 by 1995, more than twice as high.
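
  One way to see how such an extrapolation behaves is to fit the early counts with a simple power-law curve and project it forward. This is only a sketch, not necessarily the exact model the scholars cited above used, but it lands in the same neighborhood as the figure quoted here.

```python
# A minimal sketch (not the exact model from note 52) of extrapolating early
# AIDS case counts by fitting a power law on a log-log scale. The data points
# are the cumulative diagnoses quoted in the text.
import numpy as np

years = np.array([1980, 1981, 1984])
cases = np.array([99, 434, 11148])

# Fit log(cases) = k*log(t) + b, with t measured as years since 1979
# (the choice of origin is an assumption of this sketch).
t = years - 1979
k, b = np.polyfit(np.log(t), np.log(cases), 1)

# Extrapolate the fitted trend out to 1995.
t_1995 = 1995 - 1979
prediction = np.exp(b) * t_1995 ** k
print(f"Fitted exponent ~ {k:.1f}, extrapolated 1995 total ~ {prediction:,.0f}")
# Lands near 300,000, in the same ballpark as the ~270,000 figure in the text,
# and far short of the ~560,000 cases actually diagnosed by 1995.
```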

  Perhaps the bigger problem from a statistical standpoint, however, is that precise predictions aren’t really possible to begin with when you are extrapolating on an exponential scale. A properly applied version53 of this method, which accounted for its margin of error, would have implied that there could be as few as 35,000 AIDS cases through 1995 or as many as 1.8 million. That’s much too broad a range to provide for much in the way of predictive insight.

  Why Flu Predictions Failed in 2009

  Although the statistical methods that epidemiologists use when a flu outbreak is first detected are not quite as simple as the preceding examples, they still face the challenge of making extrapolations from a small number of potentially dubious data points.

  One of the most useful quantities for predicting disease spread is a variable called the basic reproduction number. Usually designated as R0, it measures the number of uninfected people that can expect to catch a disease from a single infected individual. An R0 of 4, for instance, means that—in the absence of vaccines or other preventative measures—someone who gets a disease can be expected to pass it along to four other individuals before recovering (or dying) from it.

  In theory, any disease with an R0 greater than 1 will eventually spread to the entire population in the absence of vaccines or quarantines. But the numbers are sometimes much higher than this: R0 was about 3 for the Spanish flu, 6 for smallpox, and 15 for measles. It is perhaps well into the triple digits for malaria, one of the deadliest diseases in the history of civilization, which still accounts for about 10 percent of all deaths in some parts of the world today.54
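
  The logic of that threshold is easy to see with a minimal sketch (an illustration, not a model taken from the text): each generation of cases produces, on average, R0 times as many new cases, so anything above 1 compounds while anything below 1 fizzles out.

```python
# Illustration of the R0 = 1 threshold: each generation of cases produces, on
# average, R0 times as many new cases as the last. This toy calculation ignores
# the depletion of susceptible people, which eventually slows a real epidemic.

def generations(r0, n_generations=10):
    """Expected new cases in each generation, starting from one case."""
    cases = 1.0
    history = []
    for _ in range(n_generations):
        history.append(round(cases, 1))
        cases *= r0
    return history

print("R0 = 4.0:", generations(4.0))   # 1, 4, 16, 64, ... : the epidemic takes off
print("R0 = 0.8:", generations(0.8))   # 1, 0.8, 0.64, ... : the outbreak dies out
```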

  FIGURE 7-3: MEDIAN ESTIMATES OF R0 FOR VARIOUS DISEASES55

  Malaria: 150
  Measles: 15
  Smallpox: 6
  HIV/AIDS: 3.5
  SARS: 3.5
  H1N1 (1918): 3
  Ebola (1995): 1.8
  H1N1 (2009): 1.5
  Seasonal flu: 1.3

  The problem is that reliable estimates of R0 can usually not be formulated until well after a disease has swept through a community and there has been sufficient time to scrutinize the statistics. So epidemiologists are forced to make extrapolations about it from a few early data points. The other key statistical measure of a disease, the fatality rate, can similarly be difficult to measure accurately in the early going. It is a catch-22; a disease cannot be predicted very accurately without this information, but reliable estimates of these quantities are usually not available until the disease has begun to run its course.
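
  To make the difficulty concrete, the sketch below shows one crude, purely illustrative way to back an estimate of R0 out of early case counts, applied to the 2009 U.S. figures quoted earlier in the chapter. The three-day serial interval and the approximation R0 ≈ 1 + rT are assumptions of the sketch, not methods described in the text, and the result overshoots in part because early confirmed-case counts reflect expanded testing as much as actual transmission.

```python
# Illustrative only: a crude early estimate of R0 from exponential growth in
# confirmed cases, using the 2009 U.S. figures quoted earlier in the chapter
# (20 cases on April 26, 2,618 cases fifteen days later). The three-day serial
# interval is an assumption, and R0 ~ 1 + r*T is a rough approximation that
# holds only under simplifying assumptions about generation times.
import math

cases_start, cases_end, days = 20, 2618, 15
growth_rate = math.log(cases_end / cases_start) / days   # per-day exponential rate
serial_interval = 3.0                                     # assumed days between generations

r0_estimate = 1 + growth_rate * serial_interval
print(f"Growth rate r = {growth_rate:.2f}/day, naive R0 estimate = {r0_estimate:.1f}")
# -> roughly 2, well above the ~1.5 that later, better-scrutinized data implied;
# much of the apparent early growth was expanded testing, not transmission.
```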

  Instead, the data when an infectious disease first strikes is often misreported. For instance, the figures that I gave you for early AIDS diagnoses in the United States were those that were available only years after the fact. Even these updated statistics did a rather poor job at prediction. However, if you were relying on the data that was actually available to scientists at the time,56 you would have done even worse. This is because AIDS, in its early years, was poorly understood and was highly stigmatized, both among patients and sometimes also among doctors.57 Many strange syndromes with AIDS-like symptoms went undiagnosed or misdiagnosed—or the opportunistic infections that AIDS can cause were mistaken for the principal cause of death. Only years later when doctors began to reopen old case histories did they come close to developing good estimates of the prevalence of AIDS in its early years.

 
