
End Times: A Brief Guide to the End of the World


by Bryan Walsh


  The Clade X exercise, however, shows that the threat from biology is evolving like a virus. Miraculous new biotechnology tools have been developed over the past few years, including in synthetic biology—the broad name for the science of rewriting the genes of living things—and the gene-editing technique CRISPR, which enables biologists to find and replace bits of DNA in a cell almost as easily as they might cut and paste letters in a Microsoft Word document. These tools promise life-changing medical advances, but one day soon they might also allow an ambitious terror group—or even a single alienated microbiologist—to tweak the genes in existing pathogens and create something worse than nature ever could. Something like Clade X. “Clade X was a fictional pathogen, but it is based on scientific principles,” Dr. Tom Inglesby, the director of the Center for Health Security, told me after the exercise. “These kinds of things are absolutely plausible.”

  I’ve lived through disease outbreaks, and in the previous chapter I showed just how unprepared we are to face a widespread pandemic of flu or another new pathogen like SARS. But a deliberate outbreak caused by an engineered pathogen would be far worse. We would face the same agonizing decisions that must be made during a natural pandemic: whether to ban travel from affected regions, how to keep overburdened hospitals working as the rolls of the sick grew, how to accelerate the development and distribution of vaccines and drugs. To that dire list add the terror that would spread once it became clear that the death and disease in our midst was not the random work of nature, but a deliberate act of malice. We’re scared of disease outbreaks and we’re scared of terrorism—put them together and you have a formula for chaos.

  As deadly and as disruptive as a conventional bioterror incident would be, an attack that employed existing pathogens could only spread so far, limited by the same laws of evolution that circumscribe natural disease outbreaks. But a virus engineered in a lab to break those laws could spread faster and kill quicker than anything that would emerge out of nature. It could be designed to evade medical countermeasures, frustrating doctors’ attempts to diagnose cases and treat patients. If health officials managed to stamp out the outbreak, it could be reintroduced into the public again and again. It could, with the right mix of genetic traits, even wipe us off the planet, making engineered viruses a genuine existential threat.

  And such an attack may not even be that difficult to carry out. Thanks to advances in biotechnology that have rapidly reduced the skill level and funding needed to perform gene editing and engineering, what might have once required the work of an army of virologists employed by a nation-state could soon be done by a handful of talented and trained individuals. Or maybe just one.

  When Melinda Gates was asked at the South by Southwest conference in 2018 to identify what she saw as the biggest threat facing the world over the next decade, she didn’t hesitate: “A bioterrorism event. Definitely.”2

  She’s far from alone. In 2016, President Obama’s director of national intelligence James Clapper identified CRISPR as a “weapon of mass destruction,” a category usually reserved for known nightmares like nuclear bombs and chemical weapons. A 2018 report from the National Academies of Sciences concluded that biotechnology had rewritten what was possible in creating new weapons, while also increasing the range of people capable of carrying out such attacks.3 That’s a fatal combination, one that plausibly threatens the future of humanity like nothing else.

  “The existential threat that would be most available for someone, if they felt like doing something, would be a bioweapon,” said Eric Klien, founder of the Lifeboat Foundation, a nonprofit dedicated to helping humanity survive existential risks. “It would not be hard for a small group of people, maybe even just two or three people, to kill a hundred million people using a bioweapon. There are probably a million people currently on the planet who would have the technical knowledge to pull this off. It’s actually surprising that it hasn’t happened yet.”

  Our best hope against the threat of bioengineered pathogens may be the same tools that can lead to their creation. Cheap genetic sequencing is enabling scientists to diagnose diseases of unknown origin in a matter of days, shrinking the vulnerable window of time when a new outbreak can spread unnoticed. Genetic engineering could speed the laborious process of creating and manufacturing vaccines, so that even an engineered supervirus could quickly be matched by an effective countermeasure. In their wildest dreams, some scientists believe that we might even be able to genetically design human beings who would be biologically impervious to viral infections, taking the ancient threat of disease—natural or man-made—off the table.

  That’s what makes biotechnology so scary and so exhilarating. It is a dual-use technology, capable of being wielded for both benign and malevolent ends. Just as we saw with the drive to build a nuclear bomb, the discoveries being made by geneticists who only want to help the world could be used to destroy it. The question we face is this: is it possible to harness the gifts of biotechnology without opening the door to a real-life Clade X?

  As long as wars have been fought, armies have sought to turn disease into a weapon. In one of the first recorded examples of biowarfare, the Athenian leader Solon poisoned the water supply of the city of Kirrha with a noxious plant in 600 BC. Alexander the Great is believed to have catapulted the bodies of dead men into cities under siege, a tactic later adopted by warriors in the Middle Ages. By one account the Black Death may have been sparked in Europe when invading Mongols hurled the corpses of plague victims over the walls of the Black Sea port of Caffa. During the French and Indian War in colonial America, the British general Sir Jeffery Amherst—whose name was given to the town in Massachusetts and the college later founded there—urged one of his commanders to spread smallpox among the indigenous tribes besieging his fort.4

  In each of these examples, military leaders, without knowing anything about the existence of germs, used disease much as modern-day terrorists would: to kill their enemies and cripple the morale of survivors. It was at best a crude weapon and barely controllable, one that risked backfiring and infecting the attacker. But as medicine advanced and doctors began to understand how disease spread, it became clear that the same knowledge that could be used to fight infectious disease could also temper it into a more perfect weapon. Medicine itself became a dual-use dilemma, and remains one today.

  Every major combatant in World War II—including the United States—ran some type of biological weapons program. The Japanese military had the most extensive one, and they made terrifying use of it in China, repeatedly targeting civilians with bombs filled with plague-infested fleas. Much of the work was carried out by Unit 731, officially under the army’s Epidemic Prevention and Water Purification Department. (It would not be the last time an offensive bioweapons program masqueraded as a benign medical project.) Unit 731 carried out horrific tests on human subjects, including vivisecting, without anesthesia, living people who had been deliberately exposed to diseases, all in a twisted effort to perfect biological weapons.5 After the war the sadistic commander of Unit 731, General Shiro Ishii, traded his research data to the American military in exchange for clemency, and was allowed to live peacefully until his death from cancer in 1959.

  It was during the Cold War that biological weapons research reached its peak, however. The United States carried out years of research at Fort Detrick in Maryland. Some of that work was defensive, but much of it involved weapons experimentation, including in secret field trials carried out in major American cities. In one 1950 experiment in San Francisco, a U.S. Navy ship sprayed a cloud of microbes to test how a biological weapons attack might spread through the city. The germs were supposed to be noninfectious, but they later turned out to have caused urinary tract infections in several unlucky San Franciscans.6

  Yet the U.S. program paled next to the work done by the Soviet Union, which built the largest biological weapons factory in history at Vozrozhdeniya, an island in the inland Aral Sea. By the end of the Cold War more than sixty thousand people in the Soviet Union were involved in the research, testing, and manufacturing of biological weapons. The Soviets produced thousands of tons of deadly pathogens, including anthrax, plague, and smallpox, easily enough to end all human life on Earth.7

  The United States unilaterally renounced its offensive biological warfare program in 1969, a few years before signing on to the Biological Weapons Convention (BWC), an international agreement that officially banned the development, stockpiling, and use of germ weapons. The Soviet Union signed the BWC but secretly continued work on biological weapons, convinced the United States was doing so as well. The Soviets, though, soon discovered that germs made for disobedient soldiers. In 1979 the accidental release of anthrax spores at a military complex in Yekaterinburg reportedly killed one hundred workers and townspeople. It was one of many recorded calamities in Soviet bioweapons research. Another may well have been a minor flu pandemic that broke out in 1977, which some researchers have since traced to the accidental release of an old flu virus from a Soviet military lab.8 It wasn’t until the 1990s that a now-independent Russia finally admitted the existence of the Soviet Union’s decades-long offensive bioweapons research.

  To this day, the BWC represents the only time that the world has agreed to ban an entire class of weaponry. As President Richard Nixon put it when announcing in 1969 that the United States would abandon offensive bioweapons work: “Mankind already carries in its own hands too many of the seeds of its own destruction.”9 The negotiation of the pact was made easier because no country ever figured out how to wield bioweapons reliably. Germ weapons are made out of life, after all, and life is fussy. While a bullet or a missile will go where it is fired, once in the field germs will infect whomever they can. By one count more than a thousand Japanese soldiers fell victim to their country’s own germ weapons during World War II. That was why biological weapons, despite being stockpiled by every major power, were so rarely employed on the field of battle. One paper from Oxford’s Future of Humanity Institute counts just eighteen uses between 1915 and 2000, nearly all of them during World Wars I and II.10

  Another reason that states largely abandoned their programs is that biological weapons are inherently destabilizing. Unlike nuclear arms—which require the kind of rare expertise and expensive materials that effectively limit the weapons to powerful governments—it is futile to physically restrict access to most dangerous pathogens and biotechnologies. There are more microbiologists than nuclear engineers in the United States today;11 add in the much greater number of general biologists and doctors, and you have far more people in America who have some experience with the mechanics of disease transmission than with the study of nuclear chain reactions.

  More important, the line between what might be considered legitimate biomedical research and work that could turn germs into weapons is a blurry one. A virologist employed in a biological weapons program would mostly use the same tools and techniques as their counterpart in a vaccine research program. That contradiction—the same skills that can heal the body can also be used to harm it—is at the heart of medicine, with its healing blade. It’s why new physicians pledge to “first do no harm.” It is not the tools that make the difference between a weapon and a salve. It is the intention.

  But intentions are difficult to police, which is why even with an international treaty in force banning germ weapons, even though hardened soldiers are horrified by the thought of plague bombs on the battlefield, the risk of biowarfare will never disappear. It is inherent in the medical arts, a dark twin. Biological research “is picked up,” Tom Inglesby told me. “It’s published. It moves quickly around the world.” The only way to fully ensure that germs could never be used to hurt would be to ban medical research altogether, a policy that would surely inflict far more harm than it could possibly avert. As the authors of the landmark Fink Report put it in their 2004 paper for the National Research Council, “The contrast between what is a legitimate, perhaps compelling subject for research and what might justifiably be prohibited or tightly controlled cannot be made a priori, stated in categorical terms, nor confirmed by remote observation.”12

  This is the dilemma of dual-use technologies, and it is key to understanding the existential risk of biotechnology, and almost every other man-made existential risk as well, including artificial intelligence. In both fields it has become increasingly difficult to draw a distinction between research that benefits humankind and work that could lead to our extinction. It was initially true of nuclear weapons as well—few of the physicists involved in fundamental nuclear science research in the 1920s and ’30s could have foreseen that the last stop would be Hiroshima (though some, like Leo Szilard, were able to do just that, and did their best to stop the bomb).13 It’s true in a sense of climate change, where the same energy sources that underwrote the Industrial Revolution and the great material boom of the twentieth century are turning our atmosphere into an oven. The dual-use dilemma is really the dilemma of science: research, once begun, can lead to any destination, including many that its authors could never have imagined. Science is a method, not a guarantee. It is not ironic that it was the inventor of dynamite, Alfred Nobel, who established the greatest prizes in international science. It’s fitting.

  To their credit, most of those at the cutting edge of biotechnology are at least conscious of this dilemma—far more so than the physicists who paved the way for a nuclear bomb. Emily Leproust is the CEO of Twist Bioscience, a start-up that manufactures synthetic DNA. Her customers—a mix of academic labs and biotech companies—order custom strands of DNA, like for a specific strain of yeast. Twist then synthesizes the genes, packs them up, and mails them out. If synthetic biology is a new gold rush, then Twist is the company selling the picks and shovels.

  I met Leproust at Twist’s offices in San Francisco’s new biotech hub in Mission Bay, just across the street from where the Golden State Warriors’ new arena is rising from the ground. She is dizzyingly tall, with a black bob cut. Born in Tours, France, Leproust received her PhD in organic chemistry at the University of Houston, where she also learned most of her English—“though I didn’t pick up the accent,” she added in her deeply pitched voice. Leproust is an energetic evangelist for the power of synthetic biology, which she believes will transform medicine, energy, and material science. A few months after we met, in the fall of 2018, her company pulled off a successful initial public offering that added to the hundreds of millions in financing Twist has raised since it was founded in 2013. “We believe we can improve the human condition and the sustainability of the entire world,” she told me.

  But Leproust also knows that there are significant risks posed by tools that allow scientists to rewrite the operating code of life. “Every invention is a coin, with a positive side and a negative side,” she said. “With dynamite, you can build tunnels but you can also kill people. With iPhones, I can FaceTime with my mom, but some people use them to detonate bombs. With every invention there is a good use and a bad use.”

  Leproust is right—most inventions, if you add creativity and subtract morals, can be used for good or ill. But synthetic biology isn’t an invention, not in the way the telephone or the nuclear bomb is an invention. Better to think of it as a technological platform. What makes synthetic biology so revolutionary, and perhaps so dangerous, is not what it can do, but what it can make doable. And one of the most frightening possibilities of how synthetic biology might change what’s achievable involves a virus that haunts the nightmares of biosecurity experts: smallpox.

  No virus in nature makes for a better bioterror weapon than smallpox. Smallpox can kill as many as 30 percent of the people it infects. It can be transmitted by airborne droplet, and it is highly contagious—on average every smallpox victim will infect three to five unvaccinated people.14 And since the virus was eradicated from the wild decades ago, immunizations have largely been halted, which means that much of the world—and virtually all of the United States—would be vulnerable to infection. In the Center for Health Security “Dark Winter” tabletop exercise from 2001, the final death toll in the United States from a smallpox attack reached 1 million people.15

  Yet as biological weapons go, smallpox can be, and is, controlled. Eradicated from nature, the only two known samples of the virus are kept at highly secure government facilities. That makes the smallpox virus akin to a nuclear bomb in that it would have to be stolen by terrorists in order to be weaponized—and there are far more nuclear bombs in the world than there are viable samples of smallpox. That creates a double barrier to wielding smallpox as a weapon of terror: first a sample would need to be stolen, and then terrorists would need to figure out how to weaponize it.

  The comparison to nuclear terrorism is worth examining. It’s possible, of course, that terrorists could make their own nuclear bomb, eliminating the need to steal one. But they would have to find the right radioactive material and recruit people who knew how to build and use a bomb—and it is very hard to do both. That we have yet to experience the nightmare of a nuclear warhead going off in a major city may well have less to do with the success of security services than the fortunate fact that such plots are almost impossible to carry out.

  But what if that changed? What if building your own nuclear bomb became only as difficult as, say, programming a computer virus is today? That would fundamentally alter the rules of nuclear terrorism, so much so that it might only be a matter of time before a bomb exploded in an unlucky city. And while we’re fortunate that nuclear bombs haven’t become any easier to build or use, that is precisely what is happening in the field of synthetic biology.
