Whole Earth Discipline: An Ecopragmatist Manifesto

by Stewart Brand

  As Dyson noted, the precautionary principle, as currently applied, is deliberately one-sided, a rejection of what is called risk balancing. The convener of the Wingspread gathering, Carolyn Raffensperger, is widely quoted as saying, “Risk assessment embodies the idea that we can measure and manage or control risk and harm—and we can decide that some risk is acceptable. The Precautionary Principle is a very different idea that says that as an ethical matter, we are going to prevent all the harm we can.” Net-benefit analysis is ruled out.

  One consequence of the precautionary principle is that, in practice, it can be self-canceling. It says to wait for the results of further research, but it declares that the research is too dangerous to do. Under the banner of the precautionary principle, activists burn the fields where GE research is going on and threaten the researchers. “All technology should be assumed guilty until proven innocent,” said Dave Brower, founder of Friends of the Earth. That is a formula for paralysis. (I can imagine Dave responding, “A little paralysis might do a world of good about now.”)

  • Hear now “The Fable of the Steak Knives,” as told by the founder of Wikipedia, Jimmy Wales. His software engineers were spending a lot of their time imagining problems that would occur on Wikipedia and then devising software solutions to head off the problems. He explained why that is the wrong approach:

You want to design a restaurant, and you think to yourself, “Well, in this restaurant we’re going to be serving steak. And since we’re going to be serving steak, we’re going to have steak knives, and since we’re going to have steak knives, people might stab each other. How do we solve this problem? We’re going to have to build cages around each table to make sure no one stabs each other.”

  This makes for a bad society. . . . When you try to prevent people from doing bad things, the very obvious side effect is that you prevent them from doing good things.

  The astronomical success of Wikipedia comes from its principle of not trying to solve imaginary problems but instead putting all of the community’s effort into close attention to what actually goes on, noting genuine problems as they emerge, and then solving them as locally as possible with speed and efficiency. The whole system is success driven rather than problem driven.

  Expected benefits from any act are finite and known: “Golden rice will prevent blindness in children.” Imagined problems are infinite and unknown: “Golden rice might cause poor people to stop eating green vegetables; it might lead to excess vitamin A consumption; it might be a Trojan horse for corporate takeover; it might cause who knows what problems!” The apparent imbalance is treated as a contest: small and unlikely good versus large and certain harm. In this formulation, no good surprises are possible, all bad surprises are probable, and intended consequences are never what actually happen. In reality, intended consequences are what usually happen, surprises are balanced between good and bad, and they’re easy to recognize and to expand on or correct, as needed.

  If cellphones had been subject to the precautionary principle, the arguments against them would have included: They’ll microwave your brain; they’ll exacerbate the Digital Divide; they’ll lead to the corporate takeover of all communications; they’ll homogenize society—prove they won’t! In reality none of those things occurred—though of course other problems did, such as incompatible standards and new forms of discourtesy. The main outcome was enormous, rapid success, with a vast empowering of individuals everywhere, especially the poor.

  The late Mary Douglas, anthropologist and lifelong student of risk, noted that sectarian groups such as some environmental organizations separate themselves from the world with infinite demands. For them, she wrote, “there can never be sufficient holiness or safety.” As a Brit, she also wondered, “What are Americans afraid of? Nothing much, really, except the food they eat, the water they drink, the air they breathe, the land they live on, and the energy they use.” The economist Paul Romer adds a global perspective: “Even if one society loses its nerve, there’ll be new entrants who can take up the torch and push ahead.”

  • The precautionary principle has been so widely recognized as a barrier to progress that, according to England’s Prospect magazine, in 2006, the House of Commons select committee on science and technology recommended that the term “should not be used and should ‘cease to be included in policy guidance.’ ” Various attempts have been made to draft a substitute—the proactionary principle (Max More and Kevin Kelly), the precautionary approach (Nuffield Council on Bioethics), the reversibility principle (Jamais Cascio), and the anti-catastrophe principle—that one from an excellent book, Laws of Fear: Beyond the Precautionary Principle (2005), by legal scholar Cass Sunstein, who now heads Obama’s Office of Information and Regulatory Affairs.

  I would not replace the precautionary principle. Its name and founding idea are too good to lose. But I would shift its bias away from inaction and toward action with a supplement—the vigilance principle, whose entire text is: “Eternal vigilance is the price of liberty.” The precautionary principle by itself seeks strictly to stop or slow new things, even in the face of urgent need. Precaution plus vigilance would seek to move quickly on new things. Viewed always in the context of potential opportunity, a new device or technique would be subjected to multidisciplinary scrutiny and then given one of three probationary categories for ongoing oversight: 1) provisionally unsafe until proven safe; 2) provisionally safe until proven unsafe; 3) provisionally beneficial until proven harmful. As the evaluation grows more precise over time, public policy adjusts to match it.

  When GE food crops first went public in the early 1990s, precautionary vigilance would have monitored the brave early adopters, looking for signs of harm and signs of benefit, and especially for surprises, good and bad. (A surprising benefit from Bt corn, for example, is that it reduces mycotoxin poisoning in tortilla cornmeal because less insect damage means less fungal growth.) By the end of the 1990s, vigilance of a decade’s cumulative experience would have declared GE food apparently safe so far and apparently beneficial so far. Europeans would gingerly have begun buying and planting GE food crops, and anti-GE activists, while remaining suspicious, would have stopped burning GE research fields and labs.

  The emphasis of the vigilance principle is on liberty, the freedom to try things. The correction for emergent problems is in ceaseless, fine-grained monitoring, which largely can be automated these days via the Internet, by collecting data from distributed high-tech sensors and vigilant cellphone-armed volunteers. (Wikipedia, for example, is an orgy of vigilance: A cluster of diligent amateur watchers and correcters actively surveil each entry, with a response time of seconds.) Managing the precautionary process in this mode consists of identifying things to watch for as a new technology unfolds. (Does golden rice actually help with malnutrition? Are there really any instances of hypervitaminosis, too much vitamin A? Can they be headed off, given how they occur?) Intelligent precaution also would charge specific agencies to keep an eye out for unexpected correlations, such as the increases in lung cancer that developed around concentrations of asbestos—they were detected in the 1930s but not acted on until the 1980s. Tens of thousands suffered and died needlessly during that lag.

  The mantra for dealing with pandemics is “early detection, rapid response.” The old method of waiting for news of dead nurses in remote hospitals has been replaced by active monitoring of online chatter, active monitoring of the condition of animals sold in developing-world food markets, a network of “sentinel physicians,” automated bioassays, and more to come. That’s the way to organize vigilance.

  One also has to credit the pioneers of excess. In the 1920s, radiation was lauded for its healing properties until a millionaire golfer named Eben Byers died from drinking a thousand bottles of a popular radium potion called Radithor. Some of my contemporaries in the 1960s took pains to prove that the danger from excessive LSD use was not brain damage or chromosome damage, as had been predicted, but personality damage. Amateurs can be counted on to discover exactly how much video gaming leads to suicide, how many carrots lead to orange-eyed delirium, how many grizzly bears you have to hug before one eats you. I have no doubt that amateurs, not corporations or governments, will be the ones demonstrating how much GE is too much, and good luck heading them off with that line about proponents bearing the burden of proof. Nor will legions of corporate lawyers building forts around gene patents have any better luck. Biotech wants to be free.

  The fact is that the fastest-moving countries now with GE crops are the developing nations that have the scientific competence and confidence to stand up to excessively cautious environmentalists—China, Brazil, India, South Africa, Argentina, the Philippines. As they go, so goes the world. Foundations such as Gates, Rockefeller, and McKnight are helping to spread the technology—in locally nuanced form—to those who need it most in the poorest nations, mostly in Africa and south Asia. Bitching and moaning, Europe will drag along after.

  What about God? What about the retribution we invite by playing God with genetic engineering?

  A version of that question was put to me by Kathy Kohm, editor of the remarkable magazine Conservation. “The history of engineering is marked by a trail of unintended consequences,” she said. “We don’t know what we don’t know. How do we walk the line between hubris and humility?” I replied:

A lot swings on what is considered news. Ever since ancient Greek drama, hubris and unintended consequences have made great theater.

  Intended consequences, although more common, are not news and not theater. GMOs have been tested extensively, but we never hear of the results unless something suspicious turns up. . . . One headline you will never see is “GM Crop Again Shown OK.”

  Technology emerges from science. Then we do science on the technology. Then we know what we know. The whole process works on a necessary blend of both hubris and humility.

  I admire Prince Charles, especially for his humanizing influence on the design of cities and buildings. With his usual forthrightness, he has made a clear statement about the impiety of GE: “I happen to believe that this kind of genetic modification takes mankind into realms that belong to God, and to God alone.” Pope Benedict in 2006 vilified scientists who “modify the very grammar of life as planned and willed by God. . . . To take God’s place, without being God, is insane arrogance, a risky and dangerous venture.”

  An unlikely ally of the prince and the pope is the American leftist Jeremy Rifkin, who believes that GE violates “the boundaries between the sacred and the profane” and must be banned wholesale from the world. (Among scientists who have read his work, Rifkin is regarded as America’s leading nitwit. The evolutionist Stephen Jay Gould, a considerable lefty himself, described Rifkin’s biotech book Algeny as “a cleverly constructed tract of anti-intellectual propaganda masquerading as scholarship. Among books promoted as serious intellectual statements by important thinkers, I don’t think I have ever read a shoddier work.”)

  Then you have Bill McKibben, who listened to working climatologists for his landmark book, The End of Nature, but borrowed his views on genetic engineering from Rifkin. GE, he wrote, “represents the second end of nature. . . . What will it mean to come across a rabbit in the woods once genetically engineered ‘rabbits’ are widespread? Why would we have any more reverence or affection for such a rabbit than we would for a Coke bottle?”

  There is a common sentiment among environmentalists that everything made by nature is good and everything made by man is bad. “Four legs good, two legs bad.” Nature is seen as whole and therefore holy. It is inscrutable and divine, whereas we are crass; and yet it is also fragile, vulnerable to our crass depredations.

  What “nature” are we talking about, exactly? You can’t do anything against nature, if your idea of nature includes physics, chemistry, and mechanics. Abominations can be imagined but cannot be performed. Anything you can do you can only do because nature allows it. Nuclear fission is so natural it occurs geologically. Horizontal gene flow is so natural it is the norm among microbes. Apparently what people mean when they say “against Nature” is “against my understanding of Darwinian inheritance and traditional breedline agriculture.” Or maybe it’s not so cosmic, and what people mean by “against Nature” is “something I’m not used to yet.”

  In looking for guidance on ethical issues, notions of abomination don’t help much. What does help is a sense of how harms and benefits are distributed. In 1999 and again in 2003, the question of genetic engineering was examined in exhaustive detail by the prestigious Nuffield Council on Bioethics, in Britain. Their conclusion: “There is a moral imperative for making GM crops readily and economically available to people in developing countries who want them.”

  • Most environmentalists don’t seem aware of what’s going on in the biosciences these days. They don’t realize that their battle against GE crops is a rearguard action in a sleepy backwater of biotech. So far we’ve been touring Agroecology 101 and Genetics 101—textbook stuff. Now we jump to the leading edge of biology, the new discoveries and techniques that aren’t in the textbooks yet. This is where alert environmentalists should hang out, looking for powerful new tools to seize and deploy for Green agendas. And, for those so inclined, whole new dimensions of things to worry about are on offer. GE crops will be left in the dustbin of outdated frets, like an old food fad: “Remember when we thought Bt corn was the end of the world?”

  Live-linked footnotes for this chapter, along with updates, additions, and illustrations, may be found online at www.sbnotes.com.

  • 6 •

  Gene Dreams

  Microbes run the world. It’s that simple.

  —The New Science of Metagenomics

  It was microbiologist Lynn Margulis, back in the 1970s, who first instructed me on the inventiveness of microbes. Along with codeveloping the Gaia hypothesis with Jim Lovelock, she revolutionized biology with her endosymbiotic theory, which posits that cells with a nucleus (which make up eukaryotes, including us) arose from the ingenious merging of nonnucleated bacteria (known as prokaryotes). The mitochondria in our cells are alien parasites that wound up being our primary energy factories. The light-harvesting chloroplasts in plant cells evolved from endosymbiotic cyanobacteria. Without them there would be no photosynthesis, no sugar synthesis, and no human beings. Endosymbiotic merging was the inspiration for Lewis Thomas’s felicitous book title, The Lives of a Cell (1974).

  The realization is humbling: All complex life forms were invented by creatures we think of as cooties, germs. Besides being the most diverse of all creatures, bacteria, Margulis writes, “are the oldest, having had the most time to evolve to take full advantage of Earth’s varied habitats, including the living environments of their fellow beings. By trading genes and acquiring new heritable traits, bacteria expand their genetic capacities—in minutes, or at most hours.” The fastest, then, as well as the oldest and most diverse of life forms, bacteria also are the only ones that can claim immortality. They don’t age; they split and carry on forever.

  No wonder eminent biologist Edward O. Wilson, who himself has revolutionized science half a dozen times, declared in his memoir, Naturalist (1994), “If I could do it all over again, and relive my vision in the twenty-first century, I would be a microbial ecologist.”

  One of the most thrilling books I’ve read recently you can download for free from the National Academy of Sciences. Here are some excerpts from The New Science of Metagenomics: Revealing the Secrets of Our Microbial Planet (2007):

Every process in the biosphere is touched by the seemingly endless capacity of microbes to transform the world around them. It is microbes that convert the key elements of life—carbon, nitrogen, oxygen, and sulfur—into forms accessible to all other living things. For example, although plants tend to get credit for photosynthesis, it is in fact microbes that contribute most of the photosynthetic capacity to the planet. All plants and animals have closely associated microbial communities that make necessary nutrients, metals, and vitamins available to their hosts. The billions of benign microbes that live in the human gut help us to digest food, break down toxins, and fight off disease-causing microbes. . . .

  The combined activities of microbial communities affect the chemistry of the entire ocean and maintain the habitability of the entire planet. . . .

  Microbes can “eat” rocks, “breathe” metals, transform the inorganic to the organic, and crack the toughest of chemical compounds. They achieve these amazing feats in a sort of microbial “bucket brigade”—each microbe performs its own task, and its end product becomes the starting fuel for its neighbor. . . .

  The ultimate goal, perhaps in sight by 2027, would be a metacommunity model that seeks to explain and predict (and retrodict) the behavior of the biosphere as though it were a single superorganism. Such a “genomics of Gaia” would be the ultimate implementation of systems biology.

  • The transformative technique that makes all of this new science suddenly possible is the shotgun sequencing of the aggregate genomes of large samples of microbes, hence metagenomics. Microbes were long the “dark matter” of biology because, except for a few, they couldn’t be cultured in the lab. Now, with what is called functional metagenomics, you don’t have to bother with the organisms; you screen millions of DNA fragments from countless microbes, looking for new proteins that the fragments generate, and that tells you what the genes are used for.
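  By way of illustration, here is a toy sketch in Python of the sequence-scanning side of that screening. It is not the code of any real pipeline, just the core idea: scan anonymous shotgun reads in all six reading frames and keep long stretches free of stop codons as candidate protein-coding fragments. The stand-in reads, the function names, and the length threshold are all illustrative assumptions.

```python
# Toy illustration of sequence-based screening in shotgun metagenomics.
# Scans each anonymous DNA read in all six reading frames and reports
# long runs free of stop codons, a crude proxy for the protein-coding
# fragments a functional screen hunts for. Names, thresholds, and the
# stand-in reads are assumptions, not any real pipeline's conventions.

STOP_CODONS = {"TAA", "TAG", "TGA"}

def revcomp(seq: str) -> str:
    """Reverse complement of an uppercase A/C/G/T string."""
    comp = {"A": "T", "T": "A", "G": "C", "C": "G"}
    return "".join(comp[base] for base in reversed(seq))

def candidate_orfs(seq: str, min_codons: int = 60):
    """Yield (strand, frame, start, end) for stop-free runs of at
    least min_codons codons, over all six reading frames."""
    for strand, s in (("+", seq), ("-", revcomp(seq))):
        for frame in range(3):
            run_start = frame
            for i in range(frame, len(s) - 2, 3):
                if s[i:i + 3] in STOP_CODONS:
                    if (i - run_start) // 3 >= min_codons:
                        yield (strand, frame, run_start, i)
                    run_start = i + 3
            # A coding region may run off the end of a short read.
            if (len(s) - run_start) // 3 >= min_codons:
                yield (strand, frame, run_start, len(s))

# Stand-ins for millions of real reads from a mixed microbial sample:
# one long read with open coding signal, one short read too brief to pass.
reads = ["ATGAAACCCGGGTTT" * 20, "ATGTGA" * 3]
flagged = [r for r in reads if any(candidate_orfs(r, min_codons=30))]
print(f"{len(flagged)} of {len(reads)} reads carry candidate coding regions")
```

Real screens, of course, run at vastly larger scale and pair this kind of crude gene spotting with expression assays and database comparison to work out what the candidate genes actually do.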

 
