Microcosm
This new research tingles with controversy. A debate is raging over the risks posed by synthetic biology and other advances in biotechnology—the accidental release of dangerous new creatures, for example, or even intentional engineering of biological weapons. Thinking drugs could become thinking plagues. Synthetic biologists have also given a fresh spur to the debate over the morality of biotechnology in general. Today the world faces a huge, confusing surge of scientific research, with mice growing human neurons in their brains and deadly viruses being built from the ground up. In order to resolve these debates, we must think seriously about what it means to be alive and how biotechnology changes that meaning. And E. coli, the germ of our biotechnological age, has much to tell us. The face looks back, less a portrait than a mirror.
NEOLITHIC BIOTECH
Biotechnology was born many times, and each time it was born blind.
Humans began to manipulate other life-forms to make useful things, such as food and clothing, at least 10,000 years ago. In places such as Southeast Asia, Turkey, West Africa, and Mexico, people began to domesticate animals and plants. They probably did so unwittingly at first. Gathering plants, they picked some kinds over others, accidentally spreading the seeds on the ground. The wild ancestors of dogs that lingered near campfires might have fed on scraps and passed on their sociable genes to their pups. These species adapted to life with humans through natural selection. Once humans began to farm and raise livestock, natural selection gave way to artificial selection as they consciously chose the individuals with the traits they wanted to breed. Evolution accelerated as humans assembled a parade of grotesque creations, from flat-faced pugs to boulder-sized pumpkins.
The first Neolithic biotechnologists were manipulating microbes as well. They learned how to make beer and wine or, rather, how to allow yeast to make beer and wine. The job of humans was simply to create the best conditions in which the yeast could transform sugar to alcohol. Yeast also lifted bread with its puffs of carbon dioxide. Domesticated microbes evolved just as weedy teosinte evolved into corn and scrawny jungle fowls evolved into chickens. The yeast of winemakers became distinct from its wild cousins that still lived on tree bark.
With the invention of yogurt an entire ecosystem of bacteria evolved. Yogurt was first developed by nomadic herders in the Near East about five thousand years ago. They probably happened to notice one day that some milk had turned thick and tangy, and that it also proved slow to turn rancid. Plant-feeding bacteria had fallen into the milk and had altered its chemistry as they fed on it. The herders found that adding some of the yogurt to normal milk transformed it into yogurt as well. The bacteria in those cultures became trapped in a new ecosystem, and they adapted to it, evolving into better milk feeders and jettisoning many of the genes they no longer needed.
For thousands of years, humans continued to tinker with animals, plants, and microbes in this same semiconscious way. But as the microbial world unfolded beginning in the nineteenth century, scientists discovered new ways to manipulate nature. The first attempts were simple yet powerful. Louis Pasteur demonstrated that bacteria turned wine sour and contaminated milk. Heat killed off these harmful microbes, leaving children healthier and oenophiles happier.
As microbiologists discovered microbial alchemy, they searched for species that could carry out new kinds of useful chemistry. Chaim Weizmann, the first president of Israel, originally came to fame through his work in biotechnology. Living in Britain during World War I, he discovered bacteria that could manufacture acetone, an ingredient in explosives. Winston Churchill quickly took advantage of it by building a string of factories to breed the bacteria in order to make cheap acetone for the Royal Navy. The next generation of microbiologists began manipulating genes to make these microbes even more efficient. By bombarding the mold that makes penicillin with radiation, scientists created mutants with extra copies of penicillin genes, allowing the mold to make more of the drug.
As scientists discovered how to manipulate life, they wondered what sort of world they were creating. In a 1923 essay, the British biologist J.B.S. Haldane indulged in some science fiction. He pretended to be a historian of the future looking back on the 1940 creation of a new strain of algae that could pull nitrogen from the air. Strewn on crops, it fertilized them so effectively that it doubled the yield of wheat. But some of the algae escaped to the sea, where it turned the Atlantic to jelly. Eventually it triggered an explosion in the population of fish, enough to feed all humanity.
“It was of course as a result of its invasion by Porphyrococcus that the sea assumed the intense purple colour which seems so natural to us, but which so distressed the more aesthetically minded of our great grandparents who witnessed the change,” Haldane wrote. “It is certainly curious to us to read of the sea as having been green or blue.”
For the next fifty years, hope and dread continued to tug scientists in opposite directions. Some hoped that biotechnology would offer an alternative to a polluted nuclear-powered modern world, a utopia in which poor nations could find food and health without destroying their natural resources. Yet the notion of rewriting the recipe for life sometimes inspired disgust rather than wonder. It might well be possible to create an edible strain of yeast that could feed on oil. But who would want to eat it?
Aside from scientists, few people took these speculations very seriously. For all the progress biotechnology made up until 1970, there was no sign that life would change anytime soon. And then, quite suddenly, scientists realized they had the power to tinker with the genetic code. They could create a chimera with genes from different species. And they began their transformation of life with E. coli. Monod’s motto took on yet another meaning: if scientists could genetically engineer E. coli, there was every reason to believe they would someday engineer elephants.
CUT AND PASTE
Before 1970, E. coli had no role in biotechnology. It does not naturally produce penicillin or any other precious molecule. It does not turn barley into beer. Most scientists who studied E. coli before 1970 did so to understand how life works, not to learn how to make a profit. They learned a great deal about how E. coli uses genes to build proteins, how those genes are switched on and off, how its proteins help make its life possible. But in order to learn how E. coli lives, they had to build tools to manipulate it. And those tools would eventually be used to manipulate E. coli not simply to learn about life but to make fortunes.
The potential for genetic engineering took E. coli’s biologists almost by surprise. In the late 1960s, a Harvard biologist named Jonathan Beckwith was studying the lac operon, the set of genes that E. coli switches on to feed on lactose. To understand the nature of its switch, Beckwith decided to snip the operon out of E. coli’s chromosome. He took advantage of the fact that some viruses that infect the bacteria can accidentally copy the lac operon along with their own genes. Beckwith and his colleagues separated the twin strands of the DNA from two different viruses. The strands containing the lac operon had matching sequences, so they were able to rejoin themselves. Beckwith and his colleagues then added chemicals that destroyed single-stranded DNA, leaving behind only the double-stranded operon. For the first time in history someone had isolated genes.
On November 22, 1969, Beckwith met the press to announce the discovery. He let the world know he was deeply disturbed by what he had just done. If he could isolate genes from E. coli, how long would it take for someone else to figure out a sinister twist on his methods—a way to create a new plague or to engineer new kinds of human beings? “The steps do not exist now,” he said, “but it is not inconceivable that within not too long it could be used, and it becomes more and more frightening—especially when we see work in biology used by our Government in Vietnam and in devising chemical and biological weapons.”
Beckwith flashed across the front page of The New York Times and other newspapers, and then he was gone. The debate over the dangers of genetic engineering disappeared. Other scientists went on searching for new ways to manipulate genes without giving much thought to the danger. Scientists who studied human biology looked jealously at the tools Beckwith and others could use on E. coli. To study a single mouse gene, a scientist might need the DNA from hundreds of thousands of mice. As a result, they knew very little about how animal cells translated genes into proteins. They knew even less about the genes themselves—how many genes humans carry, for example, or the function of each one.
Paul Berg, a scientist at Stanford University, spent many years studying how E. coli builds molecules, and in the late 1960s he wondered if he could study animal cells in the same way. At the time, scientists were learning about a new kind of virus that permanently inserts itself into the chromosomes of animals. The virus was medically important because it could cause its host cells to replicate uncontrollably and form tumors. Berg recognized a similarity between these animal viruses and some of the viruses that infect E. coli. In the 1950s, scientists had learned how to turn E. coli’s viruses into ferries to carry genes from one host to another. Berg wanted to know whether animal viruses could be ferries as well.
Berg began to experiment with a cancer-causing monkey virus called SV40. He pondered how he might insert another gene into it. Eventually he decided he would need to cut open the circular chromosome of SV40 at a specific point. But he had no molecular knife that could make that particular cut.
As it happened, other scientists had just found the knife. In the 1960s, scientists had discovered E. coli’s restriction enzymes, which slice up foreign DNA by grabbing on to certain short sequences. One of those scientists was Herbert Boyer, a microbiologist at the University of California, San Francisco. Boyer gave Berg a supply of a restriction enzyme he had recently discovered, called EcoRI.
Berg and his colleagues used EcoRI to cut open SV40’s chromosome. At one end of SV40’s DNA they added DNA from a virus of E. coli called lambda. In order to fuse the two pieces of DNA together, Berg and his colleagues added to their ends some extra bases that would form bonds. When they were done, they had created a viral hybrid.
Since the hybrid carried the lambda virus’s genes for invading E. coli, Berg wondered whether it could invade the microbe. He asked one of his graduate students, Janet Mertz, to design an experiment. For Berg and Mertz, the experiment started out as yet another interesting question. But some who learned about their plans were filled with dread.
One of the first people to confront Berg with these worries was a bioethicist named Leon Kass. Like Berg, Kass had worked on E. coli, but he had become disillusioned by how fast scientific discoveries were being made and the lack of thought being given to their ethics. Kass warned Berg that manipulating genes could lead to moral quandaries. If scientists could insert genes in embryos, parents might pick out the traits they wanted in their children. They wouldn’t just repair genes that would cause sickle-cell anemia or other genetic disorders. They would look for ways to enhance even perfectly healthy children.
“Are we wise enough to be tampering with the balance of the gene pool?” Kass asked Berg.
Berg brushed off Kass’s warning, but when other virus experts began to question his plans, he stopped short. Mertz described to another researcher how she and Berg were going to create a sort of Russian doll with SV40 in lambda and lambda in E. coli. The researcher replied, “Well, it’s coli in people.”
If an SV40-carrying E. coli escaped from Berg’s laboratory, some scientists feared it might make its way into a human host. Once inside a person, it might multiply, spreading its cancer-causing viruses. No one could say whether it would do no harm or trigger a cancer epidemic. In the face of these uncertainties, Berg and Mertz decided to abandon the experiment.
“I didn’t want to be the person who went ahead and created a monster that killed a million people,” Mertz said later.
At the time, Berg’s lab was the only one in the world actively trying to do genetic engineering. The researchers’ methods were elaborate, tedious, and time-consuming. When they scrapped their SV40 experiment, they could be confident that no one would be able to immediately take up where they left off. But it would not be long before genetic engineering would become far easier—and thus far more controversial.
Berg and Boyer continued to study how EcoRI cuts DNA. They discovered that the enzyme does not make a clean slice. Instead, it leaves ragged fragments, with one strand of DNA extending farther than the other at each end. That dangling strand can spontaneously join another dangling strand also cut by EcoRI. The strands are, in essence, sticky. Berg and Boyer realized no tedious tacking on of extra DNA was necessary to join two pieces of DNA from different species. The molecules would do the hard work on their own.
Boyer soon took advantage of these sticky ends. Instead of viruses, he chose plasmids, those ringlets of DNA that bacteria trade. Working with the plasmid expert Stanley Cohen, Boyer cut apart two plasmids with EcoRI. Their sticky ends joined together, combining the plasmids into a single loop. Each plasmid carried genes that provided resistance to a different antibiotic, and when Boyer and Cohen inserted their new hybrid plasmid into E. coli, the bacteria could resist both drugs. And when one of these engineered microbes divided, both new E. coli carried copies of the same engineered plasmid. For the first time a living microbe carried genes intentionally combined by humans.
Once Boyer and Cohen had combined two E. coli plasmids, they turned to another species. Working with John Morrow of Stanford University, they cut DNA from an African clawed frog into fragments and inserted one into a plasmid, which they then inserted into E. coli. Now they had created a chimera that was part E. coli, part animal.
When Boyer described his chimeras at a conference in New Hampshire in 1973, the audience of scientists was shocked. None of them could say the experiments were safe. They sent a letter to the National Academy of Sciences to express their concern, and a conversation spread through scientific circles. What could scientists realistically hope to do with engineered E. coli? What were the plausible risks?
The possibilities sounded as outlandish as anything Haldane had dreamed of fifty years earlier. E. coli could make precious molecules, such as human insulin, which could treat diabetes. E. coli might acquire genes for breaking down cellulose, the tough fibers in plants. A person who swallowed cellulose-eating E. coli might be able to live on grass. Or maybe engineering E. coli would lead to disaster. A cellulose-digesting microbe might cause people to absorb too many calories and become hideously obese. Or perhaps it might rob people of the benefits of undigested roughage—including, perhaps, protection from cancer.
Paul Berg and thirteen other prominent scientists wrote a letter to the National Academy of Sciences in 1974 calling for a moratorium on experiments with transferred genes—also known as recombinant DNA—until scientists could agree on some guidelines. The first pass at those guidelines emerged from a meeting Berg organized in February 1975 at the Asilomar Conference Grounds on the California coast. Rather than calling for an outright ban on genetic engineering, the scientists advocated a ladder of increasingly strict controls. The greater the chance an experiment might cause harm, the more care scientists should take to prevent engineered organisms from escaping. Some particularly dangerous experiments, such as shuttling genes for powerful toxins into new hosts, ought not to be carried out at all. The National Institutes of Health followed up on the Asilomar meeting by forming a committee to set up official guidelines later that year.
To scientists such as Berg, these steps seemed reasonable. They had taken time to give genetic engineering some serious reflection, and they had decided that its risks could be managed. Genetic engineering was unlikely to trigger a new cancer epidemic, for example, because from childhood on people were already exposed to cancer-causing viruses. Many scientists concluded that E. coli K-12 had become so feeble after decades of laboratory luxury that it probably could not survive in the human gut. A biologist named H. William Smith announced at Asilomar that he had drunk a solution of E. coli K-12 and found no trace of it in his stool. But to be even more certain that no danger would come from genetic engineering, Roy Curtiss, a University of Alabama microbiologist, created a superfeeble strain that was a hundred million times weaker than K-12.
Other scientists did not feel as confident. Liebe Cavalieri, a biochemist at the Sloan-Kettering Institute in New York, published an essay in The New York Times Magazine called “New Strains of Life—or Death.” Below the headline was a giant portrait of E. coli embracing one another with their slender alien pili. Meet your new Frankenstein.
Soon the scientific critics were joined by politicians and activists. Congress opened hearings on genetic engineering, and representatives introduced a dozen bills calling for various levels of control. City politicians took action as well. The mayor of Cambridge, Massachusetts, Alfred Vellucci, held raucous hearings on Harvard’s entry into the genetic engineering game. The city banned genetic engineering altogether for months. Protesters waved signs at scientific conferences, and environmental groups filed lawsuits against the National Institutes of Health, accusing it of not looking into the environmental risks of genetic engineering.
Many critics were appalled that scientists would presume to judge how to handle the risks of genetic engineering on their own. “It was never the intention of those who might be called the Molecular Biology Establishment to take this issue to the general public to decide,” James Watson wrote frankly in 1981. The critics argued that the public had a right to decide how to manage the risk of genetic engineering because the public would have to cope with any harm that might come of it. Senator Edward Kennedy of Massachusetts complained that “scientists alone decided to impose a moratorium, and scientists alone decided to lift it.”
Some critics also questioned whether scientists could be objective about genetic engineering. It was in their interest to keep regulations as lax as possible because they would be able to get more research done in less time. “The lure of the Nobel Prize is a strong force motivating scientists in the field,” Cavalieri warned. Along with scientific glory came the prospect of riches. Corporations and investors were beginning to court molecular biologists, hoping to find commercial applications for genetic engineering. Financial interests might lead some to oversell the promise of genetic engineering and downplay its risks. Cetus Corporation, a company that recruited molecular biologists to serve on its board, made this astonishing prediction: “By the year 2000 virtually all the major human diseases will regularly succumb to treatment by disease-specific artificial proteins produced by specialized hybrid micro-organisms.”