This pattern of inheritance is more commonly observed in schizophrenia. That identical twins share only a 50 percent concordance—i.e., if one twin is affected, then the other twin is affected only 50 percent of the time—clearly demonstrates that some other triggers (environmental factors or chance events) are required to tip the predisposition over the edge. But when a child of a schizophrenic parent is adopted at birth by a nonschizophrenic family, the child still has a 15 to 20 percent risk of developing the illness—about twentyfold higher than the general population—demonstrating that genetic influences can be powerful and autonomous despite enormous variations in environments. These patterns strongly suggest that schizophrenia is a complex, polygenic illness, involving multiple variants, multiple genes, and potential environmental or chance triggers. As with cancer and other polygenic diseases, then, a gene-by-gene approach is unlikely to unlock the physiology of schizophrenia.
Populist anxieties about genes, mental illness, and crime were fanned further with the publication in the summer of 1985 of Crime and Human Nature: The Definitive Study of the Causes of Crime, an incendiary book written by James Q. Wilson, a political scientist, and Richard Herrnstein, a behavioral psychologist. Wilson and Herrnstein argued that particular forms of mental illness—most notably schizophrenia, especially in its violent, disruptive form—were highly prevalent among criminals, likely to be genetically ingrained, and likely to be the cause of criminal behavior. Addiction and violence also had strong genetic components. The hypothesis seized the popular imagination. Postwar academic criminology had been dominated by “environmental” theories of crime—i.e., criminals were the products of bad influences: “bad friends, bad neighborhoods, bad labels.” Wilson and Herrnstein acknowledged these factors, but added a fourth, and most controversial, one: “bad genes.” The soil was not contaminated, they suggested; the seed was. Crime and Human Nature steamrolled into a major media phenomenon: twenty major news outlets—the New York Times, Newsweek, and Science among them—reviewed or featured it. Time reinforced its essential message in its headline: “Are Criminals Born, Not Made?” Newsweek’s headline was even more blunt: “Criminals Born and Bred.”
Wilson and Herrnstein’s book was met with a barrage of criticism. Even die-hard believers in the genetic theory of schizophrenia had to admit that the etiology of the illness was largely unknown, that acquired influences had to play a major triggering role (hence the 50—not 100—percent concordance among identical twins), and that the vast majority of schizophrenics lived in the terrifying shadow of their illness but had no history of criminality whatsoever.
But to a public frothing with concern about violence and crime in the eighties, the idea that the human genome might contain the answers not just to medical illnesses, but to social maladies such as deviance, alcoholism, violence, moral corruption, perversion, or addiction, was potently seductive. In an interview in the Baltimore Sun, a neurosurgeon wondered if the “crime-prone” (such as Huberty) could be identified, quarantined, and treated before they had committed crimes—i.e., via genetic profiling of precriminals. A psychiatric geneticist commented on the impact that identifying such genes might have on the public discourse around crime, responsibility, and punishment. “The link [to genetics] is quite clear. . . . We would be naïve not to think that one aspect of [curing crime] will be biological.”
Set against this monumental backdrop of hype and expectation, the first conversations about human genome sequencing were remarkably deflating. In the summer of 1984, Charles DeLisi, a science administrator from the Department of Energy (DOE), convened a meeting of experts to evaluate the technical feasibility of human genome sequencing. Since the early 1980s, DOE researchers had been investigating the effects of radiation on human genes. The Hiroshima and Nagasaki bombings of 1945 had exposed hundreds of thousands of Japanese citizens to varying doses of radiation, including twelve thousand surviving children, now in their forties and fifties. How many mutations had occurred in these children, in what genes, and over what time? Since radiation-induced mutations would likely be randomly scattered through the genome, a gene-by-gene search would be futile. In December 1984, another meeting of scientists was called to evaluate whether whole-genome sequencing might be used to detect genetic alterations in radiation-exposed children. The conference was held at Alta, in Utah—the same mountain town where Botstein and Davis had originally conceived the idea of mapping human genes using linkage and polymorphisms.
On the surface, the Alta meeting was a spectacular failure. Scientists realized that the sequencing technology available in the mid-1980s was nowhere close to being able to map mutations across a human genome. But the meeting was a crucial platform to jump-start a conversation about comprehensive gene sequencing. A flurry of meetings on genome sequencing followed—in Santa Cruz in May 1985 and in Santa Fe in March 1986. In the late spring of 1986, James Watson convened perhaps the most decisive of these meetings at Cold Spring Harbor, provocatively titling it “The Molecular Biology of Homo sapiens.” As with Asilomar, the serenity of the campus—on a placid, crystalline bay, with rolling hills tipping into the water—contrasted with the fervent energy of the discussions.
A host of new studies presented at the meeting suddenly made genome sequencing seem within technological reach. The most important technical breakthrough, perhaps, came from Kary Mullis, a biochemist studying gene replication. To sequence genes, it is crucial to have enough DNA as starting material. A single bacterial cell can be grown into hundreds of millions of cells, thereby supplying abundant amounts of bacterial DNA for sequencing. But it is difficult to grow hundreds of millions of human cells. Mullis had discovered an ingenious shortcut. He made a copy of a human gene in a test tube using DNA polymerase, then used that copy to make copies of the copy, then copied the multiple copies for dozens of cycles. Each cycle of copying amplified the DNA, resulting in an exponential increase in the yield of a gene. The technique was eventually called the polymerase chain reaction, or PCR, and would become crucial for the Human Genome Project.
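The arithmetic behind that exponential increase is simple doubling: if every template is copied in every cycle, then n cycles yield roughly 2^n copies of the starting fragment. A minimal sketch of that idealized calculation, written here in Python as an illustration only (real reactions fall short of perfect doubling and eventually plateau):

    # Idealized PCR arithmetic: assuming every template is copied in every
    # cycle, the copy number doubles each round, so n cycles yield 2**n copies.
    def pcr_yield(starting_copies: int, cycles: int) -> int:
        """Return the idealized number of DNA copies after a given number of cycles."""
        return starting_copies * 2 ** cycles

    # From a single template, thirty cycles already give about a billion copies.
    for n in (10, 20, 30):
        print(f"{n} cycles -> {pcr_yield(1, n):,} copies")

From a single starting molecule, then, thirty cycles suffice in principle to produce on the order of a billion copies, ample material for sequencing.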
Eric Lander, a mathematician turned biologist, told the audience about new mathematical methods to find genes related to complex, multigenic diseases. Leroy Hood, from Caltech, described a semiautomated machine that could speed up Sanger’s sequencing method by ten- or twentyfold.
Earlier, Walter Gilbert, the DNA-sequencing pioneer, had prepared a back-of-the-napkin calculation of the costs and personnel involved. To sequence all 3 billion base pairs of human DNA, Gilbert estimated, would take about fifty thousand person-years and cost around $3 billion—one dollar per base. As Gilbert, with characteristic panache, strode across the floor to inscribe the number on a chalkboard, an intense debate broke out in the audience. “Gilbert’s number”—which would turn out to be startlingly accurate—had reduced the genome project to tangible realities. Indeed, put in perspective, the cost was not even particularly large: at its peak, the Apollo program had employed nearly four hundred thousand people, with a total cumulative cost of about $100 billion. If Gilbert was right, the human genome could be had for less than one-thirtieth the cost of the moon landing. Sydney Brenner later joked that the sequencing of the human genome would perhaps ultimately be limited not by cost or technology, but only by the severe monotony of its labor. Perhaps, he speculated, genome sequencing should be doled out as a punishment to criminals and convicts—1 million bases for robbery, 2 million for homicide, 10 million for murder.
As dusk fell on the bay that evening, Watson spoke to several scientists about an unfolding personal crisis of his. On May 27, the night before the conference, his fifteen-year-old son, Rufus Watson, had escaped from a psychiatric facility in White Plains. He was later found wandering in the woods near the train tracks, captured, and brought back to the facility. A few months earlier, Rufus had tried to break a window at the World Trade Center to jump off the building. He had been diagnosed with schizophrenia. To Watson, a firm believer in the genetic basis for the disease, the Human Genome Project had come home—literally. There were no animal models for schizophrenia, nor any obviously linked polymorphisms that would allow geneticists to find the relevant genes. “The only way to give Rufus a life was to understand why he was sick. And the only way we could do that was to get the genome.”
But which genome “to get”? Some scientists, including Sulston, advocated a graded approach—starting with simple organisms, such as baker’s yeast, the worm, or the fly, and then scaling the ladder of complexity and size to the human genome. Others, such as Watson, wanted to leap directly into the human genome. After a prolonged internal debate, the scientists reached a compromise. The sequencing of the genomes of simple organisms, such as worms and flies, would begin first. These projects would carry the names of their respective organisms: the Worm Genome Project, the Fruit Fly Genome Project—and they would fine-tune the technology of gene sequencing. The sequencing of human genes would continue in parallel. The lessons learned from simple genomes would be applied to the much larger and more complex human genome. This larger endeavor—the comprehensive sequencing of the entire human genome—was termed the Human Genome Project.
The NIH and DOE, meanwhile, jostled to control the Human Genome Project. By 1989, after several congressional hearings, a second compromise was reached: the National Institutes of Health would act as the official “lead agency” of the project, with the DOE contributing resources and strategic management. Watson was chosen as its head. International collaborators were swiftly added: the Medical Research Council of the United Kingdom and the Wellcome Trust joined the effort. In time, French, Japanese, Chinese, and German scientists would join the Genome Project.
In January 1989, a twelve-member council of advisers met in a conference room in Building 31 on the far corner of the NIH campus in Bethesda. The council was chaired by Norton Zinder, the geneticist who had helped draft the Asilomar moratorium. “Today we begin,” Zinder announced. “We are initiating an unending study of human biology. Whatever it’s going to be, it will be an adventure, a priceless endeavor. And when it’s done, someone else will sit down and say, ‘It’s time to begin.’ ”
On January 28, 1983, on the eve of the launch of the Human Genome Project, Carrie Buck died in a nursing home in Waynesboro, Virginia. She was seventy-six years old. Her birth and death had bookended the near century of the gene. Her generation had borne witness to the scientific resurrection of genetics, its forceful entry into public discourse, its perversion into social engineering and eugenics, its postwar emergence as the central theme of the “new” biology, its impact on human physiology and pathology, its formidable power to explain illness, and its inevitable intersection with questions of fate, identity, and choice. She had been one of the earliest victims of the misunderstandings of a powerful new science. And she had watched that science transform our understanding of medicine, culture, and society.
What of her “genetic imbecility”? In 1930, three years after her Supreme Court–mandated sterilization, Carrie Buck was released from the Virginia State Colony and sent to work with a family in Bland County, Virginia. Carrie Buck’s only daughter, Vivian Dobbs—the child who had been examined by a court and declared an “imbecile”—died of enterocolitis in 1932. During the eight-odd years of her life, Vivian had performed reasonably well in school. In Grade 1B, for instance, she received A’s and B’s in deportment and spelling, and a C in mathematics, a subject that she had always struggled with. In April 1931, she was placed on the honor roll. What remains of the school report cards suggests a cheery, pleasant, happy-go-lucky child whose performance was no better, and no worse, than that of any other schoolchild. Nothing in Vivian’s story bears even a remote suggestion of an inherited propensity for mental illness or imbecility—the diagnosis that had sealed Carrie Buck’s fate in court.
* * *
I. The twisted intellectual journey, with its false leads, exhausting trudges, and inspired shortcuts, that ultimately revealed that cancer was caused by the corruption of endogenous human genes deserves a book in its own right.
In the 1970s, the reigning theory of carcinogenesis was that all, or most, cancers were caused by viruses. Pathbreaking experiments performed by several scientists, including Harold Varmus and J. Michael Bishop at UCSF, revealed, surprisingly, that these viruses typically caused cancer by tampering with cellular genes—called proto-oncogenes. The vulnerabilities, in short, were already present within the human genome. Cancer occurs when these genes are mutated, thereby unleashing dysregulated growth.
The Geographers
So Geographers in Afric-maps,
With Savage-Pictures fill their Gaps;
And o’er uninhabitable Downs
Place Elephants for want of Towns.
—Jonathan Swift, “On Poetry”
More and more, the Human Genome Project, supposedly one of mankind’s noblest undertakings, is resembling a mud-wrestling match.
—Justin Gillis, 2000
It is fair to say that the first surprise for the Human Genome Project had nothing to do with genes. In 1989, as Watson, Zinder, and their colleagues were gearing up to launch the Genome Project, a little-known neurobiologist at NIH, Craig Venter, proposed a shortcut to genome sequencing.
Pugnacious, single-minded, and belligerent, a reluctant student with middling grades, a surfing and sailing addict, and a former serviceman in the Vietnam War, Venter had an ability to lunge headlong into unknown projects. He had trained in neurobiology and had spent much of his scientific life studying adrenaline. In the mid-eighties, working at the NIH, Venter had become interested in sequencing genes expressed in the human brain. In 1986, he had heard of Leroy Hood’s rapid-sequencing machine and rushed to buy an early version for his laboratory. When it arrived, he called it “my future in a crate.” He had an engineer’s tinkering hands, and a biochemist’s love of mixing solutions. Within months, Venter had become an expert in rapid genome sequencing using the semiautomated sequencer.
Venter’s strategy for genome sequencing relied on a radical simplification. While the human genome contains genes, of course, the vast majority of the genome is devoid of genes. The enormous stretches of DNA between genes, called intergenic DNA, are somewhat akin to the long stretches of highway between Canadian towns. And as Phil Sharp and Richard Roberts had demonstrated, a gene is itself broken up into segments, with long spacers, called introns, interposed between the protein-coding segments.
Intergenic DNA and introns—spacers between genes and stuffers within genes—do not encode any protein information.I Some of these stretches contain information to regulate and coordinate the expression of genes in time and space; they encode the on and off switches appended to genes. Other stretches have no known function. The structure of the human genome can thus be likened to a sentence that reads—
This . . . . . . is the . . . . . . str . . . uc . . . . . . ture . . . , , , . . . of . . . your . . . ( . . . gen . . . ome . . . ) . . .
—where the words correspond to the genes, the ellipses correspond to the spacers and stuffers, and the occasional punctuation marks demarcate the regulatory sequences of genes.
Venter’s first shortcut was to ignore the spacers and stuffers of the human genome. Introns and intergenic DNA did not carry protein information, he reasoned, so why not focus on the “active,” protein-encoding parts? And—piling shortcut on shortcut—he proposed that even these active parts could be assessed faster still if only fragments of genes were sequenced. Convinced that this fragmented-gene approach would work, Venter had begun to sequence hundreds of such gene fragments from brain tissue.
To continue our analogy between genomes and sentences in English, it was as if Venter had decided to find shards of words in a sentence—struc, your, and geno—in the human genome. He might not learn the content of the entire sentence with this method, he knew, but perhaps he could deduce enough from the shards to understand the crucial elements of human genes.
Watson was appalled. Venter’s “gene-fragment” strategy was indubitably faster and cheaper, but to many geneticists, it was also sloppy and incomplete, since it produced only fragmentary information about the genome.II The conflict was deepened by an unusual development. In the summer of 1991, as Venter’s group began to dredge up sequences of human gene fragments from the brain, the NIH technology transfer office contacted Venter about patenting the novel gene fragments. For Watson, the dissonance was embarrassing: now, it seemed, one arm of the NIH was filing for exclusive rights to the same information that another arm was trying to discover and make freely available.
But by what logic could genes—or, in Venter’s case, “active” fragments of genes—be patented? At Stanford, Boyer and Cohen, recall, had patented a method to “recombine” pieces of DNA to create genetic chimeras. Genentech had patented a process to express proteins such as insulin in bacteria. In 1984, Amgen had filed a patent for the isolation of the blood-production hormone erythropoietin using recombinant DNA—but even that patent, carefully read, involved a scheme for the production and isolation of a distinct protein with a distinct function. No one had ever patented a gene, or a piece of genetic information, for its own sake. Was a human gene not like any other body part—a nose or the left arm—and therefore fundamentally unpatentable? Or was the discovery of new genetic information so novel that it would merit ownership and patentability? Sulston, for one, was firmly opposed to the idea of gene patents. “Patents (or so I had believed) are designed to protect inventions,” he wrote. “There was no ‘invention’ involved in finding [gene fragments] so how could they be patentable?” “It’s a quick and dirty land grab,” one researcher wrote dismissively.
The controversy around Venter’s gene patents reached an even higher pitch because the gene fragments were being sequenced at random, without the ascription of any function to most of the genes. Since Venter’s approach often resulted in incomplete shards of genes being sequenced, the nature of the information was necessarily garbled. At times, the shards were long enough to deduce the function of a gene—but more commonly, no real function could be ascribed to the fragments. “Could you patent an elephant by describing its tail? What about patenting an elephant by describing three discontinuous parts of its tail?” Eric Lander argued. At a congressional hearing on the genome project, Watson erupted with vehemence: “virtually any monkey” could generate such fragments, he argued. Walter Bodmer, the English geneticist, warned that if the Americans granted gene-fragment patents to Venter, the British would start their own rival patenting effort. Within weeks, the genome would be balkanized—carved up into a thousand territorial colonies carrying American, British, and German flags.