But has “knowing” altered everything? Erika’s fears have been alleviated, but there is very little that can be done about the mutant genes or their effects on her muscles. In 2012, she tried the medicine Diamox, known to alleviate muscle twitching in general, and had a brief reprieve. There were eighteen nights of sleep—a lifetime’s worth for a teenager who had hardly experienced a full night’s sleep in her life—but the illness has relapsed. The tremors are back. The muscles are still wasting away. She is still in her wheelchair.
What if we could devise a prenatal test for this disease? Stephen Quake had just finished his talk on fetal genome sequencing—on “the genetics of the unborn.” It will soon become feasible to scan every fetal genome for all potential mutations and rank many of them in order of severity and penetrance. We do not know all the details of the nature of Erika’s genetic illness—perhaps, like some genetic forms of cancer, there are other, hidden “cooperative” mutations in her genome—but most geneticists suspect that she has only two mutations, both highly penetrant, causing her symptoms.
Should we consider allowing parents to fully sequence their children’s genomes and potentially terminate pregnancies with such known devastating genetic mutations? We would certainly eliminate Erika’s mutation from the human gene pool—but we would eliminate Erika as well. I will not minimize the enormity of Erika’s suffering, or that of her family—but there is, indubitably, a deep loss in that. To fail to acknowledge the depth of Erika’s anguish is to reveal a flaw in our empathy. But to refuse to acknowledge the price to be paid in this trade-off is to reveal, conversely, a flaw in our humanity.
A crowd was milling around Erika and her mother, and I walked down toward the beach, where sandwiches and drinks were being laid out. Erika’s talk had sounded a resonantly sobering note through a conference otherwise tinged with optimism: you could sequence genomes hoping to find match-made medicines to alleviate specific mutations, but that would be a rare outcome. Prenatal diagnosis and the termination of pregnancies still remained the simplest choice for such rare devastating diseases—but also ethically the most difficult to confront. “The more technology evolves, the more we enter unknown territory. There’s no doubt that we have to face incredibly tough choices,” Eric Topol, the conference organizer, told me. “In the new genomics, there are very few free lunches.”
Indeed, lunch had just ended. The bell chimed, and the geneticists returned to the auditorium to contemplate the future’s future. Erika’s mother wheeled her out of the conference center. I waved to her, but she did not notice me. As I entered the building, I saw her crossing the parking lot in her wheelchair, her scarf billowing in the wind behind her, like an epilogue.
I have chosen the three cases described here—Jane Sterling’s breast cancer, Rajesh’s bipolar disease, and Erika’s neuromuscular disease—because they span a broad spectrum of genetic diseases, and because they illuminate some of the most searing conundrums of genetic diagnosis. Sterling has an identifiable mutation in a single culprit gene (BRCA1) that leads to a common disease. The mutation has high penetrance—70 to 80 percent of carriers will eventually develop breast cancer—but the penetrance is incomplete (not 100 percent), and the precise form of the disease in the future, its timeline, and the extent of risk are unknown and perhaps unknowable. The prophylactic treatments—mastectomy, hormonal therapy—all entail physical and psychological anguish and carry risks in their own right.
Schizophrenia and bipolar disorder, in contrast, are illnesses caused by multiple genes, with much lower penetrance. No prophylactic treatments exist, and no cures. Both are chronic, relapsing diseases that shatter minds and splinter families. Yet the very genes that cause these illnesses can also, albeit in rare circumstances, potentiate a mystical form of creative urgency that is fundamentally linked to the illness itself.
And then there is Erika’s neuromuscular disease—a rare genetic illness caused by one or two changes in the genome—that is highly penetrant, severely debilitating, and incurable. A medical therapy is not inconceivable, but is unlikely ever to be found. If gene sequencing of the fetal genome is coupled to the termination of pregnancies (or the selective implantation of embryos screened for these mutations), then such genetic diseases might be identifiable and could potentially be eliminated from the human gene pool. In a small number of cases, gene sequencing might identify a condition that is potentially responsive to medical therapy, or to gene therapy in the future (in the fall of 2015, a fifteen-month-old toddler with weakness, tremors, progressive blindness, and drooling—incorrectly diagnosed as having an “autoimmune disease”—was referred to a genetics clinic at Columbia University. Gene sequencing revealed a mutation in a gene linked to vitamin metabolism. Supplemented with vitamin B2, for which she was severely deficient, the girl recovered much of her neurological function).
Sterling, Rajesh, and Erika are all “previvors.” Their future fates were latent in their genomes, yet the actual stories and choices of their previvorship could not be more varied. What do we do with this information? “My real résumé is in my cells,” says Jerome, the young protagonist of the sci-fi film GATTACA. But how much of a person’s genetic résumé can we read and understand? Can we decipher the kind of fate that is encoded within any genome in a usable manner? And under what circumstances can we—or should we—intervene?
Let’s turn to the first question: How much of the human genome can we “read” in a usable or predictive sense? Until recently, the capacity to predict fate from the human genome was limited by two fundamental constraints. First, most genes, as Richard Dawkins describes them, are not “blueprints” but “recipes.” They do not specify parts, but processes; they are formulas for forms. If you change a blueprint, the final product is changed in a perfectly predictable manner: eliminate a widget specified in the plan, and you get a machine with a missing widget. But the alteration of a recipe or formula does not change the product in a predictable manner: if you quadruple the amount of butter in a cake, the eventual effect is more complicated than just a quadruply buttered cake (try it; the whole thing collapses in an oily mess). By similar logic, you cannot examine most gene variants in isolation and decipher their influence on form and fate. That a mutation in the gene MECP2, whose normal function is to recognize chemical modifications to DNA, may cause a form of autism is far from self-evident (unless you understand how genes control the neurodevelopmental processes that make a brain).
The second constraint—possibly deeper in significance—is the intrinsically unpredictable nature of some genes. Most genes intersect with other triggers—environment, chance, behaviors, or even parental and prenatal exposures—to determine an organism’s form and function, and its consequent effects on its future. Most of these interactions, we have already discovered, are not systematic: they happen as a result of chance, and there is no method to predict or model them with certainty. These interactions place powerful limits on genetic determinism: the eventual effects of these gene-environment intersections can never be reliably presaged by the genetics alone. Indeed, recent attempts to use illnesses in one twin to predict future illnesses in the other have come up with only modest successes.
But even with these uncertainties, several predictive determinants in the human genome will soon become knowable. As we investigate genes and genomes more deftly, more comprehensively, and with more computational power, we should be able to “read” the genome more thoroughly—at least in a probabilistic sense. Currently, only highly penetrant single-gene mutations (Tay-Sachs disease, cystic fibrosis, sickle-cell anemia), or alterations in entire chromosomes (Down syndrome), are used in genetic diagnosis in clinical settings. But there is no reason that genetic diagnosis should be constrained to diseases caused by mutations in single genes or chromosomes. Nor, for that matter, is there any reason that “diagnosis” be restricted to disease. A powerful enough computer should be able to hack the understanding of a recipe: if you input an alteration, one should be able to compute its effect on the product.
By the end of this decade, permutations and combinations of genetic variants will be used to predict variations in human phenotype, illness, and destiny. Some diseases might never be amenable to such a genetic test, but perhaps the severest variants of schizophrenia or heart disease, or the most penetrant forms of familial cancer, say, will be predictable by the combined effect of a handful of mutations. And once an understanding of “process” has been built into predictive algorithms, the interactions between various gene variants could be used to compute ultimate effects on a whole host of physical and mental characteristics beyond disease alone. Computational algorithms could determine the probability of the development of heart disease or asthma or sexual orientation and assign a level of relative risk for various fates to each genome. The genome will thus be read not in absolutes, but in likelihoods—like a report card that does not contain grades but probabilities, or a résumé that does not list past experiences but future propensities. It will become a manual of previvorship.
In April 1990, as if to raise the stakes of human genetic diagnosis further, an article in the journal Nature announced the birth of a new technology that permitted genetic diagnosis to be performed on an embryo before implantation into a woman’s body.
The technique relies on a peculiar idiosyncrasy of human embryology. When an embryo is produced by in vitro fertilization (IVF), it is typically grown for several days in an incubator before being implanted into a woman’s womb. Bathed in a nutrient-rich broth in a moist incubator, the single-cell embryo divides to form a glistening ball of cells. At the end of three days, there are eight and then sixteen cells. Astonishingly, if you remove a few cells from that embryo, the remaining cells divide and fill in the gap of missing cells, and the embryo continues to grow normally as if nothing had happened. For a moment in our history, we are actually quite like salamanders or, rather, like salamanders’ tails—capable of complete regeneration even after a fourth of us has been cut away.
A human embryo can thus be biopsied at this early stage, and the few extracted cells used for genetic tests. Once the tests have been completed, cherry-picked embryos possessing the correct genes can be implanted. With some modifications, even oocytes—a woman’s eggs—can be genetically tested before fertilization. The technique is called “preimplantation genetic diagnosis,” or PGD. From a moral standpoint, preimplantation genetic diagnosis achieves a seemingly impossible sleight of hand. If you selectively implant the “correct” embryos and cryopreserve the others without killing them, you can select fetuses without aborting them. It is positive and negative eugenics in one go, without the concomitant death of a fetus.
Preimplantation genetic diagnosis was first used to select embryos by two English couples in the winter of 1989, one with a family history of a severe X-linked mental retardation, and another with a history of an X-linked immunological syndrome—both incurable genetic diseases only manifest in male children. The embryos were selected to be female. Female twins were born to both couples; as predicted, both sets of twins were disease-free.
The ethical vertigo induced by those two first cases was so acute that several countries moved immediately to place constraints on the technology. Perhaps understandably, among the first countries to put the most stringent limits on PGD were Germany and Austria—nations scarred by their legacies of racism, mass murder, and eugenics. In India, parts of which are home to some of the most blatantly sexist subcultures in the world, attempts to use PGD to “diagnose” the gender of a child were reported as early as 1995. Any form of sexual selection for male children was, and still is, prohibited by the Indian government, and PGD for gender selection was soon banned. Yet the government ban seems to have hardly staved off the problem: readers from India and China might note, with some shame and sobriety, that the largest “negative eugenics” project in human history was not the systemic extermination of Jews in Nazi Germany or Austria in the 1930s. That ghastly distinction falls on India and China, where more than 10 million female children are missing from adulthood because of infanticide, abortion, and neglect of female children. Depraved dictators and predatory states are not an absolute requirement for eugenics. In the case of India, perfectly “free” citizens, left to their own devices, are capable of enacting grotesque eugenic programs—against females, in this case—without any state mandate.
Currently, PGD can be used to select against embryos carrying monogenic diseases, such as cystic fibrosis, Huntington’s disease, and Tay-Sachs disease among many others. But in principle, nothing limits genetic diagnosis to monogenic diseases. It should not take a film such as GATTACA to remind us how deeply destabilizing that idea might be. We have no models or metaphors to apprehend a world in which a child’s future is parsed into probabilities, or a fetus is diagnosed before birth, or becomes a “previvor” even before conception. The word diagnosis arises from the Greek “to know apart,” but “knowing apart” has moral and philosophical consequences that lie far beyond medicine and science. Throughout our history, technologies of knowing apart have enabled us to identify, treat, and heal the sick. In their benevolent form, these technologies have allowed us to preempt illness through diagnostic tests and preventive measures, and to treat diseases appropriately (e.g., the use of the BRCA1 gene to preemptively treat breast cancer). But they have also enabled stifling definitions of abnormalcy, partitioned the weak from the strong, or led, in their most gruesome incarnations, to the sinister excesses of eugenics. The history of human genetics has reminded us, again and again, that “knowing apart” often begins with an emphasis on “knowing,” but often ends with an emphasis on “parting.” It is not a coincidence that the vast anthropometric projects of Nazi scientists—the obsessive measurement of jaw sizes, head shapes, nose lengths, and heights—were also once legitimized as attempts to “know humans apart.”
As the political theorist Desmond King puts it, “One way or another, we are all going to be dragged into the regime of ‘gene management’ that will, in essence, be eugenic. It will all be in the name of individual health rather than for the overall fitness of the population, and the managers will be you and me, and our doctors and the state. Genetic change will be managed by the invisible hand of individual choice, but the overall result will be the same: a coordinated attempt to ‘improve’ the genes of the next generation on the way.”
Until recently, three unspoken principles have guided the arena of genetic diagnosis and intervention. First, diagnostic tests have largely been restricted to gene variants that are singularly powerful determinants of illness—i.e., highly penetrant mutations, where the likelihood of developing the disease is close to 100 percent (Down syndrome, cystic fibrosis, Tay-Sachs disease). Second, the diseases caused by these mutations have generally involved extraordinary suffering or fundamental incompatibilities with “normal” life. Third, justifiable interventions—the decision to abort a child with Down syndrome, say, or intervene surgically on a woman with a BRCA1 mutation—have been defined through social and medical consensus, and all interventions have been governed by complete freedom of choice.
The three sides of the triangle can be envisioned as moral lines that most cultures have been unwilling to transgress. The abortion of an embryo carrying a gene with, say, only a 10 percent chance of developing cancer in the future violates the injunction against intervening on low-penetrance mutations. Similarly, a state-mandated medical procedure on a genetically ill person without the subject’s consent (or parental consent in the case of a fetus) crosses the boundaries of freedom and noncoercion.
Yet it can hardly escape our attention that these parameters are inherently susceptible to the logic of self-reinforcement. We determine the definition of “extraordinary suffering.” We demarcate the boundaries of “normalcy” versus “abnormalcy.” We make the medical choices to intervene. We determine the nature of “justifiable interventions.” Humans endowed with certain genomes are responsible for defining the criteria to define, intervene on, or even eliminate other humans endowed with other genomes. “Choice,” in short, seems like an illusion devised by genes to propagate the selection of similar genes.
Even so, this triangle of limits—high-penetrance genes, extraordinary suffering, and noncoerced, justifiable interventions—has proved to be a useful guideline for acceptable forms of genetic interventions. But these boundaries are being breached. Take, for instance, a series of startlingly provocative studies that used a single gene variant to drive social-engineering choices. In the late 1990s, a gene variant called 5HTTLPR, which encodes a molecule that modulates signaling between certain neurons in the brain, was found to be associated with the response to psychic stress. The gene comes in two forms or alleles—a long variant and a short variant. The short variant, called 5HTTLPR/short, is carried by about 40 percent of the population and seems to produce significantly lower levels of the protein. The short variant has been repeatedly associated with anxious behavior, depression, trauma, alcoholism, and high-risk behaviors. The link is not strong, but it is broad: the short allele has been associated with increased suicidal risk among German alcoholics, increased depression in American college students, and a higher rate of PTSD among deployed soldiers.
In 2010, a team of researchers launched a research study, called the Strong African American Families project, or SAAF, in an impoverished rural belt in Georgia. It is a startlingly bleak place overrun by delinquency, alcoholism, violence, mental illness, and drug use. Abandoned clapboard houses with broken windows dot the landscape; crime abounds; vacant parking lots are strewn with hypodermic needles. Half the adults lack a high school education, and nearly half the families have single mothers.