Technology, I said before, is most powerful when it enables transitions—between linear and circular motion (the wheel), or between real and virtual space (the Internet). Science, in contrast, is most powerful when it elucidates rules of organization—laws—that act as lenses through which to view and organize the world. Technologists seek to liberate us from the constraints of our current realities through those transitions. Science defines those constraints, drawing the outer limits of the boundaries of possibility. Our greatest technological innovations thus carry names that claim our prowess over the world: the engine (from ingenium, or “ingenuity”) or the computer (from computare, or “reckoning together”). Our deepest scientific laws, in contrast, are often named after the limits of human knowledge: uncertainty, relativity, incompleteness, impossibility.
Of all the sciences, biology is the most lawless; there are few rules to begin with, and even fewer rules that are universal. Living beings must, of course, obey the fundamental rules of physics and chemistry, but life often exists on the margins and interstices of these laws, bending them to their near-breaking limit. The universe seeks equilibriums; it prefers to disperse energy, disrupt organization, and maximize chaos. Life is designed to combat these forces. We slow down reactions, concentrate matter, and organize chemicals into compartments; we sort laundry on Wednesdays. “It sometimes seems as if curbing entropy is our quixotic purpose in the universe,” James Gleick wrote. We live in the loopholes of natural laws, seeking extensions, exceptions, and excuses. The laws of nature still mark the outer boundaries of permissibility—but life, in all its idiosyncratic, mad weirdness, flourishes by reading between the lines. Even the elephant cannot violate the law of thermodynamics—although its trunk, surely, must rank as one of the most peculiar means of moving matter using energy.
The circular flow of biological information—

[figure: the circular flow of genetic information: DNA encodes RNA, RNA encodes proteins, and proteins, in turn, regulate the expression of DNA and RNA]

—is, perhaps, one of the few organizing rules in biology. Certainly the directionality of this flow of information has exceptions (retroviruses can pedal “backward” from RNA to DNA). And there are yet-undiscovered mechanisms in the biological world that might change the order or the components of information flow in living systems (RNA, for instance, is now known to be able to influence the regulation of genes). But the circular flow of biological information has been chalked out conceptually.
This flow of information is the closest thing that we might have to a biological law. When the technology to manipulate this law is mastered, we will move through one of the most profound transitions in our history. We will learn to read and write our selves, ourselves.
But before we leap into the genome’s future, allow a quick diversion to its past. We do not know where genes come from, or how they arose. Nor can we know why this method of information transfer and data storage was chosen over all other possible methods in biology. But we can try to reconstruct the primordial origin of genes in a test tube. At Harvard, a soft-spoken biochemist named Jack Szostak has spent over two decades trying to create a self-replicating genetic system in a test tube—thereby reconstructing the origin of genes.
Szostak’s experiment followed the work of Stanley Miller, the visionary chemist who had attempted to brew a “primordial soup” by mixing basic chemicals thought to have existed in the ancient atmosphere. Working at the University of Chicago in the 1950s, Miller had sealed a glass flask and blown methane, ammonia, and hydrogen into the flask through a series of vents; free oxygen, notably, was absent, as it was from the reducing atmosphere of the early earth. He had added hot steam and rigged an electrical spark to simulate bolts of lightning, then heated and cooled the flask cyclically to recapitulate the volatile conditions of the ancient world. Fire and brimstone, heaven and hell, air and water, were condensed in a beaker.
Three weeks later, no organism had crawled out of Miller’s flask. But in the raw mixture of methane, ammonia, water, hydrogen, heat, and electricity, Miller had found traces of amino acids—the building units of proteins—and trace amounts of the simplest sugars. Subsequent variations of the Miller experiment have added clay, basalt, and volcanic rock and produced the rudiments of lipids, fats, and even the chemical building blocks of RNA and DNA.
Szostak believes that genes emerged out of this soup through a fortuitous meeting between two unlikely partners. First, lipids created within the soup coalesced with each other to form micelles—hollow spherical membranes, somewhat akin to soap bubbles, that trap liquid inside and resemble the outer layers of cells (certain fats, mixed together in watery solutions, tend to naturally coalesce into such bubbles). In lab experiments, Szostak has demonstrated that such micelles can behave like protocells: if you add more lipids to them, these hollow “cells” begin to grow in size. They expand, move about, and extend thin extrusions that resemble the ruffling membranes of cells. Eventually they divide, forming two micelles out of one.
Second, while self-assembling micelles were being formed, chains of RNA arose from the joining together of nucleotides (A, C, G, U, or their chemical ancestors) to form strands. The vast bulk of these RNA chains had no reproductive capability: they had no ability to make copies of themselves. But among the billions of nonreplicating RNA molecules, one was accidentally created with the unique capacity to build an image of itself—or rather, to generate a copy using its mirror image (RNA and DNA, recall, have inherent chemical designs that enable the generation of mirror-image molecules). This RNA molecule, incredibly, possessed the capacity to gather nucleotides from a chemical mix and string them together to form a new RNA copy. It was a self-replicating chemical.
The next step was a marriage of convenience. Somewhere on earth—Szostak thinks it might have been on the edge of a pond or a swamp—a self-copying RNA molecule collided with a self-replicating micelle. It was, conceptually speaking, an explosive affair: the two molecules met, fell in love, and launched a long conjugal entanglement. The self-replicating RNA began to inhabit the dividing micelle. The micelle isolated and protected the RNA, enabling special chemical reactions within its secure bubble. The RNA molecule, in turn, began to encode information that was advantageous to the self-propagation not just of itself but of the entire RNA-micelle unit. Over time, the information encoded in the RNA-micelle complex allowed it to propagate more such RNA-micelle complexes.
“It is relatively easy to see how RNA-based protocells may have then evolved,” Szostak wrote. “Metabolism could have arisen gradually, as . . . [the protocells learned to] synthesize nutrients internally from simpler and more abundant starting materials. Next, the organisms might have added protein synthesis to their bag of chemical tricks.” RNA “proto-genes” may have learned to coax amino acids to form chains and thus build proteins—versatile, molecular machines that could make metabolism, self-propagation, and information transfer vastly more efficient.
When, and why, did discrete “genes”—modules of information—appear on a strand of RNA? Did genes exist in their modular form at the very beginning, or was there an intermediate or alternative form of information storage? Again, these questions are fundamentally unanswerable, but perhaps information theory can provide a crucial clue. The trouble with continuous, nonmodular information is that it is notoriously hard to manage. It tends to diffuse; it tends to become corrupted; it tends to tangle, dilute, and decay. Pulling one end causes another to unspool. If information bleeds into information, it runs a much greater risk of distortion: think of a vinyl record that acquires a single dent in the middle. Information that is “digitized,” in contrast, is much easier to repair and recover. We can access and change one word in a book without having to reconfigure the entire library. Genes may have appeared for the same reason: discrete, information-bearing modules in one strand of RNA were used to encode instructions to fulfill discrete and individual functions.
The discontinuous nature of information would have carried an added benefit: a mutation could affect one gene, and only one gene, leaving the other genes unaffected. Mutations could now act on discrete modules of information rather than disrupting the function of the organism as a whole—thereby accelerating evolution. But that benefit came with a concomitant liability: too much mutation, and the information would be damaged or lost. What was needed, perhaps, was a backup copy—a mirror image to protect the original or to restore the prototype if damaged. Perhaps this was the ultimate impetus to create a double-stranded nucleic acid. The data in one strand would be perfectly reflected in the other and could be used to restore anything damaged; the yin would protect the yang. Life thus invented its own hard drive.
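The logic of this double-stranded backup is the logic of any mirrored archive: because each base determines its partner (A pairs with T, G with C), either strand can be rebuilt from the other. A minimal sketch in Python, with invented sequences and a "?" marker standing in for a damaged base:

```python
# Complementary base-pairing: each base determines its partner on the other strand.
PAIR = {"A": "T", "T": "A", "G": "C", "C": "G"}

def complement(strand):
    """Return the complementary strand, base by base."""
    return "".join(PAIR[base] for base in strand)

def repair(damaged, backup):
    """Restore damaged positions (marked '?') using the intact partner strand."""
    return "".join(
        PAIR[partner] if base == "?" else base
        for base, partner in zip(damaged, backup)
    )

original = "ATGCGTA"
partner = complement(original)   # "TACGCAT"
damaged = "AT?CG?A"              # two bases lost to mutation
restored = repair(damaged, partner)
print(restored)                  # ATGCGTA: the yin restores the yang
```

Cellular repair enzymes work similarly in spirit: they excise a damaged base and use the complementary strand as the template for the patch.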
In time, this new copy—DNA—would become the master copy. DNA was an invention of the RNA world, but it soon overran RNA as a carrier of genes and became the dominant bearer of genetic information in living systems.IV Yet another ancient myth—of the child consuming its father, of Cronus overthrown by Zeus—is etched into the history of our genomes.
* * *
I. Gurdon’s technique—of evacuating the egg and inserting a fully fertilized nucleus—has already found a novel clinical application. Some women carry mutations in mitochondrial genes—i.e., genes that are carried within mitochondria, the energy-producing organelles that live inside cells. All human embryos, recall, inherit their mitochondria exclusively from the female egg—i.e., from their mothers (the sperm does not contribute any mitochondria). If the mother carries a mutation in a mitochondrial gene, then all her children might be affected by that mutation; mutations in these genes, which often affect energy metabolism, can cause muscle wasting, heart abnormalities, and death. In a provocative series of experiments in 2009, geneticists and embryologists proposed a daring new method to tackle these maternal mitochondrial mutations. After the egg had been fertilized by the father’s sperm, the nucleus was injected into an egg with intact (“normal”) mitochondria from a normal donor. Since the mitochondria were derived from the donor, the maternal mitochondrial genes were intact, and the babies born would no longer carry the maternal mutations. Humans born from this procedure thus have three parents. The fertilized nucleus, formed by the union of the “mother” and “father” (parents 1 and 2), contributes virtually all the genetic material. The third parent—i.e., the egg donor—contributes only mitochondria, and the mitochondrial genes. In 2015, after a protracted national debate, Britain legalized the procedure, and the first cohorts of “three-parent children” are now being born. These children represent an unexplored frontier of human genetics (and of the future). Obviously, no comparable animals exist in the natural world.
II. The idea that histones might regulate genes had originally been proposed by Vincent Allfrey, a biochemist at Rockefeller University in the 1960s. Three decades later—and, as if to close a circle, at that very same institution—Allis’s experiments would vindicate Allfrey’s “histone hypothesis.”
III. The permanence of epigenetic marks, and the nature of memory recorded by these marks, has been questioned by the geneticist Mark Ptashne. In Ptashne’s view, shared by several other geneticists, master-regulatory proteins—previously described as molecular “on” and “off” switches—orchestrate the activation or repression of genes. Epigenetic marks are laid down as a consequence of gene activation or repression, and may play an accompanying role in regulating gene activation and repression, but the main orchestration of gene expression occurs by virtue of these master-regulatory proteins.
IV. Some viruses still carry their genes in the form of RNA.
PART SIX
* * *
POST-GENOME
The Genetics of Fate and Future
(2015– . . .)
Those who promise us paradise on earth never produced anything but a hell.
—Karl Popper
It’s only we humans who want to own the future, too.
—Tom Stoppard, The Coast of Utopia
The Future of the Future
Probably no DNA science is at once as hopeful, controversial, hyped, and even as potentially dangerous as the discipline known as gene therapy.
—Gina Smith, The Genomics Age
Clear the air! Clean the sky! Wash the wind! Take the stone from the stone, take the skin from the arm, take the muscle from the bone, and wash them. Wash the stone, wash the bone, wash the brain, wash the soul, wash them wash them!
—T. S. Eliot, Murder in the Cathedral
Let us return, for a moment, to a conversation on the ramparts of a fort. It is the late summer of 1972. We are in Sicily, at a scientific conference on genetics. It is late at night, and Paul Berg and a group of students have clambered up a hill overlooking the lights of a city. Berg’s news—of the possibility of combining two pieces of DNA to create “recombinant DNA”—has sent tremors of wonder and anxiety through the meeting. At the conference, the students are concerned about the dangers of such novel DNA fragments: if the wrong gene is introduced into the wrong organism, the experiment might unleash a biological or ecological catastrophe. But Berg’s interlocutors aren’t only worried about pathogens. They have gone, as students often do, to the heart of the matter: they want to know about the prospects of human genetic engineering—of new genes being introduced permanently into the human genome. What about predicting the future from genes—and then altering that destiny through genetic manipulation? “They were already thinking several steps ahead,” Berg later told me. “I was worried about the future, but they were worried about the future of the future.”
For a while, the “future of the future” seemed biologically intractable. In 1974, barely three years after the invention of recombinant DNA technology, a gene-modified SV40 virus was used to infect early mouse embryonic cells. The plan was audacious. The virus-infected embryonic cells were mixed with the cells of a normal embryo to create a composite of cells, an embryological “chimera.” These composite embryos were implanted into mice. All the organs and cells of the embryo emanated from that mix of cells—blood, brain, guts, heart, muscles, and, most crucially, the sperm and the eggs. If the virally infected embryonic cells formed some of the sperm and the egg cells of the newborn mice, then the viral genes would be transmitted from mouse to mouse vertically across generations, like any other gene. The virus, like a Trojan horse, might thus smuggle genes permanently into an animal’s genome across multiple generations, resulting in the first genetically modified higher organism.
The experiment worked at first—but it was stymied by two unexpected effects. First, although cells carrying viral genes clearly emerged in the blood, muscle, brain, and nerves of the mouse, the delivery of the viral genes into sperm and eggs was extremely inefficient. Try as they might, scientists could not achieve efficient “vertical” transmission of the genes across generations. And second, even though viral genes were present in the mouse cells, the expression of the genes was firmly shut down, resulting in an inert gene that did not make RNA or protein. Years later, scientists would discover that epigenetic marks had been placed on viral genes to silence them. We now know that cells have ancient detectors that recognize viral genes and stamp them with chemical marks, like cancellation signs, to prevent their activation.
The genome had, it seemed, already anticipated attempts to alter it. It was a perfect stalemate. There’s an old proverb among magicians that it’s essential to learn to make things reappear before one learns to make things disappear. Gene therapists were relearning that lesson. It was easy to slip a gene invisibly into a cell and into an embryo. The real challenge was to make it visible again.
Thwarted by these early setbacks, the field of gene therapy stagnated for another decade or so, until biologists stumbled on a critical discovery: embryonic stem cells, or ES cells. To understand the future of gene therapy in humans, we need to reckon with ES cells. Consider an organ such as the brain, or the skin. As an animal ages, cells on the surface of its skin grow, die, and slough off. This wave of cell death might even be catastrophic—after a burn, or a massive wound, for instance. To replace these dead cells, most organs must possess methods to regenerate their own cells.
Stem cells fulfill this function, especially after catastrophic cell loss. A stem cell is a unique type of cell that is defined by two properties. It can give rise to other functional cell types, such as nerve cells or skin cells, through differentiation. And it can renew itself—i.e., give rise to more stem cells, which can, in turn, differentiate to form the functional cells of an organ. A stem cell is somewhat akin to a grandfather who continues to produce children, grandchildren, and great-grandchildren, generation upon generation, without ever losing his own reproductive fecundity. It is the ultimate reservoir of regeneration for a tissue or an organ.
Most stem cells reside in particular organs and tissues and give rise to a limited repertoire of cells. Stem cells in the bone marrow, for instance, only produce blood cells. There are stem cells in the crypts of the intestine that are dedicated to the production of intestinal cells. But embryonic stem cells, or ES cells, which arise from the inner cell mass of an animal’s early embryo, are vastly more potent; they can give rise to every cell type in the organism—blood, brains, intestines, muscles, bone, skin. Biologists use the word pluripotent to describe this property of ES cells.