The answer had been standard for decades: through genes. Or to be more precise, by turning genes on and off. In Paris, in the 1950s, Monod and Jacob had demonstrated that when bacteria switch their diet from glucose to lactose, they turn glucose-metabolizing genes off and turn lactose-metabolizing genes on. Nearly thirty years later, biologists working on the worm had found that signals from neighboring cells—events of fate, as far as an individual cell is concerned—are also registered by the turning on and off of master-regulatory genes, leading to alterations in cell lineages. When one twin falls on ice, wound-healing genes are turned on. These genes enable the wound to harden into the callus that marks the site of the fracture. Even when a complex memory is recorded in the brain, genes must be turned on and off. When a songbird encounters a new song from another bird, a gene called ZENK is turned up in the brain. If the song isn’t right—if it’s a song from a different species, or a flat note—then ZENK is not turned on at the same level, and the song does not register.
But if fate is recorded in the body by the transient turning on and off of genes, then why is it irreversible? The question, seemingly absurd at first glance, has long been a biologist’s conundrum: if there is no mechanism to “lock” fate in place, there should be nothing to keep it from sliding backward. If genetic switches are transient, then why isn’t fate or memory transient? Why don’t we age backward?
This question bothered Conrad Waddington, an English embryologist working in the 1950s. When Waddington considered the development of an animal embryo, he saw the genesis of thousands of diverse cell types—neurons, muscle cells, blood, sperm—out of a single fertilized cell. Each cell, arising from the original embryonic cell, had the same set of genes. But if genetic circuits could be turned on and off transiently, and if every cell carried the same gene sequence, then why was the identity of any of these cells fixed in time and place? Why couldn’t a liver cell wake up one morning and find itself transformed into a neuron?
In embryonic development, Waddington saw transience hardening into permanence—wounds turning into calluses. In an inspired analogy, Waddington likened embryonic differentiation to a thousand marbles sent tumbling down a sloping landscape full of crags, nooks, and crevices. As every cell charted its unique path down the “Waddington landscape,” he proposed, it got trapped in some particular crag or cranny—thereby determining its identity.
But a cell’s identity, Waddington realized, has to be recorded in some manner beyond its genome; otherwise the landscape of development would be inherently unstable. Some feature of a cell’s interior or exterior environment must be altering the use of a cell’s genes, he surmised, enabling each cell to stamp the marks of its identity on its genome. He termed the phenomenon “epi-genetics”—or “above genetics.” Epigenetics, Waddington wrote, concerns “the interaction of genes with their environment [...] that brings their phenotype into being.”
A macabre human experiment provided evidence for Waddington’s theory, although its denouement would not be obvious for generations. In September 1944, amid the most vengeful phase of World War II, German troops occupying the Netherlands banned the export of food and coal to its northern parts. Trains were stopped and roads blockaded. Travel on the waterways was frozen. The cranes, ships, and quays of the port of Rotterdam were blown up with explosives, leaving a “tortured and bleeding Holland,” as one radio broadcaster described it.
Heavily crisscrossed by waterways and barge traffic, Holland was not just tortured and bleeding. She was also hungry. Amsterdam, Rotterdam, Utrecht, and Leiden depended on regular transportation of supplies for food and fuel. By the early winter of 1944, wartime rations reaching the provinces north of the Waal and Rhine Rivers dwindled to a bare trickle, and the population edged toward famine. By December, the waterways were reopened, but now the water was frozen. Butter disappeared first, then cheese, meat, bread, and vegetables. Desperate, cold, and famished, people dug up tulip bulbs from their yards, ate vegetable skins, and then graduated to birch bark, leaves, and grass. Eventually, food intake fell to about four hundred calories a day—the equivalent of three potatoes. A human being “only [consists of] a stomach and certain instincts,” one man wrote. The period, still etched in the national memory of the Dutch, would be called the Hunger Winter, or Hongerwinter.
The famine raged on until 1945. Tens of thousands of men, women, and children died of malnourishment; millions survived. The change in nutrition was so acute and abrupt that it created a horrific natural experiment: as the citizens emerged from the winter, researchers could study the effect of a sudden famine on a defined cohort of people. Some features, such as malnourishment and growth retardation, were expected. Children who survived the Hongerwinter also suffered chronic health issues: depression, anxiety, heart disease, gum disease, osteoporosis, and diabetes. (Audrey Hepburn, the wafer-thin actress, was one such survivor, and she would be afflicted by a multitude of chronic illnesses throughout her life.)
In the 1980s, however, a more intriguing pattern emerged: when the children born to women who were pregnant during the famine grew up, they too had higher rates of obesity and heart disease. This finding too might have been anticipated. Exposure to malnourishment in utero is known to cause changes in fetal physiology. Nutrient-starved, a fetus alters its metabolism to sequester higher amounts of fat to defend itself against caloric loss, resulting, paradoxically, in late-onset obesity and metabolic disarray. But the oddest result of the Hongerwinter study would take yet another generation to emerge. In the 1990s, when the grandchildren of men and women exposed to the famine were studied, they too had higher rates of obesity and heart disease. The acute period of starvation had somehow altered genes not just in those directly exposed to the event; the message had been transmitted to their grandchildren. Some heritable factor, or factors, must have been imprinted into the genomes of the starving men and women and crossed at least two generations. The Hongerwinter had etched itself into national memory, but it had penetrated genetic memory as well.
But what was “genetic memory”? How—beyond genes themselves—was gene memory encoded? Waddington did not know about the Hongerwinter study—he had died, largely unrecognized, in 1975—but geneticists cannily saw the connection between Waddington’s hypothesis and multigenerational illnesses of the Dutch cohort. Here too, a “genetic memory” was evident: the children and grandchildren of famine-starved individuals tended to develop metabolic illnesses, as if their genomes carried some recollection of their grandparents’ metabolic travails. Here too, the factor responsible for the “memory” could not be an alteration of the gene sequence: the hundreds of thousands of men and women in the Dutch cohort could not have mutated their genes over the span of three generations. And here too, an interaction between “the genes and the environment” had changed a phenotype (i.e., the propensity for developing an illness). Something must have been stamped onto the genome by virtue of its exposure to the famine—some permanent, heritable mark—that was now being transmitted across generations.
If such a layer of information could be interposed on a genome, it would have unprecedented consequences. First, it would challenge an essential feature of classical Darwinian evolution. Conceptually, a key element of Darwinian theory is that genes do not—cannot—remember an organism’s experiences in a permanently heritable manner. When an antelope strains its neck to reach a tall tree, its genes do not record that effort, and its children are not born as giraffes (the direct transmission of an adaptation into a heritable feature, remember, was the basis for Lamarck’s flawed theory of evolution by adaptation). Rather, giraffes arise via spontaneous variation and natural selection: a tall-necked mutant appears in an ancestral tree-grazing animal, and during a period of famine, this mutant survives and is naturally selected. August Weismann had formally tested the idea that an environmental influence could permanently alter genes by chopping off the tails of five generations of mice—and yet, mice in the sixth generation had been born with perfectly intact tails. Evolution can craft perfectly adapted organisms, but not in an intentional manner: it is not just a “blind watchmaker,” as Richard Dawkins once famously described it, but also a forgetful one. Its sole driver is survival and selection; its only memory is mutation.
Yet the grandchildren of the Hongerwinter had somehow acquired the memory of their grandparents’ famine—not through mutations and selection, but via an environmental message that had somehow transformed into a heritable one. A genetic “memory” of this form could act as a wormhole for evolution. A giraffe’s ancestor might be able to make a giraffe—not by trudging through the glum Malthusian logic of mutation, survival, and selection, but by simply straining its neck, and registering and imprinting a memory of that strain in its genome. A mouse with an excised tail would be able to bear mice with shortened tails by transmitting that information to its genes. Children raised in stimulating environments could produce more stimulated children. The idea was a restatement of Darwin’s gemmule formulation: the particular experience, or history, of an organism would be signaled straight to its genome. Such a system would act as a rapid-transit system between an organism’s adaptation and evolution. It would unblind the watchmaker.
Waddington, for one, had yet another stake in the answer—a personal one. An early, fervent convert to Marxism, he imagined that discovering such “memory-fixing” elements in the genome might be crucial not just to the understanding of human embryology, but also to his political project. If cells could be indoctrinated or de-indoctrinated by manipulating their gene memories, perhaps humans could be indoctrinated as well (recall Lysenko’s attempt to achieve this with wheat strains, and Stalin’s attempts to erase the ideologies of human dissidents). Such a process might undo cellular identity and allow cells to run back up the Waddington landscape—turning an adult cell back into an embryonic cell, thus reversing biological time. It might even undo the fixity of human memory, of identity—of choice.
Until the late 1950s, epigenetics was more fantasy than reality: no one had witnessed a cell layering its history or identity above its genome. In 1961, two experiments, performed less than six months and less than twenty miles apart, would transform the understanding of genes and lend credence to Waddington’s theory.
In the summer of 1958, John Gurdon, a graduate student at Oxford University, began to study the development of frogs. Gurdon had never been a particularly promising student—he once scored 250th in a class of 250 in a science exam—but he had, as he once described it, an “aptitude for doing things on a small scale.” His most important experiment involved the smallest of scales. In the early fifties, two scientists in Philadelphia had emptied an unfertilized frog egg of all its genes, sucking out the nucleus and leaving just the cellular husk behind, then injected the genome of another frog cell into the emptied egg. This was like evacuating a nest, slipping a false bird in, and asking if the bird developed normally. Did the “nest”—i.e., the egg cell, devoid of all its own genes—have all the factors in it to create an embryo out of an injected genome from another cell? It did. The Philadelphia researchers produced an occasional tadpole from an egg injected with the genome of a frog cell. It was an extreme form of parasitism: the egg cell became merely a host, or a vessel, for the genome of a normal cell and allowed that genome to develop into a perfectly normal adult animal. The researchers called their method nuclear transfer, but the process was extremely inefficient. In the end, they largely abandoned the approach.
Gurdon, fascinated by those rare successes, pushed the boundaries of that experiment. The Philadelphia researchers had injected nuclei from young embryos into the enucleated eggs. In 1961, Gurdon began to test whether injecting the genome from the cell of an adult frog intestine could also give rise to a tadpole. The technical challenges were immense. First, Gurdon learned to use a tiny beam of ultraviolet rays to lance the nucleus of an unfertilized frog egg, leaving the cytoplasm intact. Then, like a diver slicing into water, he punctured the egg membrane with a fire-sharpened needle, barely ruffling the surface, and blew in the nucleus from an adult frog cell in a tiny puff of liquid.
The transfer of an adult frog nucleus (i.e., all its genes) into an empty egg worked: perfectly functional tadpoles were born, and each of these tadpoles carried a perfect replica of the genome of the adult frog. If Gurdon transferred the nuclei from multiple adult cells drawn from the same frog into multiple evacuated eggs, he could produce tadpoles that were perfect clones of each other, and clones of the original donor frog. The process could be repeated ad infinitum: clones made from clones from clones, all carrying exactly the same genotype—reproductions without reproduction.
Gurdon’s experiment incited the imagination of biologists—not the least because it seemed like a science-fiction fantasy brought to life. In one experiment, he produced eighteen clones from the intestinal cells of a single frog. Placed into eighteen identical chambers, they were like eighteen doppelgängers, inhabiting eighteen parallel universes. The scientific principle at stake was also provocative: the genome of an adult cell, having reached its full maturity, had been bathed briefly in the elixir of an egg cell and then emerged fully rejuvenated as an embryo. The egg cell, in short, had everything necessary—all the factors needed to drive a genome backward through developmental time into a functional embryo. In time, variations on Gurdon’s method would be generalized to other animals. It would lead, famously, to the cloning of Dolly the sheep, the first mammal reproduced without reproduction (the biologist John Maynard Smith would later remark that the only other “observed case of a mammal produced without sex wasn’t entirely convincing.” He was referring to Jesus Christ). In 2012, Gurdon shared the Nobel Prize for the discovery that mature cells can be reprogrammed into pluripotent, embryo-like cells.
But for all the remarkable features of Gurdon’s experiment, his lack of success was just as revealing. Adult intestinal cells could certainly give rise to tadpoles, but despite Gurdon’s laborious technical ministrations, they did so with great reluctance: his success rate at turning adult cells into tadpoles was abysmal. This demanded an explanation beyond classical genetics. The DNA sequence in the genome of an adult frog, after all, is identical to the DNA sequence of an embryo or a tadpole. Is it not a fundamental principle of genetics that all cells contain the same genome, and that it is the manner in which these genes are deployed in different cells, turned on and off based on cues, that controls the development of an embryo into an adult?
But if genes are genes are genes, then why was the genome of an adult cell so inefficiently coaxed backward into an embryo? And why, as others discovered, were nuclei from younger animals more pliant to this age reversal than those from older animals? Again, as with the Hongerwinter study, something must have been progressively imprinted on the adult cell’s genome—some cumulative, indelible mark—that made it difficult for that genome to move back in developmental time. That mark could not live in the sequence of genes themselves, but had to be etched above them: it had to be epigenetic. Gurdon returned to Waddington’s question: What if every cell carries an imprint of its history and its identity in its genome—a form of cellular memory?
Gurdon had visualized an epigenetic mark in an abstract sense, but he hadn’t physically seen such an imprint on the frog genome. In 1961, Mary Lyon, a former student of Waddington’s, found a visible example of an epigenetic change in an animal cell. The daughter of a civil servant and a schoolteacher, Lyon began her graduate work with the famously cantankerous Ronald Fisher in Cambridge, but soon fled to Edinburgh to finish her degree, and then to a laboratory in the quiet English village of Harwell, twenty miles from Oxford, to launch her own research group.
At Harwell, Lyon studied the biology of chromosomes, using fluorescent dyes to visualize them. To her astonishment, she found that every paired chromosome stained with chromosomal dyes looked identical—except the two X chromosomes in females. One of the two X chromosomes in every cell in female mice was inevitably shrunken and condensed. The genes in the shrunken chromosome were unchanged: the actual sequence of DNA was identical between both chromosomes. What had changed, however, was their activity: the genes in that shrunken chromosome did not generate RNA, and therefore the entire chromosome was “silent.” It was as if one chromosome had been purposely decommissioned—turned off. The inactivated X chromosome was chosen randomly, Lyon found: in one cell, it might be the paternal X, while its neighbor might inactivate the maternal X chromosome. This pattern was a universal feature of all cells that had two X chromosomes—i.e., every cell in the female body.
We still do not know why this selective silencing occurs only on the X chromosome or what its ultimate function is. But the random inactivation of the X has an important biological consequence: the female body is a mosaic of two types of cells. For the most part, this random silencing of one X chromosome is invisible—unless one of the X chromosomes (from the father, say) happens to carry a gene variant that produces a visible trait. In that case, one cell might express that variant, while its neighboring cell would lack that function—producing a mosaic-like effect. In cats, for instance, one gene for coat color lives on the X chromosome. The random inactivation of the X chromosome causes one cell to produce one coat-color pigment, while its neighboring cell produces a different one. Epigenetics, not genetics, solves the conundrum of a female tortoiseshell cat. (If humans carried the skin color gene on their X chromosomes, then a female child of a dark-skinned and light-skinned couple would be born with patches of light and dark skin.)