by David Shenk
We all do. Neuroscientists would later discover that a particular region of the hippocampus is filled with “place cells” programmed to create cellular landscape maps from visual perception. While not photographically flawless in their registration of detail, these place cells help us recall the position of an object in relation to the position of other objects. Recall that in the very early stages of Ronald Reagan’s illness, he turned to his wife and said, “Well, I’ve got to wait a minute. I’m not quite sure where I am.” The early destruction of hippocampal place cells in Alzheimer’s disease is the reason for sudden “Where-am-I?” moments. Much like a brittle old road map in the closet, the spatial map in Reagan’s brain was disintegrating.
With only an intuitive understanding of place cells, the Greeks went on to develop a series of mnemonic devices rooted in the power of visual memory. Many were based on the architectural model that Simonides had inspired—a visualization of rooms in a home, for example, into which the mnemonist would “deposit” pieces of information: one name on a dining room table, another name in the fireplace, yet another in the hallway, and so on. Over time, people found that, with the right devices, the brain could be turned into a startlingly reliable reference tool.
Mnemonics proved to be a critical tool in the long human struggle toward enlightenment. So-called memory palaces and memory theaters were a dominant feature of the intellectual landscape for more than a thousand years, well through the Renaissance. The historian Frances Yates suggests that Shakespeare’s Globe Theater was actually based on the model of a memory theater. “I come to the fields and vast palaces of memory,” Saint Augustine wrote in Confessions. “Hidden there is whatever … has been deposited and placed on reserve and has not been swallowed up and buried in oblivion. When I am in this storehouse, I ask that it produce what I want to recall, and immediately certain things come out; some things require a longer search, and have to be drawn out as it were from more recondite receptacles.”
Today, we treat the brain differently. Even those who think for a living don’t rely on it as a data storehouse. In place of the vast internal memory palace, we have Post-it Notes, steno notebooks, Palm Pilots, libraries, and the Internet. We are awash in external memory, upon which we have built edifying worlds of art, literature, science, law, and philosophy. The modern brain is saved primarily for synthesis of ideas, emotional impressions, rhetorical flair, and amusement.
In the transformation, we have surrendered some of memory’s importance. In 2001, I do not need to remember a long list of names (I write them down), or the full text of a speech (I use note cards or a TelePrompTer), or every bone and blood vessel in the body (I can refer to a textbook). I do not need to know how to calculate a circumference (calculator), or even how to spell “calculator” (spell checker). I just need to know that these information resources exist, and how to use them. This is one of the essential truths of the post-Gutenberg age: We live in a world of shared information and understanding—the challenge is not to know it all, but to know how to know. “Our age is retrospective,” Emerson wrote in Nature. “It builds the sepulchres of the fathers. It writes biographies, histories, and criticism. The foregoing generations beheld God and nature face-to-face; we, through their eyes.”
Shared understanding and memory is an obvious step forward in human civilization, but it is not without its trade-offs. Emerson’s chief worry was that the flood of knowledge from others would cut us off from the wisdom of personal experience. But he was also very concerned by the steady loss of life-affirming skills due to the gradual adoption of more and more labor-saving and thought-saving conveniences. “The civilized man has built a coach,” he said, “but has lost the use of his feet. He is supported on crutches, but lacks so much support of muscle. He has a fine Geneva watch, but he fails of the skill to tell the hour by the sun.… His notebooks impair his memory; his libraries overload his wit … and it may be a question whether machinery does not encumber; whether we have not lost by refinement some energy … some vigor of wild virtue.”
It was not a new warning, even then. Plato had cautioned two thousand years before that the new tool of writing was “a recipe not for memory, but for reminder.” He prophesied a world with more external knowledge and less internal proficiency: “If men learn this, it will implant forgetfulness in their souls: they will cease to exercise memory because they rely on that which is written.”
The lesser-used mind does, almost inevitably, become a somewhat weaker instrument. Is it possible, in our superior modern world, with terabytes of instantly accessible knowledge and machines that practically think for us, that even healthy brains are in something of an insidious decline? We know that labor-saving devices like cars and washing machines have led to couch-potato lifestyles and a steady rise in obesity. What we don’t recognize quite so readily is a corresponding link between the rise of external memory and a decrease in brain exertion. Not doing the mental work means not building those internal connections between neurons.
We use our brains to build great tools that make our lives safer, cleaner, longer, easier. But these same tools also dull our minds.
Not surprisingly, modern science has already risen to meet this emerging problem. The start of the twenty-first century saw neuroscientists classifying mild memory loss as a new disease: mild cognitive impairment (MCI). Society seems eager for this, already flocking to supplements and smart drugs like adrafinil, deprenyl, ginkgo biloba, and piracetam as potential medical solutions to cognitive frustration.
It is intrinsically human to want to better ourselves with tools. But there may also be a price paid in the quest. Of all the quantitative perks acquired through technological progress—convenience, efficiency, longevity, the thrill of electronic contact—none add a whit to the one true qualitative pursuit: to make life more meaningful. To the contrary, the manic chase of material improvements can easily crowd out the pursuit of meaning.
If material gains are spiritually empty, the obverse is also true: hardship or loss can offer a window to spiritual transcendence. “The helpless victim of a hopeless situation,” Viktor Frankl says, “facing a fate he cannot change, may rise above himself, may grow beyond himself, and by so doing, change himself.”
Finding meaning through loss: it is an observation that anyone who has flunked a test, skinned a knee, or lost a friend can easily relate to. Meaning through loss is one of life’s chief—and most reliable and universal—paradoxes.
While it is perfectly understandable, then, to want to overcome loss—and we all yearn to, and always will—it’s equally important to remember loss’s utility. If Alzheimer’s is, as I have already argued, one of our best lenses on life and the meaning of loss, then the medical war on Alzheimer’s presents two substantial dangers:
1. We may get so distracted by the goal of defeating Alzheimer’s that we lose sight of the disease’s essential humanity.
2. In winning the war, should we be so fortunate, we will also be eliminating the lens that has served humanity so well for thousands of years. Defeating Alzheimer’s will be like defeating winter. Once it is gone, we’ll face less hardship, but we’ll also have lost one of life’s reliable touchstones.
The same lesson applies to other scientific ambitions. Since we now have the power to overcome nature—to tinker with our own genetics—it is crucial to try to realize what we’ll be giving up as we shed our limitations. The burden ultimately rests with non-scientists to insist on a full exploration of these issues, since very few scientists will stop to explore them of their own accord.
One scientific movement that presses ahead without a full consideration of the ramifications is called posthumanism. “We are at the point of remaking human biology,” Gregory Stock, director of the Science, Technology and Society program at UCLA, wrote in 1998. Stock, author of Metaman: The Merging of Humans and Machines into a Global Superorganism, was talking about the virtues of germline engineering—the creation of knock-out people by adding and subtracting traits as we see fit. Posthumanists ask: What qualities in particular would you like your next child to have? They want to take the design process away from natural selection and put it in the hands of individual human beings.
Now that we have transgenic mice with muscular dystrophy and asthma, it is no great stretch of the imagination to think about scientists manipulating human genes to enhance the prowess of the brain. Is the power of memory a gene-based trait? The man with the perfect memory, A. R. Luria’s remarkable human subject S., would have said so. Both his parents also had otherworldly memories, as did a cousin. In 1995, Tim Tully, a scientist at Cold Spring Harbor on Long Island, kicked off the brain-enhancement era with an insect counterpart to S., a transgenic fly that forms scent memories much faster than ordinary flies and keeps them forever. He called it the fly with photographic memory.
Advocates of germline engineering imagine a world not too far away that will be populated by “posthumans.” Our posthuman children, they predict, will have faster, more reliable brains, germ-resistant bodies, and other adaptations that make us clearly better. The apotheosis of posthumanism is the death of death: the “manufacturing” of an unlimited amount of time for humans to enjoy life. “I am now working on immortality,” University of California at Irvine evolutionary biologist Michael Rose told Wired magazine for its January 2000 issue. Rose is not the only serious scientist aiming to obliterate life’s ultimate boundary. The Silicon Valley genetic engineering firm Geron is trying to unlock the secrets of telomerase, an enzyme found in sperm cells and cancer cells that seems to be the key to keeping such cells youthful. The corporate stem-cell researcher William Haseltine, founder of Human Genome Sciences, predicted that this research arc would lead to what he called a “transubstantiated future” for human beings within seventy years—meaning that the generation born in the late twentieth century could be the last to face death as an inevitability.
The sheer number of obstacles to immortality makes it impossible to speculate seriously about whether it will ever happen. But its plausibility demands that we begin to ask: Do we want this? Do we want perfect memories and endless lives?
In Gulliver’s Travels, Gulliver changes his mind about the glories of immortality after he sees that it is fraught with problems. But what about a clean immortality, one that would actually work well? What if, short of eternity, we could live four hundred relatively healthy years instead of seventy-five? Imagine being able to stick around and see your children’s children’s children’s children’s children’s children’s children’s children’s children’s children’s children’s children’s children’s children’s children’s children’s children. If we could do so in sound mind and relatively sound body, without being too much of a burden on our families or our communities, we could indeed live out the original aspirations of Gulliver—to first acquire great material wealth and a formidable education and then spend subsequent lifetimes putting those resources to great use, pursuing modernization, universal health care, and other high-minded humanistic goals.
It sounds wonderful, and perhaps would be in many ways. But it would also be a fundamental challenge to how we understand ourselves, and it is almost impossible to imagine the ramifications—good and bad—for humanity.
To be human as we know it today is to experience the cycles of life, to experience great loss and pain—not just the pain of tragedy but the pain of inevitability. The essential joy of life is embedded in our mortality, and in our forgetting. How we would change ourselves should we decide to cross the Rubicon to posthumanity is impossible to tell. Life as we know it is an incalculably complicated web of interdependence. Species rely on other species for survival, and abilities and capacities are balanced out by other abilities or inabilities. But more important than the unintended consequences of manipulating DNA is the essential loss of humanity that happens as soon as we begin doing so. As Plato, Nietzsche, Emerson, and others have argued, humanity is something we should savor for all of its frailties as well as its abilities. The only true wisdom, said Joseph Campbell in a paraphrase of the Caribou Eskimo shaman Igjugarjuk, “lives far from mankind, out in the great loneliness, and can be reached only through suffering. Privation and suffering alone open the mind to all that is hidden to others.”
One cannot appreciate life’s majesty without experiencing its hardships. In the Wim Wenders film Wings of Desire, angels abandon a perfect but colorless heaven for a life on earth with all of its Technicolor problems. Perfection is boring and lifeless; reality, with its grit and loss, is fulfilling.
Perhaps no human being has experienced this hard truth as completely as S., the man with the perfect memory. Recall that although he remembered everything he ever came into contact with, S. could make sense of almost nothing. Simple stories baffled him, and even people’s faces were difficult for him to place because he recorded so much information about each moment’s expression. S. spent his entire life looking at everything with a magnifying glass, taking in so many details that he could never pull back far enough to make sense of the patterns. He saw the trees but not the forest. “The big question for him, and the most troublesome,” wrote Luria, “was how he could learn to forget.… there were numerous details in the text, each of which gave rise to new images that led him far afield, until his mind was a virtual chaos. How could he avoid these images, prevent himself from seeing details which kept him from understanding a simple story? … The problem of forgetting … became a torment for him.”
S. tried everything he could think of to forget. He tried writing things down, reasoning that if he wrote something down he wouldn’t need to remember it, would be free to forget it. “But I got nowhere,” he reported. “For in my mind, I continued to see what I’d written.” Even when he tried writing everything down on the same sort of paper with the same pencil, the information would not blur together as he’d hoped. He kept seeing it all distinctly in his mind’s eye.
So he tried burning it, literally. He would record information on paper, set it afire and then watch the paper burn into a charred scrap. But that didn’t work either. In his mind, he still saw the information under the black char.
The image of a man literally burning information in his struggle to forget is perhaps the most poignant way to marvel at memory and its gorgeous fragility. Our limitations are our strengths. Perhaps it is true that with very slight modifications our brains and bodies could be made virtually invulnerable. But in escaping loss we would also be escaping life.
We are like the dead in Thornton Wilder’s Our Town. As we drift away from life, no longer fearing to die nor craving and striving for our place in the sun, we can look back on the world that was, and see it as a whole.
—Morris Friedell
Chapter 17
THE MICE ARE SMARTER
Washington Hilton, Washington, D.C.: July 2000
Near the entrance to this hotel nineteen years ago, John Hinckley dropped to a crouch and opened fire on Ronald Reagan and his staff as the President left the building following a speech to labor union delegates.
Now Reagan’s daughter Maureen returned to this place of dreadful memory to address another assembly, the World Alzheimer Congress. It was the largest-ever professional gathering on the disease.
Her father was in the final stages. He had stopped talking, and was having some trouble walking. Maureen, meanwhile, had emerged as a powerful voice in the crusade against Alzheimer’s, skillfully using her family’s public tragedy to raise awareness and to campaign for more government research funding. In the Hilton’s expansive basement conference center, she joined physicians, nurses, counselors, and more than three thousand researchers for a broad review of the latest scientific developments and caregiving techniques.
There was also a pack of journalists. Everyone wanted to hear the bulletin about Dale Schenk’s vaccine—about how the knock-out mice were getting smarter.
The vaccine really seemed to be working, at least in animals. A number of labs around the world had replicated Schenk’s results, and one had gone further. At the University of South Florida, researchers gave vaccine injections to the knock-out mice and then months later tested their memories. They found just what they’d hoped: In water mazes, the mice did not bumble about the way they had been genetically designed to; instead, they learned how to navigate the maze just like normal mice. They were normal now, apparently. For these humanized mice, at least, Alzheimer’s disease was now preventable.
This has to be taken in context, of course: Mice don’t really get Alzheimer’s, so they can’t really be cured of it. The mice whose brains had been artificially contaminated with plaques had now been artificially cleansed of them; it seemed like a step in the right direction, but one had to remember that a disease that only exists in humans can only be cured in humans.
In a small room crowded with microphones and cables, Schenk and his boss Ivan Lieberburg gave a briefing to an international press corps, sharing even better news: Phase I safety trials had ended with no observable side effects of the vaccine in humans. Phase II multidose human trials would therefore soon commence, this time to test the drug’s effectiveness. They could have the first results in as few as eighteen months—at the end of 2001. That’s when the world would first get an inkling of whether this was going to be a useful treatment for Alzheimer’s, or a cure, or nothing at all.