Perhaps some other answer makes more sense, which is that life does not begin; it simply is—at every stage, during every moment of the gestation process, life unfolds. In a sense, this is a rendering of God’s response to Moses, when Moses asks God’s name and the Lord responds: “I am that I am,” or “I am who I am.” Am: first-person singular present indicative of the verb “to be”—the most concise and accurate rendering of my existence: “I am.” I cannot enter into the now of the present moment with any more potency than to utter those two words, pronoun and verb: “I am.” Emerson wrote in his journal for 1827, “It is said to be the age of the first person singular.” It may be more accurate to say that the age struggled to find the first-person singular—and that the great grinding power of the age made it difficult for the majority of people to say, “I am,” and have it carry meaning.
It does no good to ask our basic question another way, such as “When does human life begin?” because of course the newly formed conceptus is indeed a human, as distinct from a walrus or a dog. To ask about ensoulment takes us even farther afield. Thus, the task of trying to locate the moment of life, in the twenty-first century, brings us quite quickly to an intellectual and moral impasse. And yet, the contemporary realization that we “have a life” has brought us to a place where people on both sides of the argument—pro-life and pro-choice—need such a declaration.
If we resort to asking about when an individualized life actually begins, we find ourselves thrown back on John Locke’s formulation that only persons possess rights. But, for Locke, not every human is a person, only one “who has reason and reflection, and considers itself as itself, the same thinking thing, in different times and different places.” The logical conclusion here is that young infants lack that kind of self-awareness, and so the state cannot guarantee that creature any human rights.9
Finally, some theologians find it more informative to talk about potentiality. The fetus, or even the young infant, while lacking a personality, has the potential to become, in the near future, a fully realized individual. But that formulation raises even more problems. Before any agency can carry out research on stem cells, say, someone, or some official body, must determine when a life begins. That is why in England, for example, a committee of scientists, ethicists, and educators, named after its chair, Lady Warnock, made such a declaration, in 1985. The Warnock Committee believed that if a date could be determined when “personhood” began, then researchers could legally experiment on embryos up to that point. After a great deal of deliberation and testimony, the committee decided that experiments on embryos—extracting stem cells, for instance—could proceed up to the fourteenth day of the embryo’s growth, one day before the appearance of the so-called primitive streak (the precursor of the spinal column).
Britain’s secretary of state for health, Kenneth Clarke, elaborated on the committee’s selection of a life-date this way: “A cell that will become a human being—an embryo or conceptus—will do so within fourteen days. If it is not implanted within fourteen days it will never have a birth. . . . The basis for the fourteen day limit was that it related to the stage of implantation which I have just described, and to the stage at which it is still uncertain whether an embryo will divide into one or more individuals, and thus up to the stage before true individual development has begun. Up to fourteen days that embryo could become one person, two people, or even more.” We cannot, it seems, escape the calendar.
We can see what has happened to “life” by looking at it not just from its starting point but from its endpoint, as well. When we think of death, most of us immediately think of pain and suffering. The image of a hospital, tubes, and feeding lines comes immediately to mind; we’ve all seen it too many times. But people did not always die under such clinical conditions; death in the first part of the nineteenth century offers a radically different picture. I can best describe the change in death over the course of the century by looking at the changes to one particular word, euthanasia. I rely here primarily on the work of Shai J. Lavi and his book, The Modern Art of Dying: A History of Euthanasia in the United States.
In the simplest of terms, in the early years of the nineteenth century, Lavi points out, euthanasia “signified a pious death blessed by the grace of God.” This is what I called earlier the good death or the easy death, a bounteous gift from God, who saw fit to make the dying person’s last hours on Earth ones free of pain. Usually, the person has earned this gift by leading a life filled with God’s grace. This is the earliest meaning, in the fifteenth century, of euthanasia (literally, eu meaning “good,” and thanatos, “death”), and it characterizes the first decades of the nineteenth century. This one word, euthanasia, changes its meaning over the course of the century, a shift that reflects changing attitudes toward both the human being and death. By the middle part of the century, the word takes a remarkable turn. The priest has left the deathbed and has been replaced by the physician, who now administers analgesics to the dying person to make his or her passage an easier and swifter one.
At that moment, no one faces death with anything but dread. Death involves nothing beyond acute pain and suffering; it comes to every person in exactly the same way, no matter how one has lived one’s life. Euthanasia meant that, with the aid of the physician, the patient could then face a fairly painless and speedy death: The physician brought into being a chemical version of the good or easy death. A key question underlies this new attitude toward death: “Why should the person suffer so?”
Lavi wants to know how this new idea of euthanasia as “the medical hastening of death” came to occupy its place as “a characteristically modern way of dying.” Asking this question raises several related and key nineteenth-century issues. First of all, in order to experience pain, one must believe in and possess a sensate body; and, as we have seen, bodies were fast disappearing. Second, with the loss of the body came the concomitant and logical need to eradicate pain. By mid-century, a person could purchase an analgesic for any minor or major pain. And finally, and perhaps the most important issue, death itself had begun to disappear. The old-fashioned Christian notion of experiencing death through protracted pain, to know suffering at its very root in imitation of the suffering of Jesus Christ on the cross, had lost its hold on the imagination—religious and secular.
A new philosophy, then, permeates the middle years of the nineteenth century: Let’s get this ordeal over with so that the family can move on to grieving and gathering their old lives together once again. What has been eliminated from the picture is anything that the dying person might learn or even just experience, up until the very last second of his or her breathing, through confronting the suffering and pain that is death. Death no longer exists as an integral part of life, but as something distinct and separate. This represents a change of the greatest order. For as the physician intercedes, takes over, and administers his many painkillers to the dying person, the heart of the Christian life comes to mean very little.
The physician has assumed total control. Death will wrap its arms around the dying person at the pace that the physician dictates and then very deliberately puts into practice. The patient’s passing will occur at the physician’s chosen speed, on his chosen schedule. The physician now assumes the role of God. But, in the process, the dying person loses an enormous amount, or I should say, the physician deprives the dying person of so much that is crucial to his very being—the person’s will, drive, determination, the reflective time to ponder the meaning of living and the significance of passing. Instead of a strong-willed person, the patient turns into a doped-up, drugged-out victim, believing that the one sensation that defines his or her essence at that moment—intense and searing pain—is awful, bad, something to avoid at all costs. It’s that attitude toward death that prompted Ivan Illich to call our contemporary world an “amortal society.” He believed that most people find it impossible to die their deaths. The medical industry has robbed them of that opportunity. People have called in the medical profession to save them from the frightful suffering of death.
Finally, the definition of euthanasia toward the end of the century takes on a legal status: By 1869, the Oxford English Dictionary makes clear, people used the word “especially as a reference to a proposal that the law should sanction the putting painlessly to death of those suffering from incurable and extremely painful diseases.” Unfortunately, one of those incurable diseases turned out to be death. Beyond that, as physicians diagnosed more and more diseases, they created more and more opportunities to put euthanasia into practice.
The look on the face of the dying person must be one of peace and calm and of the most somber resolve, not much different from the stoic expression on the faces of family members in early studio photographs. A new ethic begins to emerge in the nineteenth century: Expression is all. Surface carries all the meaning. The gesture, the look, the image—these all constitute the archetype of the new dying person. What lies beneath the skin—the heart and soul, the life and the character—means much less by comparison. The physician, in very real terms, works his alchemy in collusion with that other artist of the world of chemicals, the embalmer.
Historians like to describe society as slowly becoming medicalized. Turn on virtually any evening television program, or listen to any sports broadcast, and every commercial tells the audience what medicines they should demand from their physicians. The phrases linger in the mind: Ask your doctor if Cialis or Viagra or Clarinex is right for you. A good many Americans now diagnose themselves, believing they have contracted the latest illness, whose etiology has been broadcast to them on the screen. They find out what is ailing them by checking their symptoms online. Side effects seem not to matter at all: One must pay a price for a pain-free existence. The New York Times Magazine investigates some bizarre and rare illness almost every week, in thrilling imitation of the most seductive detective story. And a popular author like Oliver Sacks, well, he has made a good living out of people who seem to mistake their wives for their hats. The analgesics that the nineteenth century developed came tumbling into the twentieth century with myriad variations and types; they fill the aisles in drug stores today. One of the most ubiquitous hyphenated “self” words today is “self-diagnosis,” followed closely by “self-help.”
In the nineteenth century, in a prelude to our current medicalized and narcotized state, medicine literally enters the home. That now standard piece of furniture in the modern house, the medicine cabinet, first appears around 1828, in England, as a “medicine chest.” Medicine had reached equality with the refrigerator and the stove and the sink: all necessities for healthy and hygienic living. Medicine had so taken over the household that, on the door of the chest, family members pasted a chart, called a posological table, listing each person’s name, drug, and dosage, so that everyone knew how much of a given drug to take. Posological derives from a word that enters the vocabulary at this time, posology (from the Greek posos, “how much?” and -logy, “knowledge of”). Inside their cabinets, people mostly stored a range of analgesics, or more commonly, anesthetics.
Anesthetics—ether, chloroform, nitrous oxide, laudanum; agents to numb every sensate feeling—came to the rescue for what everyone had come to believe was the bleakest moment in the life of a person in the nineteenth century: his or her own inevitable (and sometimes protracted) death. In the nineteenth century, then, while people began to own a life—such as it was—they at the same time began to surrender all control over their own deaths, to disown their own deaths. Many writers and poets and life scientists believed that people truly moved through the world, up to their very last inhalations, in the state that Percy Bysshe Shelley had recognized and named “suspended animation.”
At a certain moment, the physician, like the man of the cloth, left the side of the patient’s bed, allowing technology to take over completely. By the end of the nineteenth century, euthanasia carried the meaning familiar to most of us now: “The use of anesthetics to guarantee a swift and painless death.” This was followed by attempts to make euthanasia legal. We have now gone beyond modern euthanasia, according to the American Medical Association, to the “intentional termination of the life of a human being by another,” or what has come to be called “mercy killing”—wherein one person takes the life of another absent that other person’s consent.
Witness the arguments and confusion surrounding the case of Terri Schiavo, who survived in a persistent vegetative state for fifteen years, until her husband obtained permission from the state, in 2005, to “pull the plug”—that is, to remove her feeding tube. Her parents objected, arguing that their daughter still retained a degree of consciousness. In a bizarre encounter with virtual reality, Senator Bill Frist, a medical doctor, diagnosed Terri Schiavo on a television monitor and declared that she “was not somebody in a persistent vegetative state.” Such an absurdity cost him a run for the presidency of the United States. Jack Kevorkian (“Doctor Death”), recently released from jail, understands the nature of death in the twenty-first century perhaps better than most. And still, only two states in the nation, Oregon and Washington, have passed legislation for something called physician-assisted suicide, or what the states refer to as death with dignity laws.
The Dutch legalized euthanasia, with the patient’s consent, in 2002, making the Netherlands the first country to do so. In 2004, Dutch health officials considered guidelines that doctors could follow for “euthanizing terminally ill people ‘with no free will,’ including children, the severely mentally retarded and patients in irreversible comas.”10 After lengthy discussions, the Dutch extended the practice to include ending the life of someone suffering from a terminal illness or an incurable condition without his or her approval.
Debates about legalizing euthanasia, which we think of as so modern, first took place in this country in 1870, which makes sense, since the idea of the body was losing its significance, the idea of death was fast coming to an end, and life itself was being reformulated into some kind of very clearly definable entity. Deciding for oneself the exact moment when one will die may offer the last chance for one to regain one’s will, or to know that one does indeed possess a will. One of the most insightful poets of the modern condition, T. S. Eliot, uses anesthesia—the nineteenth-century form, ether—for the opening image of his poem “The Love Song of J. Alfred Prufrock”: “Let us go then, you and I, / When the evening is spread out against the sky / Like a patient etherized upon a table.” For Eliot, the poet who laments the disappearance of Christian ritual in modern lives, anesthesia does not mean freedom from pain, but mere languor and dissipation, the slow dissolving of all meaning. For Eliot, all of us need to wake up.
With both death and life gone, the possibilities seem endless. The idea of having a life has become, in the twenty-first century, only a transitional idea, a momentary resting place in the history of the human being. We are no longer merely creatures in need of management and control and direction. We have now laid ourselves wide open to much more than a range of professionals. We now face an army of very serious and very determined bio-technical engineers eager to take over and guide our lives. I “have” a body that can now be harvested for its resources, for my organs, tissues, joints. My growing up turns out to be an investment in what goes by the name these days of biocapital. One historian of medicine, Catherine Waldby, figures my living tissues and organs into something she calls “biovalue,” for the medical profession can, as she says, “redeploy” them for those who need transplants in order that those people can return to the workplace—hence the “value”—and hence add more value to the capital economy.11
Nikolas S. Rose, a sociologist, goes on to define the new bioeconomy, in almost the same terms as imperialism, as “a space to be mapped, managed, and understood; it needs to be conceptualized as a set of processes and relations that can be known and theorized, that can become the target of programs that seek to increase the power of nations or corporations by acting within and upon that economy.” Within the parameters of biocapitalism, no one asks, “When does life begin?” The biotech future has moved us all well beyond such petty concerns. The new, enriched world forces each of us to ask, instead, “Just what is my life worth?”12
Robbed of every ounce of our essence, we move through the world as new forms of ghosts. Some people are now wandering through the world as robots or cyborgs—and choosing it on their own. In the 1860s and 1870s, the women who operated the new writing machine called the typewriter in business offices across America were themselves referred to as typewriters. In 1893, a woman named Henrietta Swan Leavitt went to work for the Harvard College Observatory to measure the brightness of individual stars. Astronomers referred to her, according to her biography, as “one of several ‘computers,’ the term for women who did the grunt work of astronomy for 25 cents an hour” (the minimum wage).13 I have the chilling sensation that the same sort of dehumanizing conflation of machine and flesh is taking place today, only this time it is not some simple machine like a typewriter that directs things, but various kinds of highly sophisticated technological devices. And because we love our computers so much these days, we do not recognize the ways they have shaped our lives.
By coming up against computer programs in nearly every task we carry out during the day—in word processing and sending email, of course, but also in playing games, running appliances, driving cars, talking to friends, buying tickets, paying bills, heating meals, washing clothes, performing the most mundane of office tasks—one begins to act, without really being conscious of the change right away, in imitation of the computer—that is, in a rather rigid and programmatic way. People give their inputs to other people, interface with friends, and impact social situations, to say nothing of carrying on relationships online, including a bizarre configuration called cybersex.