Five Days at Memorial


by Sheri Fink


  When Simmons saw the program he ripped up his copy of the unused version of the show’s ending. The jury in the television case found the doctor not guilty of first-degree murder.

  After the broadcast, medical professional organizations released more statements of support for Pou and the two nurses, as if the fictional show proved their innocence. “Their acts were those of heroism,” said the American College of Surgeons. The chairman of the department where Pou had trained, a grandfatherly man who was deeply fond of her, had written the statement. It went so far as to assert that Pou, who had voluntarily stopped performing surgery, had been denied her constitutional right to due process because she was “forbidden to practice—a situation that gives the impression that she has been deemed guilty without review of the records.”

  DR. EWING COOK was elated by the Boston Legal episode. “Boy, that’s good for her,” he said aloud when he watched it. “I hope that’s what goes on in the grand jury in New Orleans.” The writers had captured what he felt. Nobody who was not there at Baptist could judge.

  Cook was still feeling the effects of his time there. He’d had surgery, for kidney stones that he attributed to dehydration. He had tried not to drink much while at the sweltering hospital to avoid having to go to the bathroom.

  Cook’s lawyer had managed to keep him out of trouble. After the subpoena, he had never been called in for an interview. Cook worked a couple of hours a day now at two rural hospitals. He and his wife had moved far west of New Orleans and 110 feet above sea level, out of range of any storm surge a hurricane might cast against the earth again.

  WHILE FRANK MINYARD had commissioned many forensic reports on the Memorial dead, he lacked the views of an ethicist, someone who could situate the alleged acts of the health professionals in a panorama of history, philosophy, law, and ever-changing societal norms. This was a perspective Minyard wanted, even though his job by law was merely to decide whether the deaths were technically homicides—caused by human intervention. In advance of a grand jury, he was doing his own unnecessary, unbidden—but, he felt, vital—investigation.

  Minyard reached out to the noted bioethicist Arthur Caplan, who had appeared on CNN soon after the allegations emerged and opined that a jury might consider “very, very extenuating circumstances” a defense for mercy killing. Now Caplan reviewed the records of the nine LifeCare patients on the seventh floor and concluded that all were euthanized, and that the way the drugs were given was “not consistent with the ethical standards of palliative care that prevail in the United States.” Those standards make clear, Caplan wrote, that the death of a patient cannot be the goal of a doctor’s treatment.

  Caplan knew that the history of thought, law, and policy on aid in dying could be arrayed along two axes. One was whether or not the patient had requested to die, making him or her either a voluntary or involuntary participant. The other was whether the aid in dying came in an active form, such as the giving of drugs, versus what was referred to as “passive” withdrawal or non-initiation of life-sustaining treatment. The poles of these two axes were known as voluntary, involuntary, active, and passive euthanasia.

  Whether killing someone who wishes to be killed is an act of mercy or an act of murder was a question that had divided humanity from ancient times, millennia before the advent of critical care medicine focused the modern mind on it. In a story related in the Bible, King Saul, injured in battle, asked his armor bearer to finish him off. He refused, “for he was sore afraid.” Saul then fell on his own sword and called out to a passing young man, “Stand over me and kill me! I am in the throes of death, but I’m still alive.” The young man did so and later told the story to King David, saying, “I knew that after he had fallen he could not survive.” David condemned the young man to death for his actions.

  Physician involvement in killing had also long divided opinion, back to the time of ancient Greece and Rome. Hippocrates’s thoughts eventually held sway, and many medical schools still honor his tradition by having graduating doctors swear an oath descended from the one attributed to him: “I will not give a lethal drug to anyone if I am asked, nor will I advise such a plan….”

  This marked an important transition in medicine. “For the first time in our tradition there was a complete separation between killing and curing,” anthropologist Margaret Mead told the eminent psychiatrist Maurice Levine, who recounted their conversation in a widely quoted 1961 lecture reprinted in his book Psychiatry & Ethics. “Throughout the primitive world, the doctor and the sorcerer tended to be the same person. He with the power to kill had power to cure, including specially the undoing of his own killing activities. He who had the power to cure would necessarily also be able to kill [….] With the Greeks the distinction was made clear. One profession, the followers of Asclepius, were to be dedicated completely to life under all circumstances, regardless of rank, age or intellect—the life of a slave, the life of the Emperor, the life of a foreign man, the life of a defective child.”

  Mead added: “This is a priceless possession which we cannot afford to tarnish, but society always is attempting to make the physician into a killer—to kill the defective child at birth, to leave the sleeping pills beside the bed of the cancer patient.” Mead was convinced, Levine said, that “it is the duty of society to protect the physician from such requests.”

  The Christian acceptance of mortal suffering as redemptive only solidified the Hippocratic stance. In notable historical cases even the exigencies of the battlefield could not shake doctors’ exclusive commitment to preserve life. After Napoleon Bonaparte’s troops were struck by plague in Jaffa, in May of 1799 he told his army’s chief medical officer, René-Nicolas Dufriche Desgenettes, that if he were a doctor, he’d put an end to the sufferings of the plague patients and the danger they represented to the army. He would give them an overdose of opium, a product of poppies that contains the opiate painkiller morphine. Bonaparte would, he said, want the same done for him. The doctor recalled later in his memoirs that he disagreed, in part on principle and in part because some patients survived the disease. “My duty is to preserve life,” he wrote.

  Less than two weeks later, Turkish troops closed in on their position. Bonaparte ordered that those in the hospital not strong enough to join the retreat be poisoned with laudanum, a tincture of opium. Dr. Desgenettes refused. The fifty or so patients left in the hospital, seemingly close to death, were poisoned instead by the chief pharmacist, but apparently he gave an insufficient dose. The Turks found several alive in the hospital and protected them.

  Although stories of wartime mercy killings of injured soldiers frequently appear in fictional novels and movies, it is extremely difficult to find a real, documented case of physician involvement. In the nineteenth century, however, a movement arose to challenge the physicians’ absolutist views on preserving life. In the United States and Europe, some non-physicians criticized doctors’ penchant for prolonging lives at all costs. They advocated using anesthetic drugs developed in the 1800s not only to ease the pain of dying but also to help it along. Known as “euthanasiasts,” these advocates called their proposal “euthanasia”—a Greek-derived term (eu = “good,” thanatos = “death”) that English-language writers had for centuries used to mean “a soft quiet death, or an easy passage out of this world.”

  Many doctors argued against the proposed use of their skills to bring about dying, fearing the public would lose trust in the profession. Allowing death to claim patients naturally struck them as far different from causing patients’ deaths. “To surrender to superior forces is not the same thing as to lead an attack of the enemy upon one’s own friends,” editors of the Boston Medical and Surgical Journal opined in 1884.

  Still, the movement for euthanasia grew in the United States and Europe, and it morphed. Some advocates noted the great burdens the sick, mentally ill, and dying placed on their families and society. Helping them die would be both merciful and a contribution to the greater collectivist good. Why not, some asked, extend to terminally ill people what few would deny their sick animals, regardless of whether they were capable of expressing the wish to die? These were lives not worthy of living.

  These ideas found particular resonance at a time of widespread economic privation, suffering, and hunger in post–World War I Germany. Attention focused on the costs of caring for the elderly, disabled, mentally ill, and other dependent individuals, many warehoused in church-run asylums. (Also couched in terms of public health was the growing international support for eugenics—improving the gene pool of the society—and these individuals were seen as a threat to the purity and superiority of the German race.)

  In an effort to save money and resources during wartime in the early 1940s, the Nazis took the ideas to their logical extreme and implemented programs of involuntary euthanasia of these populations. By some counts up to 200,000 people with mental illnesses or physical disabilities were executed, the Darwinian notion of survival of the fittest employed to justify the murders. After these programs were shut down, their administrators were sent to orchestrate mass killings of Jews and others in extermination camps in Poland.

  Doctor and nurse mass murderers of more recent ilk, some who have killed many dozens of patients before being stopped—Harold Shipman, Michael Swango, and Arnfinn Nesset among them—have similarly targeted the very sick and elderly, as well as those unable to communicate and neglected by their families. On arrest, some have invoked similar justifications, claiming to have euthanized suffering patients to put them out of their misery.

  Psychiatrists have profiled these killers, identifying them as grandiose narcissists who tend to bristle at criticism, or to see themselves as saviors or gods unable to do wrong, or who get a thrill out of ending suffering and deciding when somebody should die.

  Decades after World War II, arguments for legalizing voluntary euthanasia again gained traction in several European countries. In 1973, a Dutch court ruled that euthanasia and physician-assisted suicide (whereby a doctor provides medicine that a person can take to commit suicide) were not punishable under certain circumstances, and imposed only a symbolic, suspended sentence. These acts were decriminalized in the 1980s and formally legalized by a vote of the Dutch parliament in 2001. Similar laws passed in Belgium in 2002 and Luxembourg in 2009. In Belgium, one pharmacy chain made home euthanasia kits available for about €45, complete with the sedative drug used at Memorial, midazolam; along with the anesthetic drug sodium thiopental (Pentothal), which Dr. Ewing Cook used at Memorial to euthanize pets; and a paralyzing agent that stops breathing. The kits were intended for use by doctors in patients’ homes. Doctors could prescribe them for specific patients who had signed a request for euthanasia at least a month in advance, after having discussed it with two independent doctors. The Dutch and Belgian laws did not require a terminal medical condition for a euthanasia request to be granted.

  In each country, legality rested on different guidelines, which at first appeared to offer important safeguards. For example, in the Netherlands, euthanasia was supposed to be limited to people who made repeated requests to die and were experiencing, as certified by two doctors, unbearable suffering without the possibility of improvement. However, a study of the program showed these rules were not always followed, and a small proportion of people were killed each year without having made an explicit request. There were few prosecutions in these cases. Were the Dutch merely more honest about their practices? Or did the legalization of one form of euthanasia bleed, inexorably, into the other, darker kind?

  While it was a problem that some ill or injured people had no option of participating in the program because they could not speak for themselves and had not let their wishes be known, involuntary, active euthanasia was, at the time Caplan made his review of the LifeCare deaths, not legal anywhere. Taking the life of someone who had not expressed the wish to die would contravene the principle that people have a right to decide what doctors can do to their bodies. It would also put the physician or other decision maker in the position of judging what quality of life is acceptable to another human being. The possibility of abuse (for example, insurance payouts for family members) was too great.

  However, while not legal, in practice what was considered acceptable in the Netherlands had expanded to include this type of active euthanasia. The 2004 Groningen Protocol for Euthanasia in Newborns, devised by leading Dutch medical authorities, outlined conditions for taking the lives of very ill or brain-damaged babies with the substituted consent of their parents. While this was not explicitly legal, doctors who followed the guidelines were not prosecuted. Babies—albeit sick, disabled babies, but babies nonetheless—were being euthanized openly again in Europe.

  The Netherlands’s premier advocacy and counseling organization for euthanasia and choice in dying, the NVVE, promoted social acceptance of euthanasia under conditions that were not yet legal in the hopes that they someday would be. People, particularly the elderly, who were reasonably healthy, but who were becoming an increasingly dependent burden on their families, had a profoundly diminished “quality of life,” and felt “after many years on this earth, life has been completed,” should be entitled to aid in dying, according to the group. So, too, should people with dementia and difficult-to-treat chronic psychiatric illnesses. A Dutch court approved of euthanasia for a woman with advanced dementia who repeatedly communicated her wish to die.

  In contrast with the European countries that formally legalized euthanasia in the first decade of the twenty-first century, in the United States, intentionally ending a life to relieve suffering remained illegal. The American Medical Association’s influential Code of Medical Ethics continued to prohibit active euthanasia.

  The debate in the United States focused instead on what some call passive euthanasia, the withdrawal of life support and withholding of medical treatment. In 1975, not long after the widespread adoption of high-tech intensive care medicine and only a decade and a half after the trial of Nazi leader Adolf Eichmann in Jerusalem had focused attention on the horrors of mass euthanasia, the parents of a comatose young woman, Karen Ann Quinlan, asked doctors in New Jersey to remove her from a ventilator. She had stopped breathing and suffered brain damage after taking the sedative Valium and drinking several gin and tonics with friends. She was not expected ever to recover, and friends and family recalled her having said she would never want to be kept alive that way. Doctors refused to discontinue life support, but the New Jersey Supreme Court ruled that this could be done on the basis of Quinlan’s constitutional rights to privacy and liberty, as exercised by her father. The respirator was turned off.

  Quinlan breathed on her own and survived nine more years, but her case proved a landmark. Subsequently, state courts ruled in other cases that the right to refuse treatment flowed from established rights to privacy, liberty, self-determination, and informed consent. The right to refuse treatment had already been established, in the case of some Jehovah’s Witnesses, on the basis of freedom of religion.

  The climate of American medicine had changed since the Clarence Herbert case Dr. Baltz and his colleagues had discussed at Memorial in the 1980s. Doctors treating the comatose Mr. Herbert had been charged with murder for withdrawing life support and intravenous fluids, even as they contended that this accorded with his prior wishes and the requests of his family members, who did not want him on “machines.” A California appeals court decided the case should be dismissed because the burdens to Mr. Herbert of continued treatment, although minimal, outweighed its benefits to him, as his prognosis was “virtually hopeless for any significant improvement in condition.” Stopping treatment, the court ruled, taking its lead from a presidential ethics commission, was indistinct from never having started it and was not in this case equivalent to active euthanasia. Shutting off an ordinary IV, the court likewise decided, was no different from shutting off a ventilator, as long as the treatment was legitimately refused by a patient or surrogate decision maker.

  The case set a binding precedent only in part of California, but these concepts had gained wide acceptance by the time of Katrina. The US Supreme Court in 1990 considered the case of thirty-three-year-old Nancy Cruzan, severely brain damaged in a Missouri car accident years earlier, whose parents sought to remove the feeding tube that nourished her. The Court agreed by a five-to-four margin that the right to liberty included the right to refuse life-sustaining medical care and die. However, the ruling allowed states to require clear and convincing proof of the patient’s wishes to discontinue care, not just what was believed to be in the patient’s best interest. A Missouri judge allowed Cruzan’s nutrition to be discontinued after acquaintances gave evidence this would have been her wish. The case led to increased adoption of living wills and advance directives that documented treatment preferences prior to a catastrophe.

  The next battleground was assisted suicide: whether it should be legal for doctors to prescribe drugs certain patients could take to end their lives. Having the option of a painless death at a time of one’s choosing could ease the senses of terror, loss of control, and suffering experienced by people with grave progressive diseases such as metastatic cancer, advocates argued. They questioned why only people who relied on life support or medical treatments that could be withdrawn should have the freedom to choose a dignified death with medical assistance.

  Opponents countered that removing life support allows nature to take its course whereas assisting suicide is intended to shorten life, long considered unethical and akin to active euthanasia. Hundreds of years after sorcery’s amputation from medicine, did Americans want doctors again to conjure death? Could the societal embrace of suicide for terminally ill or disabled people lead members of those groups to feel more worthless, devalued, and abandoned? Would it discount the meaning to be had from family reconnections, insights, and various forms of spiritual enrichment and personal growth that may accompany death’s approach?
