However, transplants weren’t the only impetus for a new definition: the process of dying itself was undergoing enormous change. Part of this change began within the physical hospital itself. Intensive care units became more standard in hospitals beginning in the 1950s, and the advanced life-support equipment they used meant that more people died in the hospital than in their bedrooms. By the 1960s, 75 percent of those who were dying were in a hospital or nursing home for at least eight days prior to death. As critic Jill Lepore wrote in the New Yorker in 2009, “For decades now, life expectancy has been rising. But the longer we live the longer we die.”31 In 2009, the median length of time patients spent in hospice—palliative services for those facing terminal illness—prior to death was 21.1 days. This figure includes hospice patients who died in hospitals, nursing homes, rehabilitation centers, and private residences.32 This physical shift in space paralleled an intellectual and moral shift in attitudes toward death and, inevitably, toward disease. Both now involved and were practically indistinguishable from the institution of medicine and all the machines, interventions, and expectations it entailed.
This also meant that while respirators may have kept many patients alive in that their hearts were beating, the substance and quality of that life and the vitality of their brain function were questionable. No case exemplified this struggle more than that of Karen Ann Quinlan—a case that became every bit as important for medical ethics and end-of-life decisions as the revelations about the Tuskegee experiment had been for informed consent and ethical research. The twenty-one-year-old Quinlan mixed drugs and too much alcohol one night in 1975, and friends found her not breathing; her oxygen-deprived brain sustained significant damage, and she was left in a persistent vegetative state with a respirator controlling her breathing. When it became clear to her parents that there was no hope of improvement, they asked her physicians to remove her respirator. They refused, saying she did not meet the criteria for brain death. Remember that practical applications of the definition of brain death were primarily for the emerging field of transplantation, not for issues related to quality of life and prolonging life. Just as physicians didn’t have a viable framework for the ethics of organ allocation on their own, this conceptual framework offered little guidance to the Quinlans. The state of New Jersey then asserted that it would prosecute any physicians who helped the young woman die. Joseph Quinlan, Karen Ann’s father, took his request to court and was first denied; eventually, the New Jersey Supreme Court ruled in his favor, noting that her death would be from natural causes and therefore could not be considered a homicide.33
Though some low-level basic brain function remained, all Karen Ann’s cognitive and emotional capabilities were wiped out. Lepore reports that pressed to testify as to her mental abilities, one medical expert characterized the extent of her injuries in the following manner: “The best way I can describe this would be to take the situation of an anencephalic monster. An anencephalic monster is an infant that’s born with no cerebral hemisphere … If you take a child like this, in the dark, and you put a flashlight in back of the head, the light comes out of the pupils. They have no brain. O.K.?”34
No, there was certainly no road map for scenarios like this in 1975, and the Quinlan case brought the bedside discussion of death to the government. It would take Karen Ann Quinlan ten long years to die from infection, but by that point the legal and ethical quagmire had consumed those inside and outside the medical community. Much as the right to life would grip those of us who watched the Terri Schiavo case unfold in 2005, as her parents fought to keep her feeding tube in place, the Quinlan case was emotionally fraught and captured the attention of many, even those on the outside. The difference between the two cases—one of which is cast as the right to die, while the other represents the fight for the right to life—is, of course, historical and social context.
In 1968, Pope Paul VI issued the influential encyclical letter entitled “Of Human Life,” which argued for the sanctity of life beginning at the moment of conception, a central argument used by pro-life factions. In 1973, the United States Supreme Court ruled in Roe v. Wade, further polarizing both sides of the abortion rights debate. Add to this the growing fascination with the institutionalized nature of dying and the not-too-distant shadow of Nazi atrocities, and the fray the Quinlans joined that fateful day in 1975 was rife with agendas, perspectives, and controversy surrounding death. The Quinlan case embodied all these competing forces and beliefs, and its legacy continues in conversations about how to die today. We are still fascinated with death, though the focus now is on the role of government in end-of-life decisions. The kerfuffle over alleged “death panels” that health care reform precipitated in 2010 illustrates this anxiety over who is involved in dying all too well.
The Advent of Medicare
Because experimentation and exploitation so often involved minorities, the burgeoning movement to reform medical experimentation on humans was closely linked with the various civil rights movements of the 1960s and ’70s. Decisions about experimentation were no longer seen as the province solely of physicians, nor were the attendant issues strictly medical ones. Now, they were open to political, academic, legal, and philosophical debates. At the same time legislation was enacted to better protect human subjects, another groundbreaking development for patients (and for the role of government in shaping health care policy) took shape: Medicare. According to the Centers for Medicare and Medicaid Services, President Franklin D. Roosevelt felt that health insurance was too controversial to try to include in his Social Security Act of 1935, which provided unemployment benefits, old-age insurance, and maternal-child health.35 It wasn’t until 1965 that President Lyndon B. Johnson got Medicare and Medicaid enacted as part of the Social Security Amendments, extending health coverage to almost all Americans over the age of sixty-five and, in the case of Medicaid, providing health services to those who relied on welfare payments.36
In 1972, President Nixon expanded Medicare to include Americans under sixty-five who received Social Security Disability Insurance (SSDI) payments as well as individuals with end-stage renal disease. The way Medicare was set up was not an accurate reflection of the chronic conditions many of its recipients had. At its inception in 1966, its coverage, benefits, and criteria for determining the all-important medical necessity were all directed to the treatment of acute, episodic illness. This model, along with increased payment rates for physicians who treated Medicare patients, would continue virtually unchecked or unchanged for twenty-five years.37 It reflects the biomedical approach to disease, one that emphasizes treatments and cures, but it falls apart when it comes to patients with chronic disease, who make up the majority of Medicare patients.
Coverage of renal disease constituted the first time a specific ongoing disease was singled out for coverage, and concern over the overall costs of health care prompted Nixon to use federal money to bolster the creation of Health Maintenance Organizations (HMOs) as models for cost efficiencies.38 For many patients with chronic disease, HMOs would come to represent restrictions on treatment and providers rather than improved access to health coverage. If Nixon thought HMOs might help control health care costs, he was wrong. Health care spending and Medicare and Medicaid deficits have plagued virtually all presidents since Nixon’s time, and reached a pivotal point in 2010, when President Barack Obama helped push through the Patient Protection and Affordable Care Act.
It is not surprising that the move toward collecting more data to advance our understanding of chronic disease, toward the development of informed consent and codified research ethics, and toward expanded health coverage all happened in the decades immediately following World War Two. While the consequences of these changes are numerous, one of the simplest and most wide-reaching was the creation of a social distance between doctor and patient, and then hospital and community. As David Rothman describes, “bedside ethics gave way to bioethics.”39 This increased focus on ethics and collective decision making protected patients in new ways, but the social distance also created a gap in trust between patient and physician. Such a gap left plenty of space for the disease activists and patient advocates who emerged during the various social justice campaigns that followed.
Chapter 3
Disability Rights, Civil Rights, and Chronic Illness
When Aviva Brandt, a forty-three-year-old former Associated Press reporter, suddenly became ill in July 2007, she expected to either feel better quickly or receive a diagnosis for an ongoing problem. What started out as pneumonia, chest pain, and a hospitalization rapidly turned into ongoing, debilitating pain and fatigue, frequent infections, and other immune and autoimmune complications. More than four years later, she has undergone numerous tests and consulted with specialists in rheumatology and neurology and still has not received a final diagnosis that accounts for her extreme fatigue, diffuse pain, and various neurological symptoms.
“Healthy people don’t understand how a person can be sick for months and years and have doctors still not know what’s wrong with her. Some people get a funny look in their eye, like they think it must all be in my head because otherwise wouldn’t I have a diagnosis by now? Medical science is so advanced, with all this technology and such, so how come good doctors can’t figure out what’s wrong with me?” Brandt asks.
She desperately wants a specific diagnosis, a URL she can pull up and recognize herself in. Beyond the intellectual disappointment of not getting an answer, more pragmatic questions plague her: Would her current medication regimen differ if she knew what was wrong? What else could she be doing to improve her health and quality of life? Will her lack of a specific diagnosis hamper her ability to receive much-needed Social Security Disability Insurance benefits? And of course, there is the niggling frustration of how to answer the inevitable question directed at her from friends, family, and just about everyone she comes into contact with:
So, what’s wrong with you, anyway?
Brandt’s concerns reveal just how powerful—and, at times, dangerous—labels and categories are when it comes to living with symptoms. Some patients also take issue with the “illness” moniker itself, preferring the more benign “condition.”
Dr. Sarah Whitman, the psychiatrist who specializes in treating people with chronic pain, finds such semantic distinctions critical.
“In my work, I don’t use the term chronic pain patient, but insist on patient with chronic pain. It’s a small difference in phrasing, but one that reflects what you see first—the disease or the person,” she says. It’s a sentiment that stretches across diagnostic boundaries. The term PWD, person with diabetes, is a common one in the diabetes online community, rather than the term “diabetic,” as in “He is a diabetic.” A person has diabetes; a person is not diabetes.
When I posed the distinction between illness, disease, and condition to patients with diverse health problems on my blog and in interview questions, responses ran the gamut: Some patients preferred to use “illness” because it was less scientific-sounding and clinical than “disease.” Others saw value in using the word “disease” since it conferred a type of validity and justification for their chronic pain that other words could not. For example, people living with migraine disease confront claims that their constant pain is “just a headache,” much the way patients with chronic fatigue syndrome are told they are “just tired.” (The rest of us get headaches and feel tired, so is what makes these complaints different simply a question of fortitude?) When I’m speaking about my health, I tend to describe PCD as a rare genetic disease, one that is somewhat similar to cystic fibrosis. The use of the word “disease” hasn’t been a conscious one, but in retrospect I do see how the word offers some built-in credibility I didn’t have when I was incorrectly diagnosed with “atypical asthma.” Since practically no one has ever heard of PCD—including health care professionals, who, even if they have heard the term in passing, don’t know much about how it works or how it is managed—what I find more interesting is the act of defining one disease by comparing it to another. Cystic fibrosis isn’t nearly as common as, say, heart disease, diabetes, asthma, or arthritis, but it is more common and much more recognized by the general public than PCD is, so I leverage that familiarity.
“I tend to use ‘illness’ more than ‘condition’ or ‘disease,’” Brandt says. “To me, ‘condition’ refers to something I live with that doesn’t have much impact on my daily life. I have many allergies and asthma, but because they’re almost entirely under control, I consider them a condition I have. No big deal. Sure, there’s places, animals, and foods I need to avoid, but I’ve been living with that my entire life and it’s second nature to the point that it’s essentially something I deal with subconsciously.”
I feel the same way about my thyroid condition: I take a daily pill, I check my thyroid hormones regularly through blood work, and beyond that, I don’t give it too much thought. My medication controls the symptoms, and as long as I adjust my dosage when needed, it has very little bearing on the activities of my daily life. Naturally, the diseases that do incapacitate me and have regular, direct influence on my quality of life, my productivity, and my relationships are the ones I focus on; there is nothing as primal and immediate as the act of drawing breath.
Aviva Brandt also emphasizes the quality-of-life aspect in her perspectives on illness and her semantic choices. “My mystery illness, on the other hand, affects every single part of my life. I can’t forget it or ignore it. It affects my entire family, especially my young daughter, who went from being home with mommy all the time to full-time daycare at age two and a half when I suddenly got too sick to take care of her on my own at home,” she says. “I like the word ‘disease,’ and will probably use it if and when I finally get a diagnosis. But to me, the word implies that you know what you have. It’s a scientific word in some ways. And since I’m in limbo-land, it doesn’t feel like I have the right to use it yet.”
Patients don’t want to be reduced to a laundry list of symptoms or a disease label, yet science matters, and the words we choose to describe and categorize illness have enormous reach. A label can bestow many things: a medical billing code for insurance purposes; a course of treatment or medication; entrée to a particular community of like patients; validation for physical symptoms. On the other hand, the lack of a label or classification radiates complexity outward, too, from personal doubts and skepticism to difficulty securing necessary benefits or work accommodations. Ambiguity is often the enemy of patients.
The relationship between illness and disability is equally complicated. Not everyone with a physical disability has a chronic illness, and not everyone with a chronic illness is considered disabled by his or her symptoms; but there is a lot of crossover. Writer Susan Wendell makes a useful distinction between the “healthy disabled” and the “unhealthy disabled.” The healthy disabled are those whose physical symptoms and limitations are fairly stable and predictable. She writes, “They may be people who were born with disabilities … or were disabled by accidents or illnesses later in life, but they regard themselves as healthy, not sick, they do not expect to die any sooner than any other healthy person their age, and they do not need or seek much more medical attention than other healthy people.”1 The population of this group is in flux, since some conditions do progress and, as Wendell notes, some people with relatively stable disabilities have other health conditions; but in general they are “healthy.”
When I think of Aviva Brandt’s ongoing medical and testing odyssey and the experiences of patients with diseases as diverse as multiple sclerosis and arthritis, it is clear Wendell is onto something. People with chronic illness may not reside permanently in the land of the “unhealthy disabled,” but many of us spend enough time there that we know this much: we do not fully belong in the world of the healthy, either. While people living with disabilities may not spend as much time actively “sick” as people with some chronic illnesses do, marginalization is often all too familiar.
“The metaphor that I keep returning to is ‘curb cuts’; if you’re an able-bodied person navigating a city sidewalk, you probably don’t notice curb cuts, but if you’re in a wheelchair they make all the difference in the world,” observes Duncan Cross (not his real name), a thirty-something patient with Crohn’s disease, an autoimmune disease that affects the bowel and gastrointestinal tract. “Lots of places used to build sidewalk curbs with total disregard for that fact, because they had no awareness of the wheelchair user’s experience. Finally, folks in wheelchairs were able to get the message through—those three inches of concrete might as well be Hadrian’s Wall, from their perspective. And most places started building their sidewalks differently as a result,” he says.
With chronic illness, the solutions, like the symptoms, are not that concrete. As Wendell writes, “Many of us with chronic illnesses are not obviously disabled; to be recognized as disabled, we have to remind people frequently of our needs and limitations. That in itself can be a source of alienation from other people with disabilities, because it requires repeatedly calling attention to our impairments.”2 She’s right: if you’re part of a community that has fought for decades to gain footing in personal and professional realms, as the disability rights community has, then the experience of deliberately naming problems and (necessarily) demanding recognition and accommodations for them could run contrary to those goals. The unpredictability of symptoms and their severity that sets chronic illness apart from certain physical disabilities can also make for “unreliable activists,” individuals who might be able to run workshops or attend policy meetings one day and be bedridden the very next.3