In the Kingdom of the Sick: A Social History of Chronic Illness in America
If we look at current perspectives and definitions of chronic illness from the Centers for Disease Control and Prevention (CDC), there is a telling change in focus from earlier iterations. In detailing the causes of chronic disease, which current data suggest almost one out of every two Americans lives with, the CDC listed lack of physical activity, poor nutrition, tobacco use, and alcohol consumption as responsible for much of the illness, suffering, and premature death attributed to chronic diseases in 2010.12 Given that heart disease, stroke, and cancer account for more than 50 percent of deaths annually, and each is linked to lifestyle and behaviors, it is not a surprise that these four factors are highlighted.13 Such emphasis implies something more than merely causation. It denotes agency on the part of patients whose choices and behaviors are at least somewhat complicit in their illnesses. This parallels older attitudes toward infectious diseases: if patients weren’t living a certain way (in squalor) or acting a certain way (lasciviously), they wouldn’t be sick. At the same time, it separates certain chronic conditions and the patients who live with them from the forward momentum of medical science: we can kill bacteria, we can eradicate diseases through vaccination, we can transplant organs, but the treatment and prevention of many conditions is the responsibility of the patient.
Questions of correlation and causation depend on data. It was in the period after World War Two that the methodical collection of statistics on chronic disease began. In the America President Dwight D. Eisenhower inherited in the 1950s, the average lifespan was sixty-nine years. Huge trials were under way to develop a polio vaccine and new treatments for the disease, and heart disease and stroke were more widely recognized as the leading causes of death from noninfectious disease. In 1956, Eisenhower signed the National Health Survey Act, authorizing a continuing survey to gather and maintain accurate statistical information on the type, incidence, and effects of illness and disability in the American population. The resulting National Health Interview Survey (1957) and the National Health Examination Survey (1960) produced data that helped researchers identify and understand risk factors for common chronic diseases like cardiovascular disease and cancer.14 These developments, which came right before the Surgeon General’s seminal 1964 report on smoking and lung cancer, revealed a growing awareness of the connection between how we lived and the diseases that affected us the most.
The legacy of this association between smoking (a behavior) and lung cancer (an often preventable disease) and policies to reduce smoking in the United States has helped shape public health in the twentieth and twenty-first centuries. From seat belt laws to smoke-free public spaces, the idea that government could at least partially intervene in personal decision making and safety issues started to gain real traction in mid-twentieth-century America. An aggressive (and successful) public health campaign to vaccinate against infectious diseases like polio was one thing; moving from infectious, indiscriminate disease to individual choice and behavior was quite another, and backlash against such public health interventions was strident.
Dr. Barry Popkin, a professor in the Department of Nutrition at the University of North Carolina, is on the front lines of the political debate over taxing soda and other sugary beverages, and he sees similarities between that earlier backlash and the pushback against policies like the beverage tax. He points to the public health success in saving lives through seat belt laws, and to data showing that taxing cigarettes cuts smoking and saves thousands of lives. “It’s the same with fluoridating water,” he notes, adding how in the 1950s the American Medical Association accused President Eisenhower of participating in a Communist plot to poison America’s drinking water. “There’s not a single public health initiative that hasn’t faced these arguments,” he says.
At the same time these data collection and public health programs took off, the National Institutes of Health (NIH), a now-familiar and influential research organization, gained prominence. Though originally formed in 1930, the NIH did not take on the kind of power we recognize today until 1946. The medical research boom ushered in by World War Two and bolstered by the Committee on Medical Research (CMR) threatened to retreat to prewar levels, an outcome that politicians and scientists alike were loath to see. They rallied to increase funding and support for the NIH. Tapping into the patriotic fervor of the time, they argued that medicine was on the cusp of its greatest achievements, from antibiotics to chemotherapy, and that supporting continued research wasn’t just for the well-being of Americans as individuals but was critical to national self-interest, too.15 This convergence of new technology and patriotic zeal undoubtedly benefited researchers, and in many ways it benefited patients, too. However, the case for national self-interest also created a serious gap when it came to individual patients’ rights.
Not surprisingly, military metaphors were often invoked to champion the cause: the battle against disease was in full effect. War, long considered “good” for medicine, was also the physical catalyst for the increased focus on medical research and innovations. Military language first came into favor in the late 1800s, when bacteria and their biological processes were identified, and it continued to proliferate as researchers began to understand more and more about how diseases worked. By the early 1970s, the application of military terms to the disease process was troubling enough to attract the precision lens of essayist Susan Sontag’s prose. She wrote, “Cancer cells do not simply multiply; they are ‘invasive.’ Cancer cells ‘colonize’ from the original tumor to far sites in the body, first setting up tiny outposts … Treatment also has a military flavor. Radiotherapy uses the metaphors of aerial warfare; patients are ‘bombarded’ with toxic rays. And chemotherapy is chemical warfare, using poisons.”16 The problem is, this scenario leaves little room for those who fight just as hard and do not win.
It is no coincidence that the tenor of the National Cancer Act of 1971, signed into law in December of that year by President Richard M. Nixon, reflected this military attitude. The act, characterized as “bold legislation that mobilized the country’s resources to fight cancer,” aimed to accelerate research through funding.17 Just as experts and politicians were unabashedly enthusiastic that infectious disease would soon be conquered with the new tools at their disposal, those involved in cancer treatment believed we were on the precipice of winning this particular “war,” too. On both fronts, this success-at-all-costs attitude would have profound implications for patients with chronic illness.
Patient Rights, Provider Privilege: Medical Ethics in the 1950s–1970s
Until the mid-1960s, decisions about care and treatment fell under the domain of the individual physician, even if those decisions involved major ethical and social issues.18 Whether it was navigating who should receive then groundbreaking kidney transplants or what constituted quality of life when it came to end-of-life decision making, these pivotal moments of conflict in modern-day patients’ rights and informed consent took the sacrosanct doctor-patient relationship and transformed it into something larger than itself.
A major call for ethical change came in the form of a blistering report from within the medical establishment itself. In 1966, anesthesiologist and researcher Henry K. Beecher published his famous whistle-blowing article in the New England Journal of Medicine, which detailed numerous abuses of patients’ rights and dignity. In doing so, he brought to light the sordid underside of the rigorous clinical trials pushed forth under the guise of patriotism in the 1940s. The examples that constituted Beecher’s list of dishonor included giving live hepatitis viruses to mentally disabled patients in state institutions to study the disease’s etiology, and injecting live cancer cells into elderly and senile patients, without disclosing that the cells were cancerous, to see how their immune systems responded.19 Such abuses were all too common in post–World War Two medical research, which social medicine historian David Rothman accurately describes as having “lost its intimate directly therapeutic character.”20 Is the point of research to advance science or to improve the life of the patient? The two are not mutually inclusive. The loss Rothman describes cannot be glossed over, and the therapeutic value of treatments and approaches is something we continue to debate today. So significant was this disclosure of abuse that Beecher was propelled to the muckraker ranks of environmentalist Rachel Carson (author of Silent Spring), antislavery icon Harriet Beecher Stowe (author of Uncle Tom’s Cabin, and not a relative of Henry Beecher), and food safety lightning rod Upton Sinclair (author of The Jungle). In the tradition of these literary greats, Beecher’s article leveled an indictment of research ethics so damning that it transformed medical decision making.21
While war might have been good for medicine in terms of expediency and efficiency, it was often catastrophic for the subjects of the clinical trials it spurred. The Nuremberg Code (1947) was the first international document to guide research ethics. It was a formal response to the horrific human experiments, torture, and mass murder perpetrated by Nazi doctors during World War Two. It paved the way for voluntary consent, meaning that subjects must agree to participate in a trial, must do so free of force or coercion, and must have the risks and benefits of the trial explained to them in a comprehensive manner. Building on this, the Declaration of Helsinki (1964) was the World Medical Association’s attempt to emphasize the difference between care that provides direct benefit for the patient and research that may or may not offer direct benefit.22
More than any other document, the Patient’s Bill of Rights, officially adopted by the American Hospital Association in 1973, reflected the changing societal attitudes toward the doctor-patient relationship and appropriate standards of practice. Though its broad principles included respectful and compassionate care, the specifics of the document emphasized the patient’s right to privacy and the importance of clarity in explaining facts necessary for informed consent. Patient activists were critical of the inability to enforce these provisions or mete out penalties, and they found the exception that allowed a doctor to withhold the truth about health status when the facts might harm the patient to be both self-serving and disingenuous. Still, disclosure of diagnosis and prognosis is truly a modern phenomenon. Remember that in the early 1960s, most physicians did not tell their patients when they had cancer; in one study, a staggering 90 percent of physicians reported that withholding the diagnosis was standard practice. As a testament to these wide-scale changes, by the late 1970s most physicians shared their findings with their patients.23
Certainly the lingering shadow of the infamous Tuskegee syphilis study reflects the ongoing question of reasonable informed consent. In the most well-known ethical failure in twentieth-century medicine, researchers in the Tuskegee experiment knowingly withheld treatment from hundreds of poor black men in Macon County, Alabama, many of whom had syphilis and were never told, so that the researchers could see how the disease progressed. Informed consent was not present, since crucial facts of the experiment were deliberately kept from the men, who were also largely illiterate. The experiment began in 1932, and researchers from the U.S. Public Health Service told the men they were getting treatment for “bad blood.” In exchange for participation, the men were given free medical exams and free meals. Even after penicillin became the standard treatment for syphilis in 1947, the appropriate treatment was kept from the subjects. In short, researchers waited and watched the men die from a treatable disease so they could use the autopsies to better understand the disease process.24
The experiment lasted an intolerable forty years, until an Associated Press news article broke the story on July 25, 1972. Reporter Jean Heller wrote, “For 40 years, the U.S. Public Health Service has conducted a study in which human guinea pigs, not given proper treatment, have died of syphilis and its side effects.” In response, an ad hoc advisory panel was formed at the behest of the Assistant Secretary for Health and Scientific Affairs. It found the experiment “ethically unjustified,” meaning that whatever meager knowledge was gained paled in comparison to the enormous (often lethal) risks borne by the subjects, and in October 1972, the panel advised the study be stopped. It did finally stop one month later, and by 1973, a class action lawsuit was filed on behalf of the men and their families. The settlement, reached in 1974, awarded $10 million and lifetime medical care for participants and, later, for their wives, widows, and children.25 A formal apology for the egregious abuse of power and disrespect for human dignity did not come until President Bill Clinton apologized on behalf of the country in 1997. “What was done cannot be undone,” he said. “But we can end the silence. We can stop turning our heads away. We can look at you in the eye and finally say, on behalf of the American people: what the United States government did was shameful.”26
The lack of informed consent was a major aspect of the morally and ethically unjustified Tuskegee experiment. Similarly, it is hard to argue that orphans, prisoners, and other populations that had served as recruiting grounds for experiments throughout the late nineteenth century and much of the twentieth century could have freely objected to or agreed to participate in experiments—freedom of choice being a hallmark of supposed “voluntary participation.” Widespread coercion undoubtedly took place. The rush to develop vaccinations and more effective treatments for diseases during World War Two blurred the line between medical care that was patient-centered and experimentation that fell short of this criterion. It is tempting to think such distasteful subject matter is in the past, but considering how many millions of patients with chronic illness depend on and participate in research trials to improve and perhaps even save their lives, it is an undeniable part of the present, too. When we factor in the complex issue of informed consent and the use of DNA in clinical trials today, we can start to see just how thorny these ethical questions remain.
Another poignant example of a breach in medical ethics is revealed by author Rebecca Skloot in The Immortal Life of Henrietta Lacks. The riveting narrative traces the cells taken from a poor black mother who died of cervical cancer in 1951. Samples of her tumor were taken without her knowledge. Unlike other human cells that researchers attempted to keep alive in culture, Henrietta’s cells (called HeLa for short) thrived and reproduced a new generation every twenty-four hours. Billions of HeLa cells now live in laboratories around the world, have been used to develop drugs for numerous conditions, and have aided in the understanding of many more. So prolific are the cells and their research results that some people consider them one of the most important medical developments of the past hundred years.27 And yet despite the enormity and immortality of Henrietta Lacks’s cells and the unquestionable profits the research on them has yielded, scientists never had consent to use them, and her descendants were never given the chance to benefit from them.
On the heels of Henry K. Beecher’s whistle-blowing article and heated debate over transplantation and end-of-life care, U.S. senators Edward Kennedy and Walter Mondale spearheaded Congress’s creation of a national commission on medical ethics in 1973. This helped solidify a commitment to collective (rather than individual) decision making and cemented the emergence of bioethics as a distinct field.28 With advances in reproductive technology, stem-cell research, and other boundary-pushing developments in medicine we see today, the importance of bioethics is far-reaching. New rules were put in place for researchers working with human subjects to prevent a “self-serving calculus of risks and benefits,” and written documentation came to replace word-of-mouth orders. The ubiquitous medical chart, once a private form of communication primarily among physicians, became a public document that formally recorded conversations between patients and physicians. In 1974, Congress passed the National Research Act, which created the National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research. This commission was tasked with identifying the principles that should govern experiments involving human subjects and then suggesting ways to improve protection of participants.29
Physicians were divided about the various changes and interventions from politicians, public policy experts, ethicists, and legal experts that emerged during this period. While some of the most powerful disclosures (such as the Beecher article) came from physicians themselves, many also feared the intrusion into the private relationship between doctor and patient and the shifts in power such interventions made possible. When physicians at the Peter Bent Brigham Hospital (now Brigham and Women’s Hospital) in Boston successfully transplanted a kidney from one identical twin to another in 1954, it was a milestone in surgical history as well as in collective decision making. Previously, in the closed relationship between physician and patient, the treating physician was responsible for exhausting every means of treatment for that patient. Quite simply, this model did not give physicians the answers for the complex new set of problems that arose once kidney transplants moved from the experimental to the therapeutic stage. Was it ethical, some wondered, to remove a healthy organ from a healthy donor, a procedure that could be considered “purposeful infliction of harm”?30 Even with donor consent, was participation truly voluntary, particularly when family members were asked to donate? How should physicians handle the triage and allocation of such a scarce resource? These questions were too complicated for the individual physician to address.
Kidney transplantation, which was soon followed by advances in heart transplantation, raised yet another crucial ethical question for physicians, patients, and society at large: How should we define death, especially when one patient’s death could potentially benefit another patient? For those languishing with end-stage illness and chronic disease, as well as the huge number of patients who would go on to receive transplants in coming decades, there was hardly a more important question. As transplantation increased, it became clear that using heart death as the definition would not work, since heart death put other transplantable organs at risk.