In the Kingdom of the Sick: A Social History of Chronic Illness in America


by Laurie Edwards


  Industrialization and urbanization were responsible for the emergence of diseases like polio, but changes in the way people communicated were responsible for spreading public health goals, too. Disease wasn’t just about scientific theories; it was a social phenomenon. The America that emerged after World War Two was fighting a war in Korea and was consumed with the Cold War and McCarthyism, and a new form of technology brought these events—and, more importantly, the intellectual and emotional basis for them—into the home. Television was an important player in spreading the “gospel of health” and promoting newly focused public health and medical research goals. A well-run state depended on people adopting a preferred public health agenda, and mass communication of health literature allowed that to happen.45 Putting health information in the hands of the general public took it out of the exclusive domain of the doctor in the laboratory or operating room and brought it into the realm of the patient’s narrative and subjective experience.

  It is in this context that we reconsider Melissa McLaughlin’s chronic fatigue syndrome and fibromyalgia, or Emerson Miller’s HIV, the latest additions to an ever-widening scope of conditions we can treat but cannot cure.

  “The fact that you’re just not going to get better seems unbelievable to most people, I guess,” says Melissa McLaughlin. One frustration for her is people who can’t understand that patients cannot control or fix everything. “It’s easier for them to believe that there is something you can control … There must be something you can do that you aren’t doing! Eating raw foods, forcing yourself to exercise, thinking your way out of it, trying the latest drugs that promise a cure in their commercial: something should work, and if you’re not better, then you’re not working hard enough. It’s frustrating, it’s everywhere (even, sometimes, in my own mind), and it’s just wrong. It’s just wrong: I can’t think or eat or exercise my way out of these illnesses, no matter how hard I try.” Even Melissa’s doctors followed suit, urging her to exercise more often even though it made her pain and fatigue much worse.

  On the other hand, we have our great fear of HIV, the infectious disease that does not bend to our will. Shame is often embedded in its mode of transmission, and so far its wily ability to mutate has made it impervious to the very same vaccination process that revolutionized modern medical science. Emerson Miller doesn’t believe he will see a cure in the lifetime of current researchers, and, in fact, he worries that the progress we have made may actually have a negative impact on the search for a cure and on vigilance against the spread of HIV.

  “I don’t want the sense of urgency to go away,” he says, hoping that the knowledge there is a drug cocktail that can effectively reduce viral load does not mean people will take the disease less seriously, particularly those who may contract the virus through preventable life choices.

  The journey from Plato and Socrates to the Enlightenment and Industrialization to more modern public health advances is a circuitous one. By the middle of the twentieth century, the ability of scientists, physicians, and public health officials to alter the course of diseases that once devastated the population made it possible for people to rethink illness and disability; no longer were these considered inevitable and immovable components of daily life. This attitude would have strong repercussions for the next generation of patients, the ones touched by the other big medical emergence of the postmodern era: chronic illness. The period immediately after World War Two was a time of what scholar Gerald Grob describes as irresistible progress, a time when it seemed like science was on the brink of curing so much of what ailed us.46 With so many concrete victories to point to, the existence of illnesses that would not go away—chronic conditions that were somehow beyond the reach of medical science—would appear that much more unpalatable.

  Chapter 2

  An Awakening

  Medicine and Illness in Post–World War Two America

  Every semester, when I ask my health sciences students to define what medical ethics means to them, I usually hear the same chorus of responses: treating the patient as a whole person. Advocating for the patient. My nursing students often chime in with the term “nonmaleficence”—avoiding doing harm to the patient. They often share examples of when those ethics were challenged without divulging personal details, since patient confidentiality is taken seriously in our classroom, thanks to the Health Insurance Portability and Accountability Act (HIPAA) of 1996. Common examples include teenage patients who want a course of treatment different from what their parents want for them, or an elderly patient whose wishes are not respected by the next of kin tasked with difficult decisions. Usually, it is when we explore instances of perceived lapses in judgment or ethics that we circle around to the most exhaustive understanding of why the students choose to define ethics as they do.

  Perched over Boston’s bustling Huntington Avenue in our fourth-floor classroom at Northeastern University, steps from Harvard teaching hospitals like Brigham and Women’s and Beth Israel Deaconess Medical Center, as well as other renowned institutions like Children’s Hospital, the Dana-Farber Cancer Institute, and the Joslin Diabetes Center, we are all fortunate. When I need to, I can take a left down Huntington Avenue, walking past the take-out restaurants and dive bars with dollar wing specials, past where the E Line street-level trolley cuts through the main thoroughfare, filled with high school students, nurses and medical residents, young mothers with children in strollers, and college students with iPod earbuds. When I walk through Brigham’s main revolving door I am no longer a writer and health sciences writing lecturer; I am a patient with a rare disease who depends on the innovative treatments and technology at hospitals like this. Quite literally, I am crossing Sontag’s threshold from the kingdom of the well into the kingdom of the sick. In many respects, I know this kingdom and its attendant customs and interactions more intimately than I do the realm of the healthy. Appointments and hospital admissions are frequent in my world, and the diagnostic tests, procedures, and treatments I sign consent forms for are a routine part of my life.

  My students often make this same walk down Huntington to their respective clinical and co-op placements, field placements the school arranges, minutes from their dorms and apartments and their classrooms and labs. They learn about patient care in some of the most medically advanced and prestigious research hospitals in the world. They too cross into another kingdom, shedding their college student personas and adopting the mindset of the health care apprentice. We each have our roles, and it is easy to forget that not all patients and providers have access like this.

  My students’ definitions of medical ethics are on point, but it is only with more probing and discussion that we land on the topics of informed consent and patients’ rights. I think this is partially a good thing; they see these principles at work so regularly in their rotations that the principles are just that: routine. Lists of patients’ rights are posted on walls and in emergency room bays throughout hospitals. Informed consent for procedures big and small often—not always, but often enough—entails a quick overview and a perfunctory signature. But every now and then, a student will question what we often take at face value.

  How informed is consent if, say, the patient doesn’t have a good grasp of English and there isn’t time for an interpreter, or the resources to provide one? To what extent do patients who have no health insurance and therefore limited ability to seek different care have their basic rights upheld? How helpful or equitable are online resources if they assume a digital literacy and access that some patients don’t have?

  Most often, it is when something goes wrong that we stop and think about the potential risks listed on the procedures we agree to undergo, or consider just what it means to be treated with respect and dignity regardless of our origin, religion, or financial status. Because those of us with appropriate access to consent forms and patients’ rights have the luxury to navigate a medical establishment that is at least moderately successful in upholding these basic promises, we don’t have to stop and consider them as much as we might otherwise.

  The ethical treatment of patients may depend in part on whether we think our illnesses say more about us than our health. On the surface, if we are just looking at obesity rates, cardiovascular disease, or a decline in physical activity precipitated by a digital lifestyle, it is easy to claim that yes, perhaps they do. If we consider the association between the environment in which we live and the risk of developing certain cancers and other conditions, then that is another layer of probability. However, the question probes at something much deeper than that. If our illnesses reveal strength or weakness in us, then so too does the way we treat the individual patient living with illness.

  In the decades just following World War Two and leading up to the social justice movements of the 1960s and ’70s, many of the concepts most of us take for granted had a fairly egregious track record. Informed consent was at best an afterthought, at worst deliberately ignored, and medical decision making was too often deeply skewed toward those with power. The 1950s and ’60s marked a turning point in patients’ rights, ethics, and medical decision making. For patients living with chronic and degenerative diseases, the timing of this was critical.

  Chronic Illness as an Emerging Priority

  On the heels of World War Two, America was coming down from the heady throes of patriotism and was exposed to more innovative medical technology. The establishment of the independent, nonprofit national Commission on Chronic Illness in May 1949 indicated a growing awareness of the demands of chronic disease.1 The Commission on Chronic Illness was a joint creation of the American Hospital Association, the American Medical Association, and the American Public Welfare Association,2 and its initial goals included gathering and sharing information on how to deal with the “many-sided problem” of chronic illness; undertaking new studies to help address chronic illness; and formulating local, state, and federal plans for dealing with chronic illness.3 This included plans to dispel society’s belief that chronic illness was a hopeless scenario, create programs that would help patients reclaim a productive space in society, and coordinate disease-specific groups with a more universal program that would more effectively meet the needs of all patients with chronic illness, regardless of diagnosis.4

  These goals indicate that when chronic illness was emerging as a necessary part of the postwar medical lexicon, it was seen as a social issue, not just a physical or semantic one. Many of these goals are the same ones patients and public health officials point to today, signaling either that the commission was particularly forward-thinking or that we have yet to mobilize and systematically address the unique needs of the chronically ill the way other movements have mobilized in the past.

  Still, the Commission on Chronic Illness was an important concrete step in the process to recognize and address chronic illness. It defined chronic illness as any impairment characterized by at least one of the following: permanence, residual disability, originating in irreversible pathological alteration, or requiring extended care or supervision.5 Now, we have many variations of the same theme. Sometimes, the length of time symptoms must persist differs; sometimes, the focus is on ongoing treatment rather than supervision. Rosalind Joffe, a patient with chronic illness who is a life coach specializing in helping executives with chronic illness stay employed, offers three important characteristics experts agree are often found in chronic illness: the symptoms are invisible, symptoms and disease progression vary from person to person, and the disease progression and worsening or improvement of symptoms are impossible to predict.6 I’ve always found the “treatable, not curable” mantra a helpful one in discussing chronic illness, since it allows for all those variances in diagnoses, disease course, and outcomes. In some cases, treatment could be as simple as an anti-inflammatory drug to manage mild arthritis or daily thyroid medication to correct an imbalanced thyroid hormone level. At the other end of the spectrum are diseases like cystic fibrosis, where the treatment progresses to include organ transplantation (which is a life-extender, not a cure).

  To get a sense of just how broad the spectrum of what we could define as chronic illness is, consider sinusitis, a very common chronic condition affecting some thirty-one million patients annually.7 Its frequency, duration, and treatment (because even those who undergo surgery for it are rarely fully cured) technically fit the basic meaning of a chronic illness, a prime example of the utility of substituting “condition” for “illness.” However, sinus congestion is not the ailment we usually associate with chronically ill patients. That this umbrella term reaches far enough to encompass AIDS is a telling shift and adds to that basic premise that chronic illness is treatable, not curable.

  More than being a straightforward counterpart of acute illness, the very notion of chronic illness is one rooted in social and class consciousness. Ours is a society that values youth, physical fitness, and overachievement. By the middle of the twentieth century, this heightened concern with the perceptions of others played out in rigid social conformity, as well as in anxiety about that conformity. Scholars and writers of the time worried that people were living in “slavish compliance to the opinions of others—neighbors, bosses, the corporation, the peer group, the anonymous public.”8 Given the external events of the time—McCarthyism, the Cold War, the space race—it is not hard to see why maintaining the status quo and the cloak of homogeneity would have been appealing to many, and why in the ensuing years, so many would rebel against that same conformity.

  As I write this, the term “self-improvement” conjures up images of extreme dieting and aggressive cosmetic surgeries and enhancements more often than it does industriousness or work ethic. In fact, the drive for perfection often spurs the desire for the shortcuts or immediate results our technology-driven culture makes possible. If science can improve on imperfections, shouldn’t we take advantage of its largesse? The middle of the twentieth century ushered in the idea that we must somehow measure up across all the social and professional strata of our lives. News headlines are filled with stories that track stars’ adventures in surgical reconstruction, and daytime television commercials are rife with weight loss ads and other enhancement products that offer big rewards with supposedly little risk. This upsurge in enhancement technologies reflects what physician, philosopher, and bioethicist Carl Elliott calls the American obsession with fitting in, countered by the American anxiety over fitting in too well. The very nature of chronic illness—debilitating symptoms, physical side effects of medications, the gradual slowing down as diseases progress—is antithetical to the cult of improvement and enhancement that so permeates pop culture.

  Autoimmune diseases, which affect nearly twenty-four million Americans,9 are a prime example of chronic illnesses that defy self-improvement. At their core, autoimmune disorders occur when the body mistakenly begins to attack itself. The concept first took root in 1957, but in The Autoimmune Epidemic, Donna Jackson Nakazawa points out that it wasn’t until the 1970s that it gained widespread acceptance. While heart disease, cancer, and other chronic conditions had been tracked for decades, as late as the 1990s no government or disease-centered organization had collected data on how many Americans lived with the often baffling conditions that make up autoimmune diseases.10 The mid-twentieth-century America in which the notion of autoimmune disease made its debut represents a pivotal period in the evolution of chronic illness. The country had just moved past the frenetic pace of immunization and research that followed World War Two. Patients’ rights and informed consent began to be recognized as important issues, particularly with the emerging field of organ transplantation, and those topics, along with the advent of managed care plans in the 1960s, contributed to the beginning of a marked change in how medicine and society looked at disease.

  With autoimmune diseases, the specific part of the body that is attacked manifests itself in a wide variety of conditions, from the joints and muscles (rheumatoid arthritis and lupus) to the myelin sheath in the central nervous system (multiple sclerosis) to the colon or muscles (Crohn’s and polymyositis). It isn’t so much a question of whether autoimmune disorders are “new” conditions as it is a question of correctly identifying them and sourcing the origin of that fateful trigger. Sometimes, something as innocuous as a common, low-grade virus can be the trigger that jumpstarts the faulty immune response, and research suggests many of us carry genes that leave us more predisposed to developing autoimmune disease. However, when we look at alarming increases in the number of patients being diagnosed with conditions like lupus, the role of the environment, in particular the chemicals that go into the household products we use, the food we consume, and the technology we employ every day, is of increasing significance.

  Nakazawa takes a strong position on this relationship: “During the four or five decades that science lingered at the sidelines … another cultural drama was unfolding in America, the portentous ramifications of which were also slipping under the nation’s radar. Throughout the exact same decades science was dismissing autoimmunity, the wheels of big industry were moving into high gear across the American landscape, augmenting the greatest industrial growth spurt of all time.”11

  It is simply not possible to discuss disease in purely scientific language. Culture informs the experience of illness, and living with illness ultimately shapes culture. From the interconnectedness of the way we work and communicate virtually to the way we eat to the products we buy, the innovation that has so drastically changed the course of daily life and culture has an unquestionable impact on health and on the emergence of disease. Technology and science inform culture as well, and cultural mores influence what research we fund, and how we use technology.

 
