Elderhood

by Louise Aronson


  I didn’t question the framework that portrayed the vast majority of human beings as something other than normal—none of us did. Fortunately for me, there was no question that children mattered at least a little in medicine, despite being fundamentally abnormal.

  Starting with fertilization, we spent weeks listening to lectures, discussing cases in small groups, and looking at tissue under the microscope, moving chronologically from embryology through to physical maturity. We learned about bodily changes and psychosocial development in neonates, toddlers, children, and adolescents, topics touched on again in our second year, when we were introduced to childhood diseases and deformities, congenital and acquired. If you added those weeks of classroom learning to our required third-year pediatrics rotations—a month in the hospital and a weekly session or two in a pediatrics clinic during the outpatient block—all medical students spent several months learning about children. That sounds like good training for future doctors, unless you consider that childhood constitutes a quarter of most lives, and a few months was by no means a quarter of our education. In those days, it never occurred to me to wonder how priorities were selected in medicine or why an age group’s health care needs and health service use didn’t affect how much time we spent studying it.

  Scientific knowledge is considered objective, but it is produced within the same social structures and biases as the people who conduct it. For most of the history of Western medicine,1 no distinction was made between the treatment of children and adults. That approach paralleled the societal approach to childhood. Kids were not essentially different from adults, just smaller. Accordingly, they were put to work as soon as they could walk, as they still are today in many parts of the world. Over centuries, they were portrayed in paintings as adults of small stature, with clothing and bodily proportions the same as those of grown-ups and thus anatomically inaccurate. In the mid-1800s, England and several other European countries passed child labor laws, and soon thereafter childhood became a more distinct period of life. The specialty of pediatrics emerged in response to those larger societal changes. Since the best medical literature of the day came from across the Atlantic, physicians in the United States were aware of the new specialty but weren’t much interested. Children were the purview of women, and medicine the purview of men. Although the first pediatrics organizations were founded in the late 1800s in the United States, pediatrics didn’t get much general attention until the lead-up to World War I, when the powerful realized our country would have more soldiers if fewer children died.2 Like so much history, pediatrics owes its existence both to some people doing the right thing for the right reasons and to advantaged others doing that same thing because it also helped preserve their power and supremacy.

  When the numbers of women entering medicine transitioned from token to near parity, women’s health emerged as an area of scholarship and care worthy of clinics, departments, research grants, and centers of excellence. The class ahead of mine, the class of 1991, was the first at Harvard Medical School to be half female, half male. That’s nearly thirty years ago, yet we still speak of “health” and “women’s health,” implying they are distinct entities and that health is a mostly male condition. Concern for the health of brown- and black-skinned people, who make up a majority of human beings worldwide and a significant proportion of the U.S. population, followed a similar trajectory. By the start of the twenty-first century, with the student bodies of many schools beginning to resemble our diverse population, the marked racial and ethnic disparities in health care began receiving attention and funding. Now medical schools have diversity deans and the National Institutes of Health operates a National Institute on Minority Health and Health Disparities.

  These efforts, helpful and needed as they are, also paradoxically reinforce the status quo. They are overlays, not essentials, focused on “minorities,” even in places like California, where so-called nonwhites are a majority. When people are defined by what they are not, we are in trouble.

  Social forces and cultural rationales determine what doctors study and value. Throughout medical history, this has played out in brilliant discoveries, lifesaving treatments, and better health, but also in worse health, injuries, and deaths. The negative outcomes have been unintentional—sometimes. Other times, there have been overt travesties and pervasive conditions of nefarious neglect. Among the glaring blemishes on the tip of the iceberg of my profession’s not-so-distant past are the Tuskegee syphilis experiments on black men, the blaming of mothers for their children’s autism, and the sterilization of poor and disabled people without their consent. In science and medicine, as in the rest of life, bias infiltrates our thoughts, actions, emotions, and priorities in ways we can only partially control, and then only if so inclined. Almost without exception, groups of human beings haven’t received the focus, funding, respect, and care they needed for good health until they were associated with a pressing national concern or could advocate for themselves.

  In medical school, we occasionally encountered Norm’s 60 kg sister, “Norma.” The reason for her infrequent appearances was another unstated but easily ascertained curricular lesson. What mattered most about Norma by far—indeed, what warranted her place in our studies—were her sex organs, hormones, and reproductive abilities, which apparently didn’t pertain to most of medicine. We sometimes discussed diseases with racial or ethnic predilections—terminology that almost always referred to anyone who was not of European descent. We learned that the differences between Norm and Norma, and between Norm and people who were black or brown, both complicated medical research and created problems in patient care. For example, black people didn’t respond to certain first-line medications for high blood pressure the way Norm did, and this “unresponsiveness” put them at an especially high risk for strokes. Similarly, Norma had failed to read the heart attack manual and regularly presented with “atypical” symptoms—ones different from Norm’s—thereby causing dangerous delays in her diagnosis and treatment. It turned out that differences in gender, race, and ethnicity affected disease course, drug effectiveness, clinical care, and health outcomes, including mortality. Fortunately for me, by the time I entered my clinical years, some people had begun suggesting that excluding a majority of the population from trials that help us understand and treat diseases might not be such a terrific idea. Norm’s norms weren’t universal after all.

  Thirty years later, medicine has made incremental, not fundamental, progress. In recent years, as I have traveled the country giving a talk that mentions Norm, medical students nod and smile in recognition. Despite the increased attention to most forms of human diversity in medical schools today, future doctors still learn little about older patients, and most of them rarely question why that might be. I don’t like that unreflective bias, but I understand it. My classmates and I were exactly the same. Had I been asked whether I had learned enough geriatrics in medical school, a question that once appeared on the Association of American Medical Colleges’ (AAMC) annual graduation survey, I would have said yes. Three-quarters of recent medical students give that same answer, though few get much more training in the physiology or care of older adults than I did. The problem with knowing very little about a topic is that you don’t know how much you don’t know, and the problem with valuing one social group less than others is that your ignorance about them doesn’t bother you. In the classrooms, clinics, and culture of medicine, even a small dose of geriatrics strikes most people as more than enough. Perhaps that’s why the words geriatrics, elderly, and old are entirely absent from current AAMC surveys,3 which dutifully list so many other specialties and populations.

  We in the class of 1992 considered ourselves open-minded, thoughtful, and compassionate. While we lobbied for better care of underserved and vulnerable people and more attention to women’s and LGBT health, it never occurred to us that we might be leaving out an entire social group. Working with old people either never occurred to anyone or held little appeal.

  At the same time, illness in old age wasn’t entirely ignored during my medical school years. Like women, old people often presented “atypically” when experiencing common conditions, and, like children, they routinely developed life-stage-specific conditions. Children and adults each had specialized doctors, hospitals, and clinics because they needed them. Old people only had nursing homes, which weren’t really medical facilities at all, as their name made perfectly clear. Besides, nursing homes had been around for ages—you could tell by looking at many of them—and clearly served a purpose. People moved there when they couldn’t really do much, and—close as they were to death—it didn’t seem surprising that we couldn’t do much for them. The residents we worked with in the hospitals often pointed out that medicine’s usual goals of saving lives and curing disease seemed misplaced or ill-advised in many older patients. There being no apparent alternatives, they focused on sending older patients back to their nursing homes as quickly as possible.

  For years after I became a geriatrician, if I’d been asked whether we were taught much about conditions particular to old age, I would have said, Very little. Several of my medical school textbooks argue otherwise. Those thick volumes contain detailed information on common geriatric syndromes, including delirium, incontinence, and falls, and occasional mention of unique disease presentations and needs. Yet such topics didn’t get attention proportional to their effect on patients’ lives or prevalence in hospitals and clinics. Even when they were covered, their impact was undermined by what’s known as the “hidden curriculum,”4 a “set of commonly held understandings, customs, rituals and taken-for-granted aspects in the clinical setting.”5 The second-class citizenship of older patients in medicine is entrenched and systematic.

  Among my accumulated professional texts is the maroon 1987 edition6 of a medical book long considered the bible of history-taking and physical examination. Curious about its contents, given my recollections of medical school, I looked up the most common disorder associated with old age. Cognition and dementia received just two and three lines, respectively, in the index, while heart disease, which occurs in both middle and old age, had over ninety lines and multiple subheadings, such as causes, assessment of, and techniques of examination, none of which appear under dementia. Near dementia, another D condition also got three listings. Drusen are yellow or white spots of extracellular material that accumulate in the back of the eye with age. Although drusen are now known to sometimes occur along with macular degeneration, a serious eye disease, that association was not recognized when the book gave drusen and dementia equal attention. Also notable is a “D word” that’s missing altogether. Even then, death was an exceedingly common diagnosis that warranted a careful physical examination.

  Those omissions were typical of twentieth-century medicine. Fifty major textbooks from across the medical specialties were organized into disease-oriented chapters with little or no content on end-of-life care. We all die of something, mostly diseases, and death from those diseases rarely occurs all of a sudden. A fifth of the way into the twenty-first century, most books have chapters on death and dying, yet the different physiology and pathophysiology of older bodies and late-stage illness and lives continue to get short shrift.

  A key reason for this is historical. Medical advances led to cures of diseases that for millennia had killed people, surely and usually rapidly: infections, childbirth, and blocked bowels, then high sugars and high blood pressure, failing hearts and kidneys, as well as certain traumas and tumors. By later in the twentieth century, doctors could unclog arteries to prevent heart attacks and forestall strokes, replace failing vital organs via transplantation, and treat certain cancers with targeted therapies. The phrase “miracle of modern medicine” felt apt.

  As people lived longer, it became clear that those cures had consequences. People developed chronic, more slowly lethal diseases, as longer-lived cells had more time and opportunities for replication errors and toxic exposures, and as damage accumulated in organs like brains, hearts, lungs, livers, intestines, and kidneys. Parts like ears, eyes, joints, and feet sometimes wore out even when essential internal organs held on.

  Although American medicine now recognizes the “epidemic of chronic disease” and “aging epidemic” as among health care’s main challenges, chronic ailments and the often older or aged patients who suffer from them, along with the professionals who focus on them and the tools and techniques for treating them, remain relegated to second-class citizenship.

  DIFFERENT

  In an essay he asked me to read, a resident physician described how he had allocated just fifteen minutes for the admission of a dying patient, figuring it was “yet another dying old woman.” After admitting another patient, he went to see the dying woman, who turned out to be in her forties, though physically and mentally in exactly the condition he expected. Suddenly, the time he’d allotted for her care seemed wholly inadequate. In the last part of the essay, he wrote about how bad he felt and how he had ended up confessing his mistake to several other residents. Most of them had had similar experiences. The young doctors agreed the takeaway lesson of the experience was that they needed to list age more prominently on handoffs to each other so the receiving physician could plan their time appropriately.

  “Have you written the ending yet?” I asked via e-mail. “It’s done,” he replied. “It ends with the lesson. I thought that’s what the journal wanted?” Apparently neither he nor his co-residents had noticed that, given two people in identical medical condition, they ascribed greater time and value to the final care of one over the other. It chilled me, though not as much as the alternative: that they did notice the differential but thought it morally justifiable.

  Over a half century ago, in his seminal book, The Nature of Prejudice, the Harvard psychologist Gordon Allport pointed out, “People who are aware of, and ashamed of, their prejudices are well on the road to eliminating them.”7 In contrast are those who are neither aware nor ashamed. In medical settings, grossly prejudicial comments about old people are uttered without shame or with obliviousness to their blatant bias. Such comments are acceptable in medicine because they are acceptable in life. The devaluing of old people is ubiquitous and unquestioned, a great unifier across the usual divides of class, race, geography, and even age.

  In the 1960s, the U.S. physician Robert Butler coined the term ageism, which he defined as “a process of systematic stereotyping of and discrimination against people because they are old,8 just as racism and sexism accomplish this with skin color and gender.” Butler helped establish the National Institute on Aging in the United States and founded the first department of geriatrics at an American medical school. He won the Pulitzer Prize for General Nonfiction for his 1975 book, Why Survive?: Being Old in America. Many of the book’s observations are as on-target today as they were forty years ago.

  “Aging,” wrote Butler, “is the neglected stepchild of the human life cycle. Though we have begun to examine … death, we have leaped over that long period of time preceding death9 known as old age.” He ascribed this neglect to ageism, noting that older adults are often viewed as universally sharing certain negative attributes, including senility and rigid thoughts and beliefs. In fact, old age is the most varied time of life; there are the eighty-year-olds who hold public office, work in factories, and run marathons, and there are those who live in nursing homes because they can no longer walk, think, or care for themselves.

  Why, then, might people ascribe such uniform negativity to old age? Butler had the following explanation: “Ageism allows the younger generations to see older people as different10 from themselves; thus, they subtly cease to identify with their elders as human beings.” While this makes sense, it doesn’t fully explain the widespread need to hold older adults apart. It’s also true that we feel sympathy for people with malaria, lung disease, or cancer, but most of us don’t and won’t have those challenges. We are safe. Not so for old age. Barring an early death, old age is every human’s fate, and generally not one met with eager anticipation. In some ways, even death is more attractive. It’s more clear-cut, more definitive; we are either alive or dead. For many, it is the way in which life might be compromised by advanced age, limping slowly rather than leaping toward death, that brings the greatest dread.

  A few years ago, the National Council on Aging released a public service video about flu prevention featuring an attractive sixty-five-year-old actress being denied the vaccine at her doctor’s office because she looked too young. A week later, the million-dollar Palo Alto Longevity Prize was announced as “dedicated to ending aging.” While the flu video provided important information, it implied that attractiveness and being sixty-five or older are mutually exclusive. The Longevity Prize may inspire important advances, but it also raises questions about whether we should be trying to “cure” one part of normal human development or reward exclusively biological approaches to existential challenges.

  Well-intentioned as they are, both efforts illustrate a common way that age bias adversely affects our approach to aging. We tolerate negative attitudes about old age to degrees that we—at least publicly and officially—no longer tolerate racism or sexism. We treat old age as a disease or problem, rather than as one of three major life stages. We approach old age as a singular, unsavory entity and fail to adequately acknowledge its great pleasures or the unique attributes, contributions, physiology, and priorities of older adults.

  Our age bias is so profound that actions viewed as outrageous when applied to other groups are considered acceptable when it comes to older adults. It’s virtually impossible to imagine release of a video in which a health professional refuses to give the flu vaccine to an attractive patient because of her skin color, or a prize with the goal of accelerating childhood so parents are less burdened by years of dependency and expense.

 
