Elderhood


by Louise Aronson


  But maybe that’s just my bias.

  Forty-two years and 14.1 miles from the hospital where Huey P. Newton showed up with his gunshot wound, a patient of mine arrived at the emergency department of our university hospital. Mabel was confused, with slurred speech and a fluctuating level of consciousness, meaning she was alert at some times and sleepy and hard to interact with at others. Tests were done to determine what was going on, including a toxicology screen, or “tox screen,” for certain legal and illegal drugs, a fairly routine test when patients have what clinicians call “altered mental status.”

  So far so good, right? Except that Mabel was ninety-four years old and had been bedbound and fed through a tube for nearly five years after a devastating stroke. While she was too confused to be able to relay the details of her condition, it could have been diagnosed from the doorway of her emergency department cubicle: the ancient face, the unnatural kink of her neck and left arm, the feeding tube puckering her gown where it jutted up and dangled from her abdomen. Alternatively, the doctors and nurses taking care of her might have glanced at her chart, which prominently listed her age and underlying conditions, or they could have asked a few basic questions of her concerned daughter, who stood at the bedside stroking her mother’s forehead and murmuring soothing words, or of her two sons in the waiting room. Any one of those approaches, all such routine parts of the initial patient evaluation that they have standardized labels—“general appearance,” “chart biopsy,” and “past medical history,” respectively—would have made clear that by far the most likely cause of Mabel’s altered mental status was delirium.18

  Unless an acutely confused older patient is hit by a car or found with a heroin needle in a vein, they should be considered delirious until proven otherwise. Yet the doctors caring for bedbound ninety-four-year-old Mabel sent a toxicology screen. It may well be they did this out of habit or because they were following an (illogical, expensive) age-blind protocol. But those aren’t the only possible explanations. I know because I had a “control” patient, also old and quite debilitated, who was seen in the same emergency department in the same month as Mabel.

  For the better part of the last decade of his life, from his mid-seventies to his mid-eighties, my father became delirious every time he entered the hospital. He had delirium from a heart attack, a knee replacement, cardiac bypass surgery, pneumonia, an allergic reaction, a bladder infection, another orthopedic surgery, and two falls. On exactly zero of these occasions was a toxicology screen sent by the emergency department staff tending him. Since drug abuse is less common in females than males and less likely among the very old and homebound, it would have made far more sense to send a tox screen on my father than on Mabel. But that’s not what happened, even though their presentations were similar. Medically, both were old and frail, with long lists of diagnoses. Socially, both were accompanied by supportive, educated families, and had me as their advocate. Really, aside from her negative risk factors of greater age and disability, the biggest difference between Mabel and my father was skin color. My father was white, while Mabel was African American. It seemed clear that the resident treating Mabel couldn’t imagine her beautifully appointed single-family home filled with generations of family photographs and religious artifacts—not that any of that should have been relevant to the care she deserved or received.

  I wouldn’t send a tox screen on a bedbound ninety-four-year-old, but I might consider sending one for a younger person, and part of that choice might be based on skin color. If I did, I would note my prejudice, push it from my thoughts, and make an effort to see the patient with an open mind. That’s progress, but according to the research on biases in medicine,19 it’s probably not enough. And if people like me—or, worse, people who have such biases and don’t see them as problems—are the ones shaping and running our health care system, is it any wonder so many types of people routinely receive lesser care? Almost all of us fall into one or more of the categories subject to health care biases, and even those few who don’t today will one day, barring sudden death, by virtue of ill health or old age. When we reach elderhood in a world where a study of four- to seven-year-olds found that 66 percent wouldn’t want to be old,20 no matter who we were beforehand, we all become part of a vast, vulnerable, and underserved population.

  The handling of Mabel’s and my father’s delirium that month illustrates the importance of intersectionality,21 society’s multiply interacting and historically ever-evolving systems of privilege and oppression, inclusion and exclusion. Intersectionality shows that it’s never accurate to consider a person on the basis of just one of their defining categories. Mabel wasn’t just black or female or old; she was all three, and also disabled, heterosexual, educated, Christian, well groomed, and so much more. All those factors are always relevant in human experience, playing out not only across individual lifetimes but through generations, in ways both good and bad, depending on who you are and when and where you live.

  Joan Didion had one explanation for Huey Newton’s experience in the Kaiser emergency department until the facts shattered her understanding of him and his position in society. But what if her explanation remained true in some ways, just not in the way she expected it to? Wasn’t he both a historical outsider and a Kaiser member? And what if all the other possible explanations on the differential diagnosis were also true? Indeed, what if her mistake was not in evoking the wrong theory or assumption so much as in believing any one explanation could be sufficient to account for human behavior?

  At the risk of noting so many forms of bias that each loses its impact as list cedes to litany and the battle cry to which this writing aspires devolves into noise, I will merely itemize those for which I have seen scientifically sound and morally distressing data22: racism, classism, sexism, ageism, homophobia, xenophobia, and prejudice based on religion, primary language, literacy, substance use, housing status, gender fluidity, behavior in the medical setting, and various diagnoses, both physical and mental. These medical biases lead to bad care and impaired trust and unnecessary suffering, as well as high costs, avoidable disease, and deaths. It’s not that each of us manifests all prejudices or that we manifest one or more of them all the time, but—like all human beings—all health care professionals are some of these things at least some of the time, sometimes intentionally and sometimes not.

  If two patients of the same age and grooming and class,23 one brown and one white, present with the exact same heart attack, the white one will receive heart- and lifesaving treatment sooner, especially if the white one is a he rather than a she. If two women arrive with abdominal pain eventually shown to be from the same cause, the white woman will get more pain medication than the brown woman, especially if the white one’s primary language is English24 and the brown one’s is not. Equally disturbing from the point of view of a well-intentioned doctor, it seems that even if I mean well, make regular efforts to educate myself, and feel genuine connection with patients whose backgrounds differ from mine, I may still do them harm.

  Perhaps even more concerning, systems themselves—the National Institutes of Health, my medical center, and that colossal enterprise called American medicine—can reflect assumptions, norms, and values that perpetuate inequalities, endangering many of the same people those institutions want to serve. There are countless examples where a different norm is listed for older adults based on averages, not outcomes, from lab test results to thresholds for intervention. In most cases, we don’t know for sure whether those different standards reflect age-specific norms or inadequately investigated age-based pathology. When I was in training, we were told old people’s blood pressure ran high “normally” or “naturally.” When the topic was studied a few years later, it turned out that when old people had “normal-old” blood pressures—numbers that would have been considered high in youth or middle age—they had more strokes, just like the young and middle-aged.

  Medical norms often shape policy that in turn harms patients. Not only is hearing loss considered a “normal part of aging,” allowing most insurance plans to refuse to pay for hearing aids, but American medicine and health policy call hearing “normal” in old people at levels that, in children, trigger early intervention to improve their ability to function, learn, and communicate. Meanwhile, we know older adults with hearing loss develop cognitive impairment 3.2 years sooner than those with normal hearing, and older people with mild, moderate, and severe hearing loss are two, three, and five times more likely, respectively, to develop dementia. Although we don’t know that hearing loss causes dementia (something else could be simultaneously leading to both), it’s easier to treat the hearing part of that nefarious couple. Besides, hearing loss in old age is associated with functional loss, isolation, family discord, medical miscommunications, depression, anxiety, and paranoia. Despite all that scientific data on harms, we set higher thresholds for intervention in old age, and insurance doesn’t pay for hearing aids.

  Addressing biases and prejudices is complex. Doctors, it turns out, are people. Our behavior has all the variability, inconsistency, prejudices, and complexity that come with that categorization. The issue, then, is not whether “-isms” exist in medical practice but how they manifest, at what cost to patients, and what can be done to make medical care more just in the complex adaptive systems of medical culture, hospitals, clinics, and clinicians. This is where structural approaches are essential. Systems and policies can institutionalize or compensate for human biases and failings—the choice is ours.

  Until 2014, kidney transplants in the United States were allocated using a “first referred, first served” system. That sounds like a fair, impartial procedure, and it was probably well intentioned. Yet practice data from across the country show that doctors are less likely to refer kidney failure patients of color for transplantation in a timely manner (or sometimes at all). Those later referrals meant African American and Latino patients had lower chances of getting a new kidney; spent substantially more time on dialysis, a procedure that takes hours, often generates cycles of fatigue and nausea, and compromises a person’s ability to work and live to their potential; and faced greater chances of dying. Since 2014, patients’ places on the transplant list have been decided by when they began dialysis,25 not by when they were referred. This has eliminated some of the structural, race-based inequities in kidney transplantation. Others persist at the ever-moving, murky intersection of history, society, and medicine.

  Sometimes systemic injustice is intentional. Other times it occurs because science prioritizes what’s easy to measure rather than what matters. When a patient has a heart attack, we look at outcomes such as time from hospital arrival to catheter treatment, use of certain medications, and mortality. But the heart attack of an eighty-year-old with a seventeen-item problem list differs from the heart attack of an otherwise healthy fifty-five-year-old who collapses while jogging, even if the same heart vessel is clogged to the same degree at the same place. Standard medical measures leave out outcomes of critical importance to the older patient, such as return to prior cognitive function, loss of key abilities and independence, and risk of nursing home placement. Who a person is and where they are in their life always matters. Age blindness is another form of bias.

  8. ADULT

  OBLIVIOUS

  In 2012 a group of doctors at Johns Hopkins made a video called The Unknown Profession.1 The setup was simple. One winter’s afternoon, they walked around Baltimore with a video camera and asked people the question “What is a geriatrician?” They interviewed people of different ages, ethnic and racial backgrounds, and levels of education. Most had no idea and tried to guess. My favorite response was “a person who scoops ice cream at Ben and Jerry’s.” But the interview that sticks with me was with a middle-aged woman who tried to find clues in the sound or root of the word geriatrician. She was shocked when she learned the real definition, telling the videographer that she had spent the last few years caring for her elderly parents and had never come across that word.

  The specialty emerged in the United States only in 1978 and was just a decade old the year I started medical school. Geriatricians were a rare breed, and geriatrics was not on my or any of my classmates’ horizon of possible specialties. I heard of just one geriatrician in four years of med school. She worked at one of the two small community hospitals outside Boston where students sometimes did rotations. Working in the emergency department there during my fourth year, I vaguely recall a sighting of her and a lack of clarity about exactly what it was she did. The doctors in the emergency department found her amusing, though they were relieved to summon her when they had an old patient they didn’t know what to do with. From them I got the sense that because the geriatrician considered so-called social issues—Could the patient go home safely? Were there day-to-day activities with which he needed help? Why, really, had he come to the emergency department?—she was a lesser breed of doctor. True doctors, my supervisors made clear by their teaching topics and actions, dealt exclusively with biology and diseases and procedures.

  San Francisco was no different from Boston in that regard. In our three years of internal medicine training, there was much talk about who would choose which subspecialty, but I don’t recall anyone mentioning geriatrics. In my second year, as I began considering my postresidency career, I knew only that I wanted to be an expert in whole-person care and to serve people who really needed me. Many of us in the primary care track felt similarly. We debated whether to stay in academics or work in a community practice, whether to continue honing our skills as general internists or informally subspecialize in some way, maybe by working with AIDS patients. Although I enjoyed my older patients, I’d still encountered only that one geriatrician in Cambridge years earlier and had no idea what a geriatrician did or how it might differ from what a general internist or cardiologist or rheumatologist did when caring for an older patient. As a result, I didn’t realize geriatrics was the one subspecialty that would give me everything I wanted, and more. I figured I’d be a general internist taking care of adults of all ages, mostly in a clinic and occasionally in the hospital.

  In retrospect, I had gravitated toward older patients from the start but didn’t take note until a medical student pointed it out. Our team had admitted a very old, very small Chinese woman who spoke little English and always had at least one, and usually several, members of her large family in her room to help her. She’d come into the hospital struggling to breathe, with no appetite or interest in anything going on around her. After treatment for pneumonia and heart failure, she was a different person. Her bright brown eyes tracked the people and conversations in the room, even when she couldn’t understand them. When we told her through her translating relatives that she could go home soon, she put a hand in front of her mouth to hide the teeth missing from her giant smile.

  Once my team had filed out of her room into the hall, I reviewed the day’s plan with the intern and student in charge of her care. While walking to another floor to see our next patient, the fourth-year medical student grinned at me. She had been on my team at another hospital the year before, and we knew each other fairly well.

  “You love old patients,” she said.

  I looked at her, surprised and a little defensive, as if affection for an older person qualified as a shameful secret and she’d just outed me. Although I didn’t have the wherewithal to think it through in that instant, enough people made fun of older patients that I wasn’t sure I wanted to be known for having a special interest in them. It took me a second to realize that my student had been grinning because I had been grinning, and I had been grinning because seeing our nonagenarian patient’s smile and renewed hope had made me very happy. I’d also enjoyed working with her family, whose dedication filled me with respect and admiration.

  But something else about my smiling patient appealed to me, something I’m now loath to admit, much less write: she was cute. Small things of all kinds have always attracted me, and our patient had started small and shrunk in old age. She had tiny, well-formed features and wore a maroon watch cap night and day over her short, salt-with-a-dash-of-pepper hair. In the large hospital bed, she tucked the sheets under her armpits so only her arms, neck, and head showed, a diminutive figure framed by the white pillow, white sheet, and pale blanket.

  Calling an older person cute2 is considered infantilizing and insulting—largely because it is often one or both. As a social justice advocacy website based in San Diego notes, “ ‘Cute,’ said of an old woman, does not mean ‘hot.’ It means that she has said or done something that would not be at all remarkable coming from a normal person but does not fit the speaker’s stereotypes about old3 …” Having seen its demeaning uses, I’m sensitive to people’s distaste for the word when applied to old people. At the same time, I wonder whether the greater problem comes more from the old than from the cute.

  Cute isn’t the same as pretty or handsome; it implies an emotional attractiveness, not just a physical one. My family thinks I’m cute, and I love that. Since cute is also more likely to be invoked for something or someone small and we shrink with age, old people often earn that label. In old age, foot arches fall, spinal vertebrae and the spaces between them decrease in height, tendons and joints contract. I hope to be seen as cute once those things happen to me, though only if the word is used without condescension. The problem with much of today’s use of cute is its relationship to mechanistic and commercial notions of human value. Those judgments lead younger people to make indiscriminate assumptions about old age and incompetence, and older people to bemoan and deny normal age-related changes, thereby contributing to their own devaluation. Yet most of the time being called cute is a good thing, and not just when applied to someone sexually attractive. It’s positive when applied to kids, beloved animals, and winsome behaviors or objects. If used for old people to similarly express affection and appeal, it is not an insult but an acknowledgment that every life stage has unique charms.

 
