As I wave my hand near his chest wall to accelerate drying, I look around, noting that the bed and floor are littered with medical detritus: plastic and paper wrappers and sheaths, as well as pieces of the patient’s cut and ripped-off clothes, remnants of which dangle from his body amid catheters, leads, and blood. I am surprised by how little blood there is and how much of it appears iatrogenic—the result of his medical care rather than his injuries.
Just then, near my right ear, a female voice says loudly, “What are you doing?”
It’s the surgeon, the woman I knew from medical school.
“Give me that.” She rips the antiseptic bottle from my hand, pulls off the cap, pours the liquid onto the patient’s chest, and drops the bottle. Color soaks the sheets and drips onto the floor as she reaches for what she needs from the open chest tube insertion tray on the table behind us. In an emergency, speed matters more than protocol.
“Hold him still,” she barks, and I do.
Blood appears instantly along the neat line she makes with her scalpel. She moves her gloved fingers and a clamp into the space she has created, lifting and separating the skin and subcutaneous tissue from the structures beneath. The patient’s insides are strikingly pale. A red trail of blood flows out from the wound and down his side, creating a small pool on the sheet below.
She works quickly. An instrument goes in, she leans forward, and there’s a visible give as the metal shaft enters the pleural space. She puts her long finger into the aperture she’s just made, inserting it as far as the knuckle, then moves it around. I do nothing but watch. Her motions are ones I might use when preparing a Thanksgiving turkey, but she’s working on a living human being. I am not disgusted so much as mystified and appalled by the things one person can do to another.
I remember the patient bucking, which seems questionable in retrospect, though not impossible. In truth, the trauma I recall most vividly from that day is my own.
A quarter century later, I remember that as I painted our patient’s chest wall using the same deliberate technique I now use before injecting a patient’s knee or shoulder, the surgeon was looking at me with disgust, not disappointment—professionally, in other words, rather than as a friend or acquaintance. Her glance, as much as the trauma room events themselves, seared this moment into my uneven memory of that long-ago summer.
For days and weeks and years after that day in the trauma room, I couldn’t discuss what had happened with anyone. I felt ashamed by my incompetence and discomfort, and by something else I couldn’t name.
In medical school, there had been those who couldn’t wait to do the audacious things doctors do in and for human bodies, things that would be considered preposterous or criminal in other circumstances. Those students weren’t by any means all future surgeons, though I suspect most, if not all, future surgeons fell into this category. Without hesitation, and with what seemed to me uncanny instincts about what was required and how to do it, they made themselves useful. They fit right into medical culture, and I did not. Terrified of hurting patients, I awaited guidance and permission.
As disturbing to me as my own failings was the fact that a woman I had known as nice could violate another person’s body so casually and with such brute force. I could not think of how to politely phrase a question about what made that possible for her, or how to ask it with genuine curiosity. I also recognized that a portion of my discomfort had to do with gender, since she, like me, was female, and female violence is less likely to be physical. That thinking, I knew, was not entirely fair to either her or most men, even if I thought it for reasons based on abundant social, biological, and historical truths.
Here is a summary of what went on in that trauma room: the patient needed a chest tube. The surgeon did what needed doing the way it needed to be done, quickly and accurately. I neither understood nor adequately accomplished my own small task. We kept the patient alive long enough to make it to surgery and then walked away, acting like it was just another day at the office, because that’s what it was.
These are facts.
But so, too, are these: metal, plastic, and fingers were shoved into most of the patient’s orifices and through his flesh to create new holes in his endangered body. In his first moments in the trauma room, when he cried out or tried to complain or resist, he was forced into submission. At no time in the process did any one of the many people not doing something critical at that moment tell him what was happening or why, in case he could understand. At no time after did anyone pull the team together to discuss what we saw, and what we did, and what we might have done differently or better.
In situations like that—settings and circumstances where at least some people deem the violence necessary—there are so many facts. And many opportunities to do, and be, better.
I know this because I have seen doctors who are equally skilled in compassion, communication, procedures, and crisis management. Most of us are better at some of those than others, yet too often the first two are considered added bonuses, while the second two are seen as essential. This perspective is a defining trait of medical culture, though few recognize it as the ethically charged selection of priorities that it is. They see only the benefits of procedures and only the challenges of teaching, encouraging, and evaluating compassion and communication. Such valuing of certain sorts of knowledge over others is also a choice.
MODERN
The twentieth century was a period of rapid growth and progress in medicine, the highest of high points in medical history. Pathologists discovered causes and mechanisms of diseases and age-related changes, and researchers developed new diagnostic and therapeutic options, from EKGs and surgery to antibiotics, hormones, and other lifesaving medications. Medical advances in cardiology, oncology, dialysis, and joint replacement in particular saved the lives of people over fifty. Humans stopped dying when they once would have, gaining years and sometimes decades of meaningful life. They lived into old age with more “treatable” medical conditions—approached with the same fix/cure mentality that had reaped such great rewards earlier in their lives.
But treatment that had been beneficial in younger adults could be problematic in old age. Patients were increasingly kept alive in warehouses, some called “skilled nursing facilities” or nursing homes, and others, scarier still, where machines breathed for them and fed them, where they lay day and night, unable to move or talk and receiving few, if any, visitors.
For most of the century, the majority of research to elucidate specific diseases and their treatment was done on young or middle-aged people. Of course, old-age specialists studied old people, but they focused on the oldest and frailest patients and on geriatric syndromes, conditions such as falls and frailty that are of critical importance to old people but of little interest to most doctors.1 That left much of old age and most people in their later sixties and beyond languishing in a no-man’s-land2 between standard adult internal medicine and geriatrics.
In the late 1930s and 1940s, panic about declining birth rates and increasing longevity fueled an interest in the study of aging and geriatrics in developed countries. By the 1950s, gerontological societies and journals existed in at least seventeen countries, mostly in Europe, and by midcentury geriatrics was at least an unofficial specialty in most of those countries, though nowhere with large numbers or high prestige. In 1953 the United States had three geriatrics professors, and Glasgow appointed the first UK professor in 1964.
Unfortunately, those developments did little to improve the experiences of older patients. Not only were old-age specialists few in number, but their efforts were often stymied by the medical establishment. In the 1950s, American geriatricians complained about the unwillingness of general hospitals to admit older patients and the paucity of geriatrics-trained nurses.3 Similarly, a 1956 British government report noted that “the old age group are currently receiving a lower standard of service than the main body of consumers and there are also substantial areas of unmet need among the elderly.”4 They cautioned that doctors should not be writing off ailments in old patients with “the facile explanation as being due to ‘old age.’ ”
This still happens. The best response to this combination of social prejudice and medical laziness came from a nonagenarian who went to see a doctor about knee pain. After a history and exam of the knee, the doctor said, “What do you expect? The knee is ninety-five years old!” To which the old man replied, “Yes, but so is the other one, and it doesn’t bother me a bit.”
In the wake of World War II, improved diagnostic and treatment methods led to increased awareness of the roles of functional and psychological factors in the health of older adults, and some expansion of special institutions for people with dementia and other age-specific conditions—at least in some countries. Many national health systems in Europe, Japan, and elsewhere combined medical and social care to effectively allow older adults to remain at home (as almost all wish) and to prevent costly hospitalizations and nursing home placements. The United States bucked this logical and socially responsible trend.
The line between medical and social care is created by politics, not biology. Most European countries began providing glasses, hearing aids, walkers, and dentures as part of national health care. Not so in the United States, where they continue to be viewed as “nonmedical,” leaving individuals or families to pay for them. Today, the very poor can sometimes get them through Medicaid or charitable organizations, and the well-off can easily buy them. Everyone else is out of luck. The thinking behind this is that medical problems require drugs or surgery; a condition that doesn’t need one or both of those isn’t medical, even if it is a bodily dysfunction that affects well-being and health. In the United States, you can get laser treatments that might not help your eye disease, but not the glasses that will enable you to keep active despite your visual loss. And you can get a cochlear implant but not a hearing aid. By calling costly, “sexy” interventions medical, and cheaper, more functionally focused devices nonmedical, American health care supports the high-profit pharmaceutical and device industries (that fund political candidates from both major parties) at the expense of the far larger numbers of citizens who would benefit from assistive devices (the people many politicians would like to represent but cannot because not supporting big health means not getting reelected5).
INDOCTRINATION
PubMed is the search engine for the National Library of Medicine’s comprehensive biomedical and life sciences journal article database, an online resource where doctors look up almost everything. Put in the word violence and dozens of key phrases pop up. However, none of them address the violence doctors inflict on patients. Searching violence by doctors yields articles on violence toward or against doctors.
At this moment in American history, violence figures daily in the news, the perceived need for violence is highly subjective, and certain people are more likely to be its victims than others. Police and prosecutors, policymakers and the public, are all examining how they contribute, consciously and unintentionally, to our society’s explicit and structural violence. Yet, in my profession, we are not reconsidering our own violent acts from new or varied perspectives.
It’s not that we’re avoiding current events, but doctors look at violence as we look at everything: from a position of power and privilege on our turf (conceptual as well as concrete). These days medical professionals are talking more about race and racism and how violence affects the lives of our patients, trainees, and colleagues. But we aren’t looking at the unnecessary violence of our own work—in particular, those instances when we say we have no choice, claiming there’s no other means to our unquestionably laudable ends and that people who question our violence in those moments, when lives or organs hang in the balance, clearly do not understand what we’re up against. In medicine—as in law, policing, politics, and education—we labor under the delusion that our challenges are unique, our coping mechanisms justified, our fundamental assumptions accurate, and our moral imperative sacrosanct.
When my PubMed search came up empty, I e-mailed a renowned academic, a person well versed in the medical literature, to ask whether I was missing some key search phrase or literature. Her reply made it clear that no one is studying violence from this particular angle, at least not directly. It’s also worth mentioning that the people she thought might know something about violence by doctors all studied problem patients and problem doctors. While these groups matter, I am at least as interested in violence by those of us who are not problems, we who are by all measures just doing our jobs and doing them well.
A 2002 World Health Organization report notes that violence lacks a singular meaning, since both acceptable behavior and harm are culturally influenced, subjective, and fluid. Then it offers this definition: “the intentional use of physical force or power, threatened or actual, against oneself, another person, or against a group or community, that either results in or has a high likelihood of resulting in injury, death, psychological harm, maldevelopment, or deprivation.”
Strictly speaking, by this definition, violence in medicine is inherent and ubiquitous. In most doctor-patient encounters, the physician holds the power. We have license to use some types of physical force. Many medical decisions, discussions, procedures, and prescriptions carry a high likelihood of harm or trauma, as does our deprivation-filled, hierarchical, and psychologically demanding training process. As we move through our days, violence is a constant threat and frequent reality.
But this definition neglects the issue of intent, as in the person’s primary goal in the act. In medicine, force or power are generally exerted with the goal of improving a patient’s health or saving a life, not with the intention of harming or killing, though those things regularly happen as well. As a child with a ruptured appendix, when I nearly died in an elevator on the way to the operating room, people jammed needles into me, shoving parts of my body, yelling, and thrusting a tight oxygen mask over my face. As a teen, when I dislocated my shoulder playing volleyball, a tall, muscled orthopedist yanked at the arm in order to put it back where it belonged, using a maneuver that felt medieval and, if only briefly, shockingly painful. The first medical violence saved my life; the second restored function in my dominant arm.
Yet there are many other instances where I’m left wondering about where and how we determine what violence is necessary or acceptable. For example, does being in the trauma room offer license to do things as expeditiously as possible, providing a pass to doctors who are stressed and afraid of performing badly or failing their patient? And if so, what about doctors who have too much to do and need to get through this procedure or that admission, and this call night or that clinic, so they can move on to the next one? Where do we draw the line of acceptability, and where should we? Should the line’s location vary by circumstance or specialty, or by individual acts versus systemic and structural ones? At present, we count only a small fraction of medicine’s harms, prioritizing those suffered by patients over those to staff and systems, and counting almost exclusively the harms that visibly affect the body or its function while ignoring the scars of violent words, actions, and policies on psyches and relationships.
I’m also thinking here of the damage done by harms inadequately acknowledged. A friend whose husband has cancer sent me e-mails in which she described his treatment, meant to induce remission of his disease. In the less likely but ideal scenario, his treatment would also cure him. Three months in, she wrote of his chemo: “It halts—reverses, more like—his recovery progress from the surgery. He’s skinny skinny—it feels like an inadequate word to describe his physical condition.”
Two months later, she explains that he stopped the chemo early. “It was just so grueling. Since he stopped, he’s been gaining some strength and weight and eating with somewhat less suffering.”
Those last three words haunt me. The chemo didn’t have just a high likelihood of resulting in injury, death, psychological harm, maldevelopment, or deprivation; it was undeniably causing all those things except his death, and it was pushing him toward that terminal precipice as well. Equally telling, the violence had been such that my friend and her husband couldn’t even imagine the end of suffering; the most they could hope for was less of it.
Anyone who has been through medical training, and most people who have been in medical settings for other reasons, particularly as a patient or as a person who loves a patient, has witnessed violence. Often it is necessary, but sometimes it is not, or it is questionable, or more of a potential than an actuality. I should explain that I’m using the word potential here as we do in anatomy, to signify a space that exists but isn’t always apparent between two adjacent structures … until it fills with inflammatory fluid or blood as a result of injury or disease. Thus, even now, when violence can be felt everywhere in our divided body politic, it’s entirely possible for certain sorts of people—a middle-aged white female doctor like me, for example—to go weeks seeing it only if I choose to.
So much might be questioned or questionable in the minute-to-minute practice of medicine, but we have been conditioned to assume its necessity. In both medicine and society, when it comes to violence, we haven’t done enough to negate the negative, or to adequately explore the line between necessary and unnecessary. If we were to find that line, I suspect it would be like the line between water and land in a tidal zone, where what is expected varies with seasons, weather, time of day, and who is looking—a local, a tourist, a fisherman, a naturalist, or a poet.