By coincidence, just a week after my diagnosis, a panel convened by the National Institutes of Health made headlines when it declined to recommend universal screening for women in their forties; evidence simply didn’t show it significantly decreased breast-cancer deaths in that age group. What’s more, because of their denser breast tissue, younger women were subject to disproportionate false positives—leading to unnecessary biopsies and worry—as well as false negatives, in which cancer was missed entirely.
Those conclusions hit me like a sucker punch. “I am the person whose life is officially not worth saving,” I wrote angrily. When the American Cancer Society as well as the newer Susan G. Komen foundation rejected the panel’s findings, saying mammography was still the best tool to decrease breast-cancer mortality, friends across the country called to congratulate me as if I’d scored a personal victory. I considered myself a loud-and-proud example of the benefits of early detection.
Sixteen years later, my thinking has changed. As study after study revealed the limits of screening—and the dangers of overtreatment—a thought niggled at my consciousness. How much had my mammogram really mattered? Would the outcome have been the same had I bumped into the cancer on my own years later? It’s hard to argue with a good result. After all, I am alive and grateful to be here. But I’ve watched friends whose breast cancers were detected “early” die anyway. I’ve sweated out what blessedly turned out to be false alarms with many others.
A survey of three decades of screening, published in November in The New England Journal of Medicine, found that mammography’s impact is decidedly mixed: it does reduce, by a small percentage, the number of women who are told they have late-stage cancer, but it is far more likely to result in overdiagnosis and unnecessary treatment, including surgery, weeks of radiation, and potentially toxic drugs. And yet, mammography remains an unquestioned pillar of the pink-ribbon awareness movement. Just about everywhere I go—the supermarket, the dry cleaner, the gym, the gas pump, the movie theater, the airport, the florist, the bank, the mall—I see posters proclaiming that “early detection is the best protection” and “mammograms save lives.” But how many lives, exactly, are being “saved,” under what circumstances, and at what cost? Raising the public profile of breast cancer, a disease once spoken of only in whispers, was at one time critically important, as was emphasizing the benefits of screening. But there are unintended consequences to ever-greater “awareness”—and they, too, affect women’s health.
Breast cancer in your breast doesn’t kill you; the disease becomes deadly when it metastasizes, spreading to other organs or the bones. Early detection is based on the theory, dating back to the late nineteenth century, that the disease progresses consistently, beginning with a single rogue cell, growing sequentially, and at some invariable point making a lethal leap. Curing it, then, was assumed to be a matter of finding and cutting out a tumor before that metastasis happens.
The thing is, there was no evidence that the size of a tumor necessarily predicted whether it had spread. According to Robert Aronowitz, a professor of history and sociology of science at the University of Pennsylvania and the author of Unnatural History: Breast Cancer and American Society, physicians endorsed the idea anyway, partly out of wishful thinking, desperate to “do something” to stop a scourge against which they felt helpless. So in 1913, a group of them banded together, forming an organization (which eventually became the American Cancer Society) and alerting women, in a precursor of today’s mammography campaigns, that surviving cancer was within their power. By the late 1930s, they had mobilized a successful “Women’s Field Army” of more than one hundred thousand volunteers, dressed in khaki, who went door-to-door raising money for “the cause” and educating neighbors to seek immediate medical attention for “suspicious symptoms,” like lumps or irregular bleeding.
The campaign worked—sort of. More people did subsequently go to their doctors. More cancers were detected, more operations were performed, and more patients survived their initial treatments. But the rates of women dying of breast cancer hardly budged. All those increased diagnoses were not translating into “saved lives.” That should have been a sign that some aspect of the early-detection theory was amiss. Instead, surgeons believed they just needed to find the disease even sooner.
Mammography promised to do just that. The first trials, begun in 1963, found that screening healthy women along with giving them clinical exams reduced breast-cancer death rates by about 25 percent. Although the decrease was almost entirely among women in their fifties, it seemed only logical that eventually, screening younger (that is, finding cancer earlier) would yield even more impressive results. Cancer might even be cured.
That hopeful scenario could be realized, though, only if women underwent annual mammography, and by the early 1980s, it is estimated that fewer than 20 percent of those eligible did. Nancy Brinker founded the Komen foundation in 1982 to boost those numbers, convinced that early detection and awareness of breast cancer could have saved her sister, Susan, who died of the disease at thirty-six. Three years later, National Breast Cancer Awareness Month was born. The khaki-clad “soldiers” of the 1930s were soon displaced by millions of pink-garbed racers “for the cure” as well as legions of pink consumer products: pink buckets of chicken, pink yogurt lids, pink vacuum cleaners, pink dog leashes. Yet the message was essentially the same: breast cancer was a fearsome fate, but the good news was that through vigilance and early detection, surviving was within women’s control.
By the turn of the new century, the pink ribbon was inescapable, and about 70 percent of women over forty were undergoing screening. The annual mammogram had become a near-sacred rite, so precious that in 2009, when another federally financed independent task force reiterated that for most women, screening should be started at age fifty and conducted every two years, the reaction was not relief but fury. After years of bombardment by early-detection campaigns (consider: “If you haven’t had a mammogram, you need more than your breasts examined”), women, surveys showed, seemed to think screening didn’t just find breast cancer but actually prevented it.
At the time, the debate in Congress over health care reform was at its peak. Rather than engaging in discussion about how to maximize the benefits of screening while minimizing its harms, Republicans seized on the panel’s recommendations as an attempt at health care rationing. The Obama administration was accused of indifference to the lives of America’s mothers, daughters, sisters, and wives. Secretary Kathleen Sebelius of the Department of Health and Human Services immediately backpedaled, issuing a statement that the administration’s policies on screening “remain unchanged.”
Even as American women embraced mammography, researchers’ understanding of breast cancer—including the role of early detection—was shifting. The disease, it has become clear, does not always behave in a uniform way. It’s not even one disease. There are at least four genetically distinct breast cancers. They may have different causes and definitely respond differently to treatment. Two related subtypes, luminal A and luminal B, involve tumors that feed on estrogen; they may respond to a five-year course of pills like tamoxifen or aromatase inhibitors, which block cells’ access to that hormone or reduce its levels. A third type of cancer, HER2-positive, produces too much of a protein called human epidermal growth factor receptor 2; it may be treatable with a targeted immunotherapy called Herceptin. The final type, basal-like cancer (often called “triple negative” because its growth is not fueled by the most common biomarkers for breast cancer—estrogen, progesterone, and HER2), is the most aggressive, accounting for up to 20 percent of breast cancers. More prevalent among young and African American women, it is genetically closer to ovarian cancer. Within those classifications, there are, doubtless, further distinctions, subtypes that may someday yield a wider variety of drugs that can isolate specific tumor characteristics, allowing for more effective treatment. But that is still years away.
Those early mammography trials were conducted before variations in cancer were recognized—before Herceptin, before hormonal therapy, even before the widespread use of chemotherapy. Improved treatment has offset some of the advantage of screening, though how much remains contentious. There has been about a 25 percent drop in breast-cancer death rates since 1990, and some researchers argue that treatment—not mammograms—may be chiefly responsible for that decline. They point to a study of three pairs of European countries with similar health care services and levels of risk: in each pair, mammograms were introduced in one country ten to fifteen years earlier than in the other. Yet the mortality data are virtually identical. Mammography didn’t seem to affect outcomes. In the United States, some researchers credit screening with a death-rate reduction of 15 percent—which holds steady even when screening is reduced to every other year. H. Gilbert Welch, a professor of medicine at the Dartmouth Institute for Health Policy and Clinical Practice and coauthor of last November’s New England Journal of Medicine study of screening-induced overtreatment, estimates that only 3 to 13 percent of women whose cancer was detected by mammograms actually benefited from the test.
If Welch is right, the test helps between four thousand and eighteen thousand women annually: 3 to 13 percent of the roughly one hundred thirty-eight thousand whose cancers are diagnosed each year through screening. Not an insignificant number, particularly if one of them is you, yet perhaps fewer than expected. Why didn’t early detection work for more of them? Mammograms, it turns out, are not so great at detecting the most lethal forms of disease—like triple negative—at a treatable phase. Aggressive tumors progress too quickly, often cropping up between mammograms. Even catching them “early,” while they are still small, can be too late: they have already metastasized. That may explain why there has been no decrease in the incidence of metastatic cancer since the introduction of screening.
At the other end of the spectrum, mammography readily finds tumors that could be equally treatable if found later by a woman or her doctor; it also finds those that are so slow-moving they might never metastasize. As improbable as it sounds, studies have suggested that about a quarter of screening-detected cancers might have gone away on their own. For an individual woman in her fifties, then, annual mammograms may catch breast cancer, but they reduce the risk of dying of the disease over the next ten years by only 0.07 percentage points—from 0.53 percent to 0.46 percent. Reductions for women in their forties are even smaller, from 0.35 percent to 0.3 percent.
If screening’s benefits have been overstated, its potential harms are little discussed. According to a survey of randomized clinical trials involving six hundred thousand women around the world, for every two thousand women screened annually over ten years, one life is prolonged but ten healthy women are given diagnoses of breast cancer and unnecessarily treated, often with therapies that themselves have life-threatening side effects. (Tamoxifen, for instance, carries small risks of stroke, blood clots, and uterine cancer; radiation and chemotherapy weaken the heart; surgery, of course, has its hazards.)
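To see how those figures hang together, here is a back-of-envelope check; this is my own arithmetic, using only the numbers quoted above rather than anything from the underlying studies:

\[
0.53\% - 0.46\% = 0.07 \text{ percentage points} \;\approx\; 1 \text{ death averted per } 1{,}400 \text{ women in their fifties screened for ten years}
\]
\[
\frac{1 \text{ life prolonged}}{2{,}000 \text{ women screened}} = 0.05 \text{ percentage points}; \qquad \frac{10 \text{ women overdiagnosed}}{1 \text{ life prolonged}} = 10:1
\]

The trial survey’s one-life-per-two-thousand figure, in other words, is of the same small order as the per-decade reductions above, and for each of those lives roughly ten healthy women undergo treatment for a cancer that would never have harmed them.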
Many of those women are told they have something called ductal carcinoma in situ (DCIS), or “Stage Zero” cancer, in which abnormal cells are found in the lining of the milk-producing ducts. Before universal screening, DCIS was rare. Now DCIS and the less common lobular carcinoma in situ account for about a quarter of new breast-cancer cases—some sixty thousand a year. In situ cancers are more prevalent among women in their forties. By 2020, according to the National Institutes of Health’s estimate, more than one million American women will be living with a DCIS diagnosis.
DCIS survivors are celebrated at pink-ribbon events as triumphs of early detection: theirs was an easily treatable disease with a nearly 100 percent ten-year survival rate. The hitch is that in most cases (estimates range from 50 to 80 percent) DCIS will stay right where it is—“in situ” means “in place.” Unless it develops into invasive cancer, DCIS lacks the capacity to spread beyond the breast, so it will not become lethal. Autopsies have shown that as many as 14 percent of women who died of something other than breast cancer unknowingly had DCIS.
There is as yet no sure way to tell which DCIS will turn into invasive cancer, so every instance is treated as if it is potentially life-threatening. That needs to change, according to Laura Esserman, director of the Carol Franc Buck Breast Care Center at the University of California, San Francisco. Esserman is campaigning to rename DCIS by removing its big “C” in an attempt to put it in perspective and tamp down women’s fear. “DCIS is not cancer,” she explained. “It’s a risk factor. For many DCIS lesions, there is only a 5 percent chance of invasive cancer developing over ten years. That’s like the average risk of a sixty-two-year-old. We don’t do heart surgery when someone comes in with high cholesterol. What are we doing to these people?” In Britain, where women are screened every three years beginning at fifty, the government recently decided to revise its brochure on mammography to include a more thorough discussion of overdiagnosis, something it previously dispatched in a single sentence. That may or may not change anyone’s mind about screening, but at least there is a fuller explanation of the trade-offs.
In this country, the huge jump in DCIS diagnoses potentially transforms some fifty thousand healthy people a year into “cancer survivors” and contributes to the larger sense that breast cancer is “everywhere,” happening to “everyone.” That, in turn, stokes women’s anxiety about their personal vulnerability, increasing demand for screening—which, inevitably, results in even more diagnoses of DCIS. Meanwhile, DCIS patients themselves are subject to the pain, mutilation, side effects, and psychological trauma of anyone with cancer and may never think of themselves as fully healthy again.
Yet who among them would dare do things differently? Which of them would have skipped that fateful mammogram? As Robert Aronowitz, the medical historian, told me: “When you’ve oversold both the fear of cancer and the effectiveness of our prevention and treatment, even people harmed by the system will uphold it, saying, ‘It’s the only ritual we have, the only thing we can do to prevent ourselves from getting cancer.’”
What if I had skipped my first mammogram and found my tumor a few years later in the shower? It’s possible that by then I would have needed chemotherapy, an experience I’m profoundly thankful to have missed. Would waiting have affected my survival? Probably not, but I’ll never know for sure; no woman truly can. Either way, the odds were in my favor: my good fortune was not just that my cancer was caught early but also that it appeared to be treatable.
Note that word “appeared”: one of breast cancer’s nastier traits is that even the lowest-grade caught-it-early variety can recur years—decades—after treatment. And mine did.
Last summer, nine months after my most recent mammogram, while I was getting ready for bed and chatting with my husband, my fingers grazed something small and firm beneath the scar on my left breast. Just like that, I passed again through the invisible membrane that separates the healthy from the ill.
This latest tumor was as tiny and as pokey as before, unlikely to have spread. Obviously, though, it had to go. Since a lumpectomy requires radiation, and you can’t irradiate the same body part twice, my only option this round was a mastectomy. I was also prescribed tamoxifen to cut my risk of metastatic disease from 20 percent to 12. Again, that means I should survive, but there are no guarantees; I won’t know for sure whether I am cured until I die of something else—hopefully many decades from now, in my sleep, holding my husband’s hand, after a nice dinner with the grandchildren.
My first instinct this round was to have my other breast removed as well—I never wanted to go through this again. My oncologist argued against it. The tamoxifen would lower my risk of future disease to that of an average woman, he said. Would an average woman cut off her breasts? I could have preventive surgery if I wanted to, he added, but it would be a psychological decision, not a medical one.
I weighed the options as my hospital date approached. Average risk, after all, is not zero. Could I live with that? Part of me still wanted to extinguish all threat. I have a nine-year-old daughter; I would do anything, I needed to do everything, to keep from dying. Yet if death was the issue, the greatest danger wasn’t my other breast. It was that, despite treatment and a good prognosis, the cancer I’d already had might still metastasize. A preventive mastectomy wouldn’t change that; nor would it entirely eliminate the possibility of a new disease, because there’s always some tissue left behind.
What did doing “everything” mean, anyway? There are days when I skip sunscreen. I don’t exercise as much as I should. I haven’t given up aged Gouda despite my latest cholesterol count; I don’t get enough calcium. And, oh, yeah, my house is six blocks from a fault line. Is living with a certain amount of breast-cancer risk really so different? I decided to take my doctor’s advice, to do only what had to be done.
I assumed my dilemma was unusual, specific to the anxiety of having been too often on the wrong side of statistics. But it turned out that thousands of women now consider double mastectomies after low-grade cancer diagnoses. According to Todd Tuttle, chief of the division of surgical oncology at the University of Minnesota and lead author of a study on prophylactic mastectomy published in The Journal of Clinical Oncology, there was a 188 percent jump between 1998 and 2005 among women given new diagnoses of DCIS in one breast—a risk factor for cancer—who opted to have both breasts removed just in case. Among women with early-stage invasive disease (like mine), the rates rose about 150 percent. Most of those women did not have a genetic predisposition to cancer. Tuttle speculated they were basing their decisions not on medical advice but on an exaggerated sense of their risk of getting a new cancer in the other breast. Women, according to another study, believed that risk to be more than 30 percent over ten years when it was actually closer to 5 percent.