The Emperor of All Maladies


by Siddhartha Mukherjee


  In children, leukemia was most commonly ALL—lymphoblastic leukemia—and was almost always swiftly lethal. In 1860, a student of Virchow’s, Michael Anton Biermer, described the first known case of this form of childhood leukemia. Maria Speyer, an energetic, vivacious, and playful five-year-old daughter of a Würzburg carpenter, was initially seen at the clinic because she had become lethargic in school and developed bloody bruises on her skin. The next morning, she developed a stiff neck and a fever, precipitating a call to Biermer for a home visit. That night, Biermer drew a drop of blood from Maria’s veins, looked at the smear using a candlelit bedside microscope, and found millions of leukemia cells in the blood. Maria slept fitfully late into the evening. Late the next afternoon, as Biermer was excitedly showing his colleagues the specimens of “exquisit Fall von Leukämie” (an exquisite case of leukemia), Maria vomited bright red blood and lapsed into a coma. By the time Biermer returned to her house that evening, the child had been dead for several hours. From its first symptom to diagnosis to death, her galloping, relentless illness had lasted no more than three days.

  Although nowhere near as aggressive as Maria Speyer’s leukemia, Carla’s illness was astonishing in its own right. Adults, on average, have about five thousand white blood cells circulating per microliter of blood. Carla’s blood contained ninety thousand cells per microliter—nearly twentyfold the normal level. Ninety-five percent of these cells were blasts—malignant lymphoid cells produced at a frenetic pace but unable to mature into fully developed lymphocytes. In acute lymphoblastic leukemia, as in some other cancers, the overproduction of cancer cells is combined with a mysterious arrest in the normal maturation of cells. Lymphoid cells are thus produced in vast excess, but, unable to mature, they cannot fulfill their normal function in fighting microbes. Carla had immunological poverty in the face of plenty.

  White blood cells are produced in the bone marrow. Carla’s bone marrow biopsy, which I saw under the microscope the morning after I first met her, was deeply abnormal. Although superficially amorphous, bone marrow is a highly organized tissue—an organ, in truth—that generates blood in adults. Typically, bone marrow biopsies contain spicules of bone and, within these spicules, islands of growing blood cells—nurseries for the genesis of new blood. In Carla’s marrow, this organization had been fully destroyed. Sheet upon sheet of malignant blasts packed the marrow space, obliterating all anatomy and architecture, leaving no space for any production of blood.

  Carla was at the edge of a physiological abyss. Her red cell count had dipped so low that her blood was unable to carry its full supply of oxygen (her headaches, in retrospect, were the first sign of oxygen deprivation). Her platelets, the cells responsible for clotting blood, had collapsed to nearly zero, causing her bruises.

  Her treatment would require extraordinary finesse. She would need chemotherapy to kill her leukemia, but the chemotherapy would collaterally decimate any remnant normal blood cells. We would push her deeper into the abyss to try to rescue her. For Carla, the only way out would be the way through.

  Sidney Farber was born in Buffalo, New York, in 1903, one year after Virchow’s death in Berlin. His father, Simon Farber, a former bargeman in Poland, had immigrated to America in the late nineteenth century and worked in an insurance agency. The family lived in modest circumstances at the eastern edge of town, in a tight-knit, insular, and often economically precarious Jewish community of shop owners, factory workers, bookkeepers, and peddlers. Pushed relentlessly to succeed, the Farber children were held to high academic standards. Yiddish was spoken upstairs, but only German and English were allowed downstairs. The elder Farber often brought home textbooks and scattered them across the dinner table, expecting each child to select and master one book, then provide a detailed report for him.

  Sidney, the third of fourteen children, thrived in this environment of high aspirations. He studied both biology and philosophy in college and graduated from the University of Buffalo in 1923, playing the violin at music halls to support his college education. Fluent in German, he trained in medicine at Heidelberg and Freiburg, then, having excelled in Germany, found a spot as a second-year medical student at Harvard Medical School in Boston. (The circular journey from New York to Boston via Heidelberg was not unusual. In the mid-1920s, Jewish students often found it impossible to secure medical-school spots in America—often succeeding in European, even German, medical schools before returning to study medicine in their native country.) Farber thus arrived at Harvard as an outsider. His colleagues found him arrogant and insufferable, but he, too, relearning lessons that he had already learned, seemed to be suffering through it all. He was formal, precise, and meticulous, starched in his appearance and his mannerisms and commanding in presence. He was promptly nicknamed Four-Button Sid for his propensity for wearing formal suits to his classes.

  Farber completed his advanced training in pathology in the late 1920s and became the first full-time pathologist at the Children’s Hospital in Boston. He wrote a marvelous study on the classification of children’s tumors and a textbook, The Postmortem Examination, widely considered a classic in the field. By the mid-1930s, he was firmly ensconced in the back alleys of the hospital as a preeminent pathologist—a “doctor of the dead.”

  Yet the hunger to treat patients still drove Farber. And sitting in his basement laboratory in the summer of 1947, Farber had a single inspired idea: he chose, among all cancers, to focus his attention on one of its oddest and most hopeless variants—childhood leukemia. To understand cancer as a whole, he reasoned, you needed to start at the bottom of its complexity, in its basement. And despite its many idiosyncrasies, leukemia possessed a singularly attractive feature: it could be measured.

  Science begins with counting. To understand a phenomenon, a scientist must first describe it; to describe it objectively, he must first measure it. If cancer medicine was to be transformed into a rigorous science, then cancer would need to be counted somehow—measured in some reliable, reproducible way.

  In this, leukemia was different from nearly every other type of cancer. In a world before CT scans and MRIs, quantifying the change in size of an internal solid tumor in the lung or the breast was virtually impossible without surgery: you could not measure what you could not see. But leukemia, floating freely in the blood, could be measured as easily as blood cells—by drawing a sample of blood or bone marrow and looking at it under a microscope.

  If leukemia could be counted, Farber reasoned, then any intervention—a chemical sent circulating through the blood, say—could be evaluated for its potency in living patients. He could watch cells grow or die in the blood and use that to measure the success or failure of a drug. He could perform an “experiment” on cancer.

  The idea mesmerized Farber. In the 1940s and ’50s, young biologists were galvanized by the idea of using simple models to understand complex phenomena. Complexity was best understood by building from the ground up. Single-celled organisms such as bacteria would reveal the workings of massive, multicellular animals such as humans. What is true for E. coli [a microscopic bacterium], the French biochemist Jacques Monod would grandly declare in 1954, must also be true for elephants.

  For Farber, leukemia epitomized this biological paradigm. From this simple, atypical beast he would extrapolate into the vastly more complex world of other cancers; the bacterium would teach him to think about the elephant. He was, by nature, a quick and often impulsive thinker. And here, too, he made a quick, instinctual leap. The package from New York was waiting in his laboratory that December morning. As he tore it open, pulling out the glass vials of chemicals, he scarcely realized that he was throwing open an entirely new way of thinking about cancer.

  *Although the link between microorganisms and infection was yet to be established, the connection between pus—purulence—and sepsis, fever, and death, often arising from an abscess or wound, was well known to Bennett.

  *The identification of HIV as the pathogen, and the rapid spread of the virus across the globe, soon laid to rest the initially observed—and culturally loaded—“predilection” for gay men.

  *Virchow did not coin the word, although he offered a comprehensive description of neoplasia.

  “A monster more insatiable than the guillotine”

  The medical importance of leukemia has always been disproportionate to its actual incidence. . . . Indeed, the problems encountered in the systemic treatment of leukemia were indicative of the general directions in which cancer research as a whole was headed.

  —Jonathan Tucker, Ellie: A Child’s Fight Against Leukemia

  There were few successes in the treatment of disseminated cancer. . . . It was usually a matter of watching the tumor get bigger, and the patient, progressively smaller.

  —John Laszlo, The Cure of Childhood Leukemia: Into the Age of Miracles

  Sidney Farber’s package of chemicals happened to arrive at a particularly pivotal moment in the history of medicine. In the late 1940s, a cornucopia of pharmaceutical discoveries was tumbling open in labs and clinics around the nation. The most iconic of these new drugs were the antibiotics. Penicillin, that precious chemical that had to be milked to its last droplet during World War II (in 1939, the drug was reextracted from the urine of patients who had been treated with it to conserve every last molecule), was by the early fifties being produced in thousand-gallon vats. In 1942, when Merck had shipped out its first batch of penicillin—a mere five and a half grams of the drug—that amount had represented half of the entire stock of the antibiotic in America. A decade later, penicillin was being mass-produced so effectively that its price had sunk to four cents for a dose, one-eighth the cost of a half gallon of milk.

  New antibiotics followed in the footsteps of penicillin: chloramphenicol in 1947, tetracycline in 1948. In the winter of 1949, when yet another miraculous antibiotic, neomycin, was purified out of a clod of mold from a chicken farmer’s barnyard, Time magazine splashed the phrase “The remedies are in our own backyard” prominently across its cover. In a brick building on the far corner of Children’s Hospital, in Farber’s own backyard, a microbiologist named John Enders was culturing poliovirus in rolling plastic flasks, the first step that culminated in the development of the Sabin and Salk polio vaccines. New drugs appeared at an astonishing rate: by 1950, more than half the medicines in common medical use had been unknown merely a decade earlier.

  Perhaps even more significant than these miracle drugs, shifts in public health and hygiene also drastically altered the national physiognomy of illness. Typhoid fever, a contagion whose deadly swirl could decimate entire districts in weeks, melted away as the putrid water supplies of several cities were cleansed by massive municipal efforts. Even tuberculosis, the infamous “white plague” of the nineteenth century, was vanishing, its incidence plummeting by more than half between 1910 and 1940, largely due to better sanitation and public hygiene efforts. The life expectancy of Americans rose from forty-seven to sixty-eight in half a century, a greater leap in longevity than had been achieved over several previous centuries.

  The sweeping victories of postwar medicine illustrated the potent and transformative capacity of science and technology in American life. Hospitals proliferated—between 1945 and 1960, nearly one thousand new hospitals were launched nationwide; between 1935 and 1952, the number of patients admitted more than doubled from 7 million to 17 million per year. And with the rise in medical care came the concomitant expectation of medical cure. As one student observed, “When a doctor has to tell a patient that there is no specific remedy for his condition, [the patient] is apt to feel affronted, or to wonder whether the doctor is keeping abreast of the times.”

  In new and sanitized suburban towns, a young generation thus dreamed of cures—of a death-free, disease-free existence. Lulled by the idea of the durability of life, they threw themselves into consuming durables: boat-size Studebakers, rayon leisure suits, televisions, radios, vacation homes, golf clubs, barbecue grills, washing machines. In Levittown, a sprawling suburban settlement built in a potato field on Long Island—a symbolic utopia—“illness” now ranked third in a list of “worries,” falling behind “finances” and “child-rearing.” In fact, rearing children was becoming a national preoccupation at an unprecedented level. Fertility rose steadily—by 1957, a baby was being born every seven seconds in America. The “affluent society,” as the economist John Kenneth Galbraith described it, also imagined itself as eternally young, with an accompanying guarantee of eternal health—the invincible society.

  But of all diseases, cancer had refused to fall into step in this march of progress. If a tumor was strictly local (i.e., confined to a single organ or site so that it could be removed by a surgeon), the cancer stood a chance of being cured. Extirpations, as these procedures came to be called, were a legacy of the dramatic advances of nineteenth-century surgery. A solitary malignant lump in the breast, say, could be removed via a radical mastectomy pioneered by the great surgeon William Halsted at Johns Hopkins in the 1890s. With the discovery of X-rays in the late nineteenth century, radiation could also be used to kill tumor cells at local sites.

  But scientifically, cancer still remained a black box, a mysterious entity that was best cut away en bloc rather than treated by some deeper medical insight. To cure cancer (if it could be cured at all), doctors had only two strategies: excising the tumor surgically or incinerating it with radiation—a choice between the hot ray and the cold knife.

  In May 1937, almost exactly a decade before Farber began his experiments with chemicals, Fortune magazine published what it called a “panoramic survey” of cancer medicine. The report was far from comforting: “The startling fact is that no new principle of treatment, whether for cure or prevention, has been introduced. . . . The methods of treatment have become more efficient and more humane. Crude surgery without anesthesia or asepsis has been replaced by modern painless surgery with its exquisite technical refinement. Biting caustics that ate into the flesh of past generations of cancer patients have been obsolesced by radiation with X-ray and radium. . . . But the fact remains that the cancer ‘cure’ still includes only two principles—the removal and destruction of diseased tissue [the former by surgery; the latter by X-rays]. No other means have been proved.”

  The Fortune article was titled “Cancer: The Great Darkness,” and the “darkness,” the authors suggested, was as much political as medical. Cancer medicine was stuck in a rut not only because of the depth of medical mysteries that surrounded it, but because of the systematic neglect of cancer research: “There are not over two dozen funds in the U.S. devoted to fundamental cancer research. They range in capital from about $500 up to about $2,000,000, but their aggregate capitalization is certainly not much more than $5,000,000. . . . The public willingly spends a third of that sum in an afternoon to watch a major football game.”

  This stagnation of research funds stood in stark contrast to the swift rise to prominence of the disease itself. Cancer had certainly been present and noticeable in nineteenth-century America, but it had largely lurked in the shadow of vastly more common illnesses. In 1899, when Roswell Park, a well-known Buffalo surgeon, had argued that cancer would someday overtake smallpox, typhoid fever, and tuberculosis to become the leading cause of death in the nation, his remarks had been perceived as a rather “startling prophecy,” the hyperbolic speculations of a man who, after all, spent his days and nights operating on cancer. But by the end of the decade, Park’s remarks were becoming less and less startling, and more and more prophetic by the day. Typhoid, aside from a few scattered outbreaks, was becoming increasingly rare. Smallpox was on the decline; by 1949, it would disappear from America altogether. Meanwhile cancer was already outgrowing other diseases, ratcheting its way up the ladder of killers. Between 1900 and 1916, cancer-related mortality grew by 29.8 percent, edging out tuberculosis as a cause of death. By 1926, cancer had become the nation’s second most common killer, just behind heart disease.

  “Cancer: The Great Darkness” wasn’t alone in building a case for a coordinated national response to cancer. In May that year, Life carried its own dispatch on cancer research, which conveyed the same sense of urgency. The New York Times published two reports on rising cancer rates, in April and June. When cancer appeared in the pages of Time in July 1937, interest in what was called the “cancer problem” was spreading like a fierce contagion in the media.

  Proposals to mount a systematic national response against cancer had risen and ebbed rhythmically in America since the early 1900s. In 1907, a group of cancer surgeons had congregated at the New Willard Hotel in Washington to create an organization to lobby Congress for more funds for cancer research. By 1910, this organization, the American Association for Cancer Research, had convinced President Taft to propose to Congress a national laboratory dedicated to cancer research. But despite initial interest in the plan, the efforts had stalled in Washington after a few fitful attempts, largely because of a lack of political support.

  In the late 1920s, a decade after Taft’s proposal had been tabled, cancer research found a new and unexpected champion—Matthew Neely, a dogged and ebullient former lawyer from Fairmont, West Virginia, serving his first term in the Senate. Although Neely had relatively little experience in the politics of science, he had noted the marked increase in cancer mortality in the previous decade—from 70,000 men and women in 1911 to 115,000 in 1927. Neely asked Congress to advertise a reward of $5 million for any “information leading to the arrest of human cancer.”

  It was a lowbrow strategy—the scientific equivalent of hanging a mug shot in a sheriff’s office—and it generated a reflexively lowbrow response. Within a few weeks, Neely’s office in Washington was flooded with thousands of letters from quacks and faith healers purporting every conceivable remedy for cancer: rubs, tonics, ointments, anointed handkerchiefs, salves, and blessed water. Congress, exasperated with the response, finally authorized $50,000 for Neely’s Cancer Control Bill, almost comically cutting its budget back to just 1 percent of the requested amount.
