Cancerland


by David Scadden


  In 1976, when I arrived at Case, much of Cleveland lived with the pressures that Yvonne and her family dealt with every day. Labor issues and foreign competition in steel, cars, and other industries had destroyed so many jobs in the city that it was steadily losing population. In two years, Cleveland would become the first big city to default on its debt since the Great Depression. Then there was the environment. The air and water in Cleveland were notoriously polluted. In 1969, an oil slick on the Cuyahoga River caught fire. A river on fire! First-year medical students were given T-shirts with that image stating, CLEVELAND: YOU’VE GOT TO BE TOUGH. But it was a fabulous place to learn about medicine and people’s lives and how the two collided. My friends and I all volunteered in the funky Cleveland Free Clinic in the evenings, getting a rich supplement to our cultural education. I felt I could navigate things better for my patients when Yvonne delivered her healthy daughter, Annette, and went to the pediatrician visits. I was going to class to learn about the body, but it was in the clinical experiences that I learned how to help people.

  It was extraordinary to be first connected to hospital medicine through a birth. The word awestruck aptly captures what it was like to be a clueless medical student in the delivery suite, serving half as physician, half as patient advocate. Fortunately, not much was expected, because I was wobbly-kneed through the whole experience. In the presence of a miracle, doing more than standing with mouth agape was beyond me. By the end of medical school, I had delivered more than a dozen babies, including twins, and done minor procedures postdelivery. Participating in the joy with parents at a delivery and understanding the remarkable state of pregnancy drew me to the field. But the need and the impact of emerging science were less evident to me in obstetrics, and I eventually turned away.

  I was somehow compelled by basic scientific research, especially in the area of molecular biology. No one with even the slightest interest could have missed the fact that by the 1970s we had entered a revolutionary period. Great advances were being made toward understanding the chemical secrets of life itself, and with each new discovery, we were getting closer to designing more rational treatments that would save lives and ease suffering. These advances created their own big questions for politicians, religious leaders, government regulators, and everyday citizens. The city of Cambridge, Massachusetts, had already banned recombinant DNA research by my first year of medical school. All of society would be engaged in issues prompted by the promise and peril of the biology revolution. I could imagine nothing more compelling.

  But as exciting as the scientific moment seemed, I wasn’t completely confident in my abilities to participate. In undergraduate classes, I loved organic chemistry, which is the course almost everyone considers the make-or-break subject for people who hope to go to medical school. But organic chemistry is like mathematics, a language. You learn certain fundamental rules for how things relate and you can put them together in creative ways. That nature has done so to make the building blocks of life—proteins, RNA, DNA, lipids, and sugars—captures the imagination. Add the spark of being able to modify things by the same rules, and the language of chemistry starts to become song. But I was just a student, inspired but not equipped to really make the music. I assumed that was only for those born with a determination to be and do chemistry in capital letters. It has a complexity and diversity that can seem almost infinite and was, at the time, opaque and mystifying to me.

  The connection to chemistry came back in a wave in the first class in medical school.

  That first course, in a subject given the vague title of Metabolism, was supposed to give us a firm grounding in the ways that cells—the fundamental units of life—create, store, and use energy derived from nutrients to build and maintain themselves. It was remarkable to me how the reactions and principles of organic chemistry turned into the very basis for life. It was also remarkable how complex it all rapidly became. For a few of my fellow students, who had studied a great deal of science as undergraduates (I was an English major), the subject was so easy that they sat and read the morning New York Times while grunts like me struggled to understand the lectures. I didn’t know whether I was simply not as bright as my classmates or whether there was something unusually hard about this course, but I truly struggled with it. Also, I felt uncomfortable just saying, “I’m stuck; help me.” For all I knew, I was the only one feeling so lost.

  Unlike at some other schools, where competition was overt and the general attitude was “survive it if you can,” we were encouraged to think of each other as colleagues who learned together. In that very first course, I discovered classmates like Bruce Walker—one of my closest friends to this day—who were eager to both give and receive help. Away from the lecture halls, we often met in groups to review lectures that were sometimes filled with daunting medical and scientific details. Together, we solidified our understanding of the material but also considered how it all might relate to the responsibilities we would assume when we started caring for people whose well-being and even continued existence would depend on us.

  Even though we were first-year students, far from becoming actual doctors, most of us were late-1960s idealists who wanted to use our talents and opportunities to help others. The people I knew didn’t seem to focus on money or status, and they voiced strong advocacy for doing the right thing for patients. There was an implicit suspicion of the commercial side of medicine. Those I knew took very seriously the commitment to making the world better, to literally alleviate suffering. The anxiety most often expressed was that we might go into the world and make a serious, even fatal mistake with a patient because we had missed something crucial in med school. With this in mind, we got together in the evenings and on weekends and tutored each other. Since classes were six days a week, the big break was when the dean of admissions would invite us over on a Sunday afternoon for croquet, cookies, and beer—a spectacular break even when your ball ended up deep in the poison ivy. For a few of us, the itch to get away was too great, and we took a weekend bicycling in Vermont rather than studying biostatistics. To a person, we learned to regret that gap in our knowledge, a regret made worse by a solid drenching of Green Mountain rain.

  The tradition of student tutoring was well established at Case and kept all but the most ill-focused from falling behind. Once we got into the clinical portion of our schooling, the third and fourth years of medical school, we were split into smaller and smaller groups, but the sense that doctoring meant continuous learning and teaching and learning again was preserved. People started distinguishing themselves by clinical specialties and areas of research. In the late 1970s, medicine was becoming more specialized as new disciplines arrived with their own requirements for training, testing, licensure, and standards for practice. Medical oncology was not a formal specialty until 1972, and many physicians eschewed any kind of subspecialty board testing and certification until it became imperative for participating in the era of managed care in the 1980s. This was not an entirely new idea. Physicians in ancient Egypt, believing each part of the body to be a separate entity, specialized along the lines of anatomy. In the modern era, some specialties like radiology emerged out of new technologies. However, the push for specialization came from those who believed that both clinical care and science would improve as minds were focused more intently on limited areas of interest while the amount of information in each burgeoned. Specialty information required specialty-specific knowledge that was tested by specialty-specific exams. This has led to an ever-heavier burden of credentials that physicians must maintain, to the point of a now-evident subspecialist revolt. But for medical students, it represented mostly a spectrum of information and activities that allowed them to sort themselves into comfortable spots along it. It also became evident that each specialty seemed to have a culture and an accepted set of behaviors. Personality-type matching with specialties was oddly predictable. The brooding self-reflectors never made the OR home, and the ex-athletes and ex-military types rarely found the intangibles of psychiatry appealing.

  I was inspired by all the advances being made in biology and genetic technologies, which were sending waves of excitement through this former English major, if not the medical world more broadly. This work promised to make comprehensible the opaque world of cancer and perhaps the immune system. Cancers of the blood, which reside in the realms of both hematology and oncology, were most compelling for study because you could see the disease evolve at the level of the cell with simple blood samples. What the patient was experiencing was sometimes vague, but the blood was more eloquent, revealing whether fatigue represented poor sleep or regrowing leukemia cells. Other so-called solid tumors were rarely so evident, requiring scans and x-rays and ultimately biopsies. To me, the proximity of patient symptoms to cells in the blood was very compelling. It seemed that the basis for the disease had to be closer to comprehension if you could actually visualize what was going on through blood samples. It was also apparent that cells of the blood were what protected us from invaders of all types: viruses, bacteria, fungi, and maybe cancer. Studying the blood seemed to me like a way to ride the rocket of new discoveries in molecular biology and have them meaningfully turn into therapies I could give to patients.

  But before I could get aboard, I had to understand the landscape of the relevant science, which was changing faster than the seasons.

  * * *

  Most scientists recognize the creation of the first model of a DNA molecule, by James Watson and Francis Crick, as the breakthrough that began the era of molecular biology. (As The New York Times noted on its front page, their so-called double helix image brought us “near the secret of life.”) When their work was published, the British Crick and the American Watson were little known outside their specialty. A bookish man, Crick would say his one hobby was “conversation.” A child prodigy who had appeared on the Quiz Kids radio show—sponsored by Alka-Seltzer—Watson had been briefly famous but then immersed himself in academia. He and Crick did their work at the University of Cambridge in a one-story, metal-roofed building affectionately called the Hut. However, there was nothing modest about their achievement, and as James Watson famously noted, Francis Crick was never in a “modest mood.” They announced their discovery to much fanfare and arguably too little attribution to those with whom they had worked, notably Rosalind Franklin.

  The double helix established how cell division transmitted genetic content in a way that preserved the information encoded in DNA. It made clear the mechanics of inheritance by showing that DNA has two strands, with one being simply a reverse complement of the other. They defined the rules for how making new DNA strands from existing ones could lead to perfect copies. They unveiled what had baffled thinkers since before science was a discipline. Thanks to the work completed inside the Hut, biology had reached an inflection point comparable to the advance made when physicists split the atom in the New Mexico desert in 1945. Watson and Crick energized biological science and made real the possibility of altering DNA and thereby cells and organisms. Biology would no longer be just a description of the things that are, but a dynamic experimental field that could define how we function and what governs how we change in development and disease. The puzzles inside of puzzles that represent life science looked like they might have hard rules governing them—more like chemistry or physics than traditional field biology. That meant unleashing not just discovery but creativity: methods to change biology, perhaps to develop new therapies for biologic disease.
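
  As a minimal sketch of that pairing rule, the few lines of Python below spell out the complement relationship (A with T, C with G) and check that taking the reverse complement twice recovers the original strand, which is the sense in which copying preserves the information. The function name and the short sequence are purely illustrative, not anything from the laboratory work described here.

```python
# Illustrative sketch of Watson-Crick base pairing (example only).
PAIRS = {"A": "T", "T": "A", "C": "G", "G": "C"}

def reverse_complement(strand: str) -> str:
    """Return the complementary strand, read in the opposite direction."""
    return "".join(PAIRS[base] for base in reversed(strand))

original = "ATGCGTA"                       # a made-up example sequence
template = reverse_complement(original)    # the partner strand of the helix
# Complementing the complement recovers the original: the information survives copying.
assert reverse_complement(template) == original
```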

  * * *

  As the world absorbed the news about the double helix, Alick Isaacs was investigating influenza viruses at the National Institute for Medical Research in London, sixty-five miles south of Cambridge. He was motivated in large part by the not-so-distant Spanish flu pandemic of 1918–19, which had killed an estimated fifty million people. In 1957, as a less deadly Asian flu blazed across the world, he was invited to tea by a Swiss virologist named Jean Lindenmann, who had come to the institute for a one-year fellowship. Lindenmann and Isaacs were two very different men. Isaacs was a quintessential Scot and prone to serious bouts of depression. Lindenmann had been raised in Zurich by cosmopolitan parents and was known for an ironic sense of humor. Both men were medical doctors and research scientists, and they shared a fascination with the way that people infected with one strain of a virus seemed to develop immunity against related ones. They were determined to discover how this occurred.

  Good research, of course, depends on carefully designed experiments. Isaacs and Lindenmann devised a classic. First they grew cells from the membranes of chicken eggs in a nutrient solution. Then they added one kind of virus to the cells and, as the virus flourished, added another type. The cells resisted this second virus. Isaacs and Lindenmann then removed all the cells and the viruses, leaving only the culture medium in which they had been grown. When they added fresh membrane cells to the nutrient mixture and hit them with a third type of virus, something remarkable happened—the new cells resisted that infection. Something produced by that first batch of cells, left behind when they were removed, had interfered with the infectious powers of the viruses. They called this substance, which was a protein, interferon.

  Isaacs and Lindenmann were not alone in their scientific interest. At roughly the same time, others working in Tokyo and Boston had also noticed that some process was stopping viral growth in certain circumstances—and they, too, identified the key protein. Taken together, this work showed that interferon functions as both a virus blocker and a signal sender, communicating to nearby cells that an invader is present. The signal stimulates the listening cells to produce their own interferon, which then stops the virus as it arrives. All this communication, going on at the cellular level, was a stunning revelation for those seeking ways to intervene on the body’s behalf against viral threats that cause infectious disease in the short term and cancer later in life. (Viruses are now believed to be a factor in as many as 20 percent of malignancies. Among the ones we now know contribute to the risk of cancer are the Epstein-Barr virus, human papillomaviruses, the hepatitis B and hepatitis C viruses, human herpesvirus 8, HIV, and the related human T-lymphotropic virus HTLV-1.)

  Unfortunately, in the years after it was discovered, the only known method of producing human interferon involved processing vast amounts of blood. A single one-hundred-milligram dose of natural interferon required sixty-five thousand pints of blood. The only source was the lab of Finnish researcher Kari Cantell, who had arranged to obtain every ounce of blood donated by the Finnish people so that he could harvest the protein. The expense and technical challenges in producing interferon meant that it was used in very few medical trials. Nevertheless, small studies suggested that it could slow the development of some forms of cancer.

  In the 1960s and 1970s, pharmaceutical companies, foundations, and government agencies poured more than $100 million—roughly $1 billion in 2017 dollars—into interferon research. This work established that cells produce many different types of interferon. Among the most important is the interferon made by T lymphocytes, which are essential to the body’s infection-fighting abilities. These T cells are produced in the human thymus, which was long considered a vestigial organ of no real importance.

  The history of the thymus in science underscores the challenge of understanding the human body. As recently as 1960, many physicians considered this little mass of tissue to be inconsequential. Some thought it was implicated in sudden infant death syndrome (SIDS), and even though the connection was never proven, they “treated” infants with radiation, damaging the thymus, in the belief that they were preventing SIDS. Then in 1961, an Australian immunologist named Jacques Miller found he could all but disable the immune system of mice by removing the thymus. His experiment and follow-on work showed that, through the production of T cells, the gland contributed greatly to immunity. (Much more about these cells in a later chapter.) This understanding opened the possibility that the body could be rallied, perhaps with T cell–stimulating treatment, to better fight disease on its own.

  The science that identified interferon and then the creation and action of T cells revived interest in the idea that infectious agents, such as viruses, were implicated in the cancer process. This possibility had been raised in 1910, when a cancer in chickens was linked to a virus by a scientist named Francis Peyton Rous, who worked at the Rockefeller Institute in New York. Any “cause” for cancer offered a chance to investigate the mechanism of disease and explore the possibility of treatments based on the body’s ways of protecting itself. At the start of the twentieth century, Peyton Rous prompted a big search for human cancers caused by infectious agents, but years and then decades passed with no new discoveries. The chicken-virus-cancer link came to be regarded as an anomaly.

  Anomalies die quiet deaths in science, except when they are confirmed. In the case of the cancer-causing infection, the confirmation finally came from an unlikely source—an Irish missionary physician named Denis Parsons Burkitt, who spent most of his life working in Uganda. (When he arrived, he became only the second surgeon in the entire country.) An unassuming man who had lost an eye in a childhood accident, Burkitt had almost failed at college and later said he had prayed his way through his chemistry courses. He said he considered his work “God’s call” and insisted that he received much in return for his labor. “I gave a spoonful and got back a shovelful,” was how he put it. As a scientist, Burkitt was fascinated by the types of disease maps epidemiologists use to show how illnesses arise in people across a community or region.

 
