The Body Hunters

by Sonia Shah


  Physicians in Fascist Germany and imperial Japan likewise performed nontherapeutic experiments, but in their cases, subjects were condemned to death regardless of the results. Japanese scientists injected their Chinese prisoners with plague, cholera, and other pathogens, slaughtering them when they finally became too weak to provide any interesting data. They also conducted “field tests” on unsuspecting Chinese villages, poisoning more than one thousand wells with typhoid bacilli, releasing plague-infested rats, and spraying typhus and cholera on wheat fields.20

  During World War II Nazi scientists conducted a range of grisly experiments on concentration camp inmates. Eager to understand how the human body functioned at high altitudes, they encased subjects in decompression chambers, pumped all the air out, and then dissected the subjects, still alive, to study their lungs. To see firsthand the effects of dehydration, they starved subjects and forced them to drink only saltwater. They injected children with gasoline. They removed their subjects’ bones and limbs; many died from infections after these useless surgeries, while others were simply shot. Inmates were injected with phenol to see how long it would take them to die.21

  Prisoners were used, a Nazi officer later explained, because “volunteers could not very well be expected, as the experiments could be fatal.”22

  The Nazi regime’s medical research program fell under scrutiny soon after the war ended. Some twenty of the hundreds or more Nazi doctors who may have been involved in Germany’s wartime experimentation were selected to stand trial before the International Military Tribunal, set up in Nuremberg by the United States and the other victorious Allies.23

  Though the U.S. government’s own deceptive and exploitative wartime experiments would not see the light of day for nearly fifty years,24 it still wasn’t easy for the Americans to prove that their medical research was substantively different from that of the Nazis. Each submerged the interests of human subjects in order to procure scientific data.

  The defendants argued that their wartime experiments were essentially run-of-the-mill medical research, “the logical expression of the values of German medical science,” as University of California historian Anita Guerrini notes in her 2003 book, Experimenting with Humans and Animals. The subjects were volunteers, they said, who were scheduled to be killed anyway. And their suffering had to be balanced against the benefits the research would bestow upon others. That is, “it was legitimate that a few should have been made to suffer for the good of the many,” Guerrini wrote.25 Wasn’t this the guiding philosophy of all Western medical research? Hadn’t American doctors purposely given prisoners a fatal disease in their own experiments? the Nazis’ defense lawyers asked the court, reciting from the 1945 Life magazine article on the government’s prisoner-malaria experiments.26

  To uphold the reputation of American medical research, the prosecution called upon its star medical ethics expert, the University of Illinois’s Andrew Ivy, MD. The fact was, though, that nobody in the American medical research establishment had questioned the ethics of the prisoner-malaria experiments when the Life spread appeared in 1945. Nobody had said anything in 1946, when PHS doctors had reported that their untreated patients at Tuskegee were dying at nearly twice the rate of their healthy controls. The truth was that while the Hippocratic oath guided medical practice, no American medical researcher was bound by any written ethical principles.27

  Nazi medical experimentation may have fallen into a lower category of depravity than what was happening in the United States, conducted as it was in the context of wholesale butchery, but the fact was that little could be called upon to prove this was so, at least not without knocking the medical research establishment off its pedestal. Ivy was forced to act quickly. As the trial progressed he convened a panel to investigate the prisoner-malaria experiments and wrote up some ethical principles to govern human experimentation, presenting his draft to the American Medical Association. Ivy represented his hastily improvised solutions, yet to be considered by the AMA, as “the basic principles approved by the American Medical Association for the use of human beings as subjects in medical experiments.” He also presented his newly formed panel on the prisoner-malaria experiments as an ongoing one, though it had yet to meet even once. If the public’s high regard for medical research were any guide, it should have been easy to prove Nazi medical research worse than America’s, but the country’s leading medical ethics expert had to perjure himself in order to do it.28

  In the end, four Nazi doctors were hanged after their trial at Nuremberg and eight were sentenced to prison. The rest, along with others who were not tried, returned home to their university jobs and medical practices.29

  The judges, in their decision, issued a new set of ethical guidelines to govern medical experiments. These would become known weightily as the Nuremberg Code, but in fact were mostly lifted directly from the few principles jotted down by Andrew Ivy.30 The most pertinent of the ten principles was the first one: that human subjects in experiments should understand what they are getting into and agree to participate. Experimental subjects should not be powerless prisoners of war and the like either, but “so situated as to be able to exercise free power of choice.” Experiments should only be conducted when absolutely necessary “so as to yield fruitful results for the good of society,” and risks to subjects should be minimized by all means possible. Any dangers to the subjects must be outweighed by the “humanitarian importance of the problem to be solved by the experiment,” and certainly should include none that researchers knew in advance might result in death or disability.31

  The medical profession lauded the code publicly, but privately tended to dismiss it. Yale psychiatrist Jay Katz remembered his professors’ reactions to the Nuremberg Code: “It was a good code for barbarians but an unnecessary code for ordinary physicians.”32 And in any case, the new code was voluntary and vague. When the medical establishment used the code to weigh a given experiment’s potential social benefits against definite risks to subjects living today, it was usually able to err on the side of the former. In most countries, for instance, the Nuremberg Code was interpreted to exclude prisoners from any kind of medical research; in the United States, by contrast, Ivy’s committee found the government’s prisoner-malaria experiments ethically “ideal,” a view it announced in the February 14, 1948, issue of JAMA.33

  During the 1950s and 1960s medical researchers continued to conduct experiments on powerless subjects that fell well short of Nuremberg’s ideal of minimal risk and informed and voluntary consent. For example, in 1952 Jonas Salk conducted early trials of his experimental polio vaccine on mentally retarded children at the Polk State School in Pennsylvania; in many cases, only the state officials who were the legal guardians of the children gave permission. Between 1957 and 1960 another polio researcher, the drug industry–sponsored Hilary Koprowski, likewise tested his polio vaccine on retarded children in New York, as well as on 325,000 children in what was then called the Belgian Congo.34

  In other cases informed consent was skipped over entirely, for the experiments themselves were secret. Between 1944 and 1960 government researchers secretly released radioactive material over mostly Native American and Latino communities in order to determine how the material dispersed and its effects on human health. Likewise, in a series of experiments conducted between 1953 and 1957 medical researchers at Massachusetts General Hospital exposed eleven unsuspecting patients to uranium, hoping to find out how the substance might affect inadvertently exposed government workers.35

  The doctrine of minimizing risks to test subjects was fuzzy enough to allow investigators to openly infect otherwise healthy people in order to see what might happen. For example, in a series of medical experiments between 1963 and 1966 New York University pediatrician Saul Krugman injected healthy children with hepatitis virus, a liver-infecting pathogen spread through fecal matter. Working at Willowbrook State School, a state-run institution for mentally retarded and other disabled children, Krugman’s team obtained hepatitis-laden feces, centrifuged, heated, and treated it with antibiotics, and then mixed it with five parts of chocolate milk to one part of feces. They fed the contaminated concoction to uninfected children and tracked their deterioration. According to Krugman, purposely infecting the children didn’t subject them to any great risk, because Willowbrook was rife with infectious diseases anyway.36 This was a facility, after all, where inmates sometimes smeared the walls with feces.37

  Krugman’s Willowbrook studies went on until the 1970s, resulting in breakthroughs in hepatitis research that made Krugman a medical hero. He was awarded some of the most prestigious prizes in medicine.38

  These transgressions only started to leak into public notice in the mid-1960s. First, in 1966, a Harvard anesthesiology professor named Henry K. Beecher described, in a New England Journal of Medicine paper, dozens of studies that violated Nuremberg standards, including one in which subjects in a typhoid study had been denied effective medication, leading to twenty-three deaths, and another in which ill patients had been purposely injected with live cancer cells. The following year, across the Atlantic, British physician Maurice Pappworth released his book Human Guinea Pigs: Experimentation on Man, likening the research practices of Western scientists to those of the Nazi doctors.39

  Revelations from Beecher and Pappworth proved insufficiently persuasive to many investigators, including those continuing their inquiries in Tuskegee. Outraged letters to the Public Health Service about the study started to trickle in,40 but when the Centers for Disease Control (CDC) reviewed the study in the late 1960s (responsibility for the program had transferred to that agency in 195741), it nevertheless decided that the study should continue until its “endpoints” were reached, that is, until all of the ill subjects died. By 1969, untreated syphilis had felled up to one hundred of the study’s subjects. “You will never have another study like this; take advantage of it,” a CDC reviewer suggested.42

  But with the 1960s-era articulation of the rights of blacks, women, the poor, and other oppressed people, the racist paternalism of the Tuskegee study could not remain submerged much longer. One staffer in the Public Health Service, Peter J. Buxton, felt that “what was being done was very close to murder and was, if you will, an institutionalized form of murder,” and he brought his concerns to his superiors. After they delivered “a rather stern lecture” about the benefits of the study, as Buxton recalled, he brought the information to a reporter friend. In 1972, Jean Heller reported on the Tuskegee Study of Untreated Syphilis in the New York Times and unleashed a storm of outrage.43

  Aided in part by the revelations about Tuskegee, Americans’ unalloyed faith in medicine had stalled by the early 1970s. The heralded new drugs and medical techniques of the postwar era had ended up costing more and producing less by way of better health than most had anticipated. Between 1962 and 1972 Americans’ health care bill had tripled; the cost of prescription drugs had doubled.44 And yet, Americans suffered higher infant mortality rates and lower life expectancies than most Europeans. In January 1970 Fortune magazine asserted that American medicine “is inferior in quality, wastefully dispensed, and inequitably financed. . . . Whether poor or not, Americans are badly served by the obsolete, over-strained medical system that has grown up around them helter-skelter.” The situation was so bad that even the business press had come to sound like rabble-rousing activists. “The time has come for radical change,” Fortune opined.45

  The Tuskegee study quickly achieved notoriety as a prime example of racist medical arrogance. Prominent physicians took up their pens to decry what they called a “crime against humanity” of “awesome dimensions.” Senate hearings and a $1.8 billion lawsuit followed.46 By the time the Tuskegee study was finally terminated on November 16, 1972, the untreated Tuskegee subjects had unwittingly infected twenty-two women, seventeen children, and two grandchildren. The U.S. government agreed to pay $37,500 to each syphilitic patient who was still alive and $15,000 to those who served as controls.47

  The Tuskegee revelations proved to the public the folly of allowing the moral integrity of scientists to suffice as protection for experimental subjects. The government needed to regulate the medical research industry just as it regulated mines and factories.

  The National Research Act was passed in 1974, and an entirely new actor barged into the test clinic: the independent oversight committee. Under the act the integrity of informed consent, the minimization of risks, and the breadth of data supporting the goals of the research would be assessed not by investigators themselves, but by independent committees empowered to ban or alter trials that didn’t pass muster. These ethics committees, called institutional review boards (IRBs) in the United States, would be the final arbiters on the ethics of human experiments.

  The 1974 national commission convened to elaborate on ethical principles guiding human experimentation in the United States went further. According to its Belmont Report, scientists had to practice “respect for persons,” “beneficence,” and something even more ambitious: justice. Experiments should not be conducted on the impoverished, incarcerated, and other vulnerable populations solely for the benefit of the rich and free, or to sate the curiosity of researchers.48

  These ethical obligations echoed those articulated in another voluntary code then making the rounds. In 1975 the United States, along with thirty-four other countries, signed onto the “Declaration of Helsinki,” a bold document crafted by the World Medical Association, a group representing dozens of national physicians’ organizations from around the globe. The declaration urged voluntary informed consent and the use of independent ethics committees, and insisted that investigators prioritize their subjects’ well-being above all other concerns, including “the interests of science and society.” In the interests of justice, the declaration suggested, research subjects should be assured of access to the best health interventions identified in the study, and their societies should enjoy a “reasonable likelihood” of benefiting from the results of the experiment.49

  Over the following years the new ethical principles developed in Belmont and Helsinki slowly trickled into the federal regulations governing clinical research in the United States. These regulations bound all research on American subjects and applied as well to any researchers accepting U.S. government funding no matter where they conducted their experiments.

  Any drug company hankering for FDA approval to market new drugs would have to abide by the new regulations too—unless they conducted their trials outside the United States without alerting the FDA first. In that case, according to FDA rules, the Declaration of Helsinki (or local laws, whichever afforded more protection) would suffice.50

  Between World War II and the mid-1970s regulators had arduously built a wall, brick by brick, to protect the human rights and dignity of human research subjects from the inquisitive investigators itching for access to them. The first major assault on these barriers came not long afterward. Propelled by the spread of HIV in the darkest days of the AIDS pandemic, the medical research establishment rushed the wall and found it a challenging but not insurmountable obstacle.

  5

  HIV and the Second-rate Solution

  From the Nazi camps to Tuskegee, when investigators needed their test subjects to suffer in order to acquire results, they often assumed the posture of the innocent bystander: in the concentration camps, inmates were going to be killed anyway; at Willowbrook, the children would have infected themselves with hepatitis if the scientists hadn’t intervened; at Tuskegee, the sharecroppers wouldn’t have been able to afford treatment, so what did it matter that investigators didn’t provide any?

  Under the new ethics regime established in the 1970s, such rationalizations would no longer be sufficient. According to Helsinki, “considerations related to the well-being of the human subject should take precedence over the interests of science.” That meant that in controlled trials new methods should be tested against the “best current” methods, not some slipshod facsimile of them, even if the best current methods would be no more than a dream to test subjects had they not enrolled in the trial.1

  But the codes were vague, and at times contradictory, and this particular standard wasn’t one that researchers were too keen on. Since doctors didn’t universally dole out the best current methods to their patients, circumscribed as they might be by access to resources and information, why should clinical investigators be held to a higher standard? What if subjects didn’t mind not getting the best current treatments, and were happy with second-rate—or even third-rate—regimens? What if by offering substandard care in their trials scientists could produce astounding results that might change the face of the world?

  It took the disastrous new scourge of AIDS to lay bare the contradictions. When the Centers for Disease Control first reported on a strange immune deficiency in healthy young gay men in 1981, government officials and drugmakers reacted with studied indifference. So hostile was the Reagan administration to the interests of homosexuals that the surgeon general was “flatly forbidden to make any public pronouncements about the new disease,” according to journalist Laurie Garrett.2 Drug companies were reluctant to develop drugs for the deadly infection because they felt the “target market would be too small,” FDA historian Philip Hilts wrote. “It was said that to develop a drug to treat an illness affecting fewer than 200,000 people would yield too small a profit.”3

  For medical researchers, though, AIDS presented a breathtaking vortex of research questions. By 1984, amid intense competition among medical researchers, scientists had isolated the cause of the disease.4 The culprit was a retrovirus, an organism that can only survive and replicate by pirating live cells. HIV is an especially ominous intruder: it infects the immune system itself, hijacking the command centers of pathogen fighters called CD4 cells and instructing them to cease all activities save sending out copies of their new viral commander. Thus crippled, the body is dangerously vulnerable to infections. The retrovirus replicates at a rapid clip, churning out ten billion copies every day, some proportion of which have slight variations—mutations—that would make treating the disease complex.5

 
