The Body Hunters


by Sonia Shah


  According to CenterWatch, a publisher specializing in the clinical research industry, in order to launch a single drug a company has to convince more than 4,000 patients to undergo 141 medical procedures each in more than 65 separate trials. First come the small Phase 1 studies that test a new drug’s safety; then the slightly larger Phase 2 studies that look for hints of effectiveness; and finally the extensive Phase 3 studies that aim to prove a drug’s effectiveness with statistical certainty. More than 100,000 people have to be enticed to call in for initial screenings for such trials, as only a fraction show up for their appointments, and of these only a fraction are medically eligible.6 With the expense of finding and retaining a single test subject for a clinical trial running to at least $1,500, and with some 90 percent of the drugs entering clinical trials failing to garner FDA approval anyway, minimizing the cost and length of clinical trials has become crucial to corporate salubrity.7

  And yet, in the United States at least, enlisting sufficient numbers of trial volunteers is difficult, to say the least. Back in 1954, Americans offered their children as human guinea pigs by the millions for Jonas Salk’s experimental polio vaccine. When the results of that massive trial were released, radio announcers blasted the news. Church bells clanged. Traffic snarled as drivers jumped out of their cars to shout with joy.8 But not long after, the hastily approved vaccine infected 220 children with polio, and public trust in clinical experimentation started to deflate.9 Revelations of unethical trials followed—exposés of the government-sponsored Tuskegee Syphilis Study in the early 1970s proved a historical nadir—and disillusionment hardened into dislike. Today, although Americans buy on average more than ten prescriptions every year, fewer than one in twenty is willing to take part in the clinical trials that separate the dangerous drugs from the lifesaving ones.10

  Less than 4 percent of cancer patients, who generally have the most to gain from new experimental treatments, volunteer for experimental drug trials, a rate industry insiders deride as “appallingly low.” Many people with cancer “simply didn’t want to move from their homes to the clinical center for the time—sometimes weeks—necessary to participate in a trial,” noted industry journal Scrip. Others, particularly elderly patients, felt that “cancer may be a sign they have lived long enough.”11 Even trials for what would turn out to be breakthrough cancer drugs like Genentech’s breast-cancer drug Herceptin almost withered and died for lack of willing subjects. “Every year tens of millions of women die from breast cancer, and they couldn’t get a hundred subjects for a trial,” clinical research exec Dennis DeRosia recalls bitterly.12

  The unmentioned reality is that cancer patients are hardly irrational in judging new drugs unworthy of their trouble. Despite the huge amount of investment in cancer research, “success has largely eluded us,” admitted industry scientist David Horrobin. “The few outstanding successes in rare cancers cannot hide the overall failure.”13 For many other conditions, useful drugs are already in adequate abundance. Americans might be convinced to try a new brand-name drug after absorbing a vigorous marketing campaign touting it as the second coming, but when it is a lowly, unapproved experimental medicine, why should they bother? Most patients were uninterested in testing out Pfizer and Eyetech’s new eye-disease drug Macugen, for example, because there were already many other drugs available and Macugen had to be injected directly into their eyes. Besides, there were at least two other trials for the same kind of drug, with different delivery systems, going on at the same time.14 So scarce have willing subjects become in the West, Horrobin wrote in a 2003 Lancet paper, that some drug companies have taken to aggressively recruiting more subjects than they need, a preemptive strike against rivals on the hunt for warm bodies to fill their test clinics.15

  Today, industry investigators can count on failing to find sufficient numbers of willing test subjects on time in four out of five of their clinical trials, and the financial strain of hunting for increasingly disinclined test subjects threatens to render the entire industry moribund. While the annual cost of drug development remains mired in controversy, undisputed is the fact that the price tag has zoomed steadily upward since at least 1980, while the output of new FDA-approved drugs has remained essentially stationary. And every day a new drug remains locked in development bleeds companies of up to $1 million in potential sales income.16

  Western medicine has relied on human and animal experimentation since ancient times. It was by dissecting the bodies of criminals and the poor, for example, that Greek physicians discovered the nervous system in 300 B.C.17 But it was only after the rigorous experimental design called the controlled clinical trial emerged in the 1940s and was codified into U.S. law in 1962 that the global hunt for experimental bodies began in earnest. After a brief period of testing experimental drugs on the U.S. prison population, a practice which ended when ethical scrutiny of trials was stepped up in the 1970s, most drug companies partnered with university hospitals and academic doctors to conduct trials. University doctors were the best experts to call on: reputed to be beyond reproach ethically, they had the patients at their fingertips, the know-how to design and conduct scientifically sound trials, and enough perceived independence to lend weight and credence to the findings.

  But over the course of the 1980s and 1990s, the drug market skyrocketed. By 1989, for example, drug companies were angling to launch more than three times as many drugs on the market as they had back in 1970. Impatient drugmakers started to get tired of their ponderous academic partners. “Pharmaceutical firms are frustrated with academic medical centers,” University of California’s Thomas Bodenheimer noted in an influential 2000 New England Journal of Medicine paper. “Slow review of industry proposals . . . delays the starting dates of trials.” Academic hospitals “have a bad reputation,” according to Greg Fromell, who works at a company specializing in running drug trials for the industry. They “overpromise and underdeliver.”18

  By the late 1990s, the flow of cash from drug companies to academic medical centers had slowed to a trickle, the stream diverted to a new breed of faster, more aggressive drug trial entrepreneurs. New outfits like Quintiles Transnational and Covance call themselves “contract research organizations” (CROs). For a fee they take a drug company’s blueprints for a clinical trial and promptly deliver patients, investigators, and results in return.19 “At Quintiles,” the company’s Web site says, “we know it’s all about results. That’s what you want. And that’s what you’ll get, on schedule or maybe even a little ahead of it.” Some CROs even insert trial results into FDA applications and splice them into prestigious journal articles on behalf of their industry clients.20

  At first, CROs performed this trick by capitalizing on a previously unexploited pool of potential subjects: the millions of patients treated at local clinics and private practices by community physicians. Then they slowly started to look beyond U.S. borders. After all, the FDA had long allowed drug companies to submit data from clinical trials conducted outside the United States, in 1987 even going so far as to accept a new drug application with data solely from overseas trials. Back then there had been no big stampede abroad, as academic investigators considered data from developing countries unreliable. Not so the new CROs.21

  Just as automakers and apparel manufacturers had fled the stringent labor and environmental laws of the West to set up shop in the developing world, drug companies and CROs streamed across the border. Although companies aren’t required to alert the FDA before testing their drugs on non-U.S. patients, nor does the FDA track research by location after approving new drugs, it is clear that the tectonic plates have shifted.22 Between 1990 and 1999 the number of foreign investigators seeking FDA approvals increased sixteenfold, the Department of Health and Human Services’ Office of the Inspector General found.23 By 2004, the FDA estimated, drug companies angling for FDA approval of their new products were launching over sixteen hundred new trials overseas every year.24 The most popular destinations are not Western Europe and Japan, but rather the broken, impoverished countries of Eastern Europe and Latin America.25 Russia, India, South Africa, and other Asian and African countries have proven equally fruitful.

  Between 2001 and 2003 the number of trials conducted in the United States—and the number of investigators hired to oversee them—plummeted. While U.S.-based investigators dropped by 11 percent, the number of investigators abroad fattened by 8 percent.26 By 2006, GlaxoSmithKline, Wyeth, and other drug giants predicted, half or more of their trials would be conducted overseas.27

  Fleeing the empty test clinics of the West, drugmakers who have set up shop abroad wallow in an embarrassment of riches. The sick are abundant, and costs are low. In India, “apart from the low-cost of field trials,” enthused a Pfizer press release, “a billion people means there is never a shortage of potential subjects.”28 In South Africa, a leading CRO noted on its Web site, patients suffered “an extremely high prevalence of HIV/AIDS and other major diseases including cardiovascular, diabetes, hypertension, mental illness and cancer.” Their lack of access to medicines made them particularly appreciative of the free drugs offered in trials, no matter how experimental. “The vast majority of people have only the most basic healthcare,” the Quintiles Web site noted, allowing “clinical trials [to] provide study participants with access to more sophisticated medicine.”29

  Many patients in developing countries don’t balk at the discomfort of experimental procedures either. In the United States investigators reject protocols that require that their subjects undergo painful, invasive procedures. Bradley Logan, MD, who runs drug industry trials on contract, remembers being approached to conduct a trial that required that he surgically insert into women’s abdomens telescopic devices ten times bigger than those he and other doctors had been using for years. “I said no. I’m not making this big hole in a woman when it isn’t necessary,” he said, outraged.30 “I’ve seen protocols that require five endoscopic biopsies in a single month,” another industry researcher complained to CenterWatch. “Is it reasonable to assume that enough patients will willingly submit to this regimen to meet enrollment targets?”31

  Not so elsewhere on the planet. “We did a study in Russia and the U.S.,” said Wurzlemann. “But we got a lot more patients in Russia, because patients had to get a venogram,” that is, tolerate the surgical insertion of an intravenous catheter that would administer contrast-heightening chemicals so investigators could better scrutinize their X-rays. In the United States venograms have been largely replaced by noninvasive CT scans and MRIs. “The Russian people were happy to do this, as other alternatives are not available,” Wurzlemann proclaimed. Yuri Raifeld, Wurzlemann’s Russian colleague, who was sitting in the audience, chortled. “Well, I would not say they were happy to do it,” he said, “but they did it!”32 The room tittered.

  And so, in contrast to the agonizingly slow pace of enrollment in trials at home, recruitment abroad is rapid. In South Africa Quintiles herded 3,000 patients for an experimental vaccine study in just 9 days. They inducted 1,388 children for another trial in just 12 days.33 And unlike American patients, who hemmed and hawed and often simply dropped out of studies, in India, boasted Vijai Kumar, head of a New Delhi–based industry trial center, “we have retained 99.5 percent of the subjects enrolled.”34

  In 2003, Pfizer announced plans to set up a global clinical trial hub in India.35 GlaxoSmithKline and AstraZeneca followed suit, dispatching teams to set up new clinics and offices on the poverty-stricken subcontinent.36 Glaxo aimed to relocate up to 30 percent of its gargantuan clinical trial business to “low-cost” countries such as India and Poland, its CEO said in 2004, saving the company over $200 million a year.37

  On their heels followed an army of CROs, one-third of which set up shop in foreign countries between 2000 and 2002.38 Headquartered in North Carolina, Quintiles littered new clinics and offices across the developing world in Chile, Mexico, Brazil, Bulgaria, Estonia, Romania, Croatia, Latvia, South Africa, India, Malaysia, the Philippines, and Thailand. Covance boasted that it could run trials in over a dozen countries at twenty-five thousand separate medical sites worldwide. New outfits like Neeman Medical International advertised “access to large, previously untapped patient populations” in Latin America and Asia. “Ski where the snow is,” a Neeman advertisement advised. “Conduct clinical trials where the patients are.”39

  Trade shows and conferences dedicate themselves to supporting the new trend, while the industry press regularly runs encouraging advice and how-to tips, from “Success with Trials in Poland” and “Organizing Large Randomized Trials in China: Opportunities and Challenges” to “Clinical Trials in Latin America: Meeting the Challenges Can Reduce Time-to-Market,” “Discover Russia for Conducting Clinical Research,” and “A billion-dollar clinical research opportunity lies in India.”40

  Developing countries, South African bioethicist Carel Ijsselmuiden says, have become “a great, global lab.”41

  Is there anything intrinsically wrong with drugmakers or other Western medical researchers taking advantage of the disparity between a drug-wealthy healthful West and a med-famished global poor if patients consent, none are harmed, and some may even be helped a little bit? Clinical researcher Malcolm Potts, PhD, thinks not. In a February 2000 paper Potts advocated stripping away health protections for subjects in developing countries in order to speed trial results to investigators. Part of his rationale hinged on a typical—and selective—fatalism. “The real world is exceedingly painful,” he noted.42 That is, the ill health of the developing world, which is now proving valuable for Western science to mine, is something mournful perhaps, but as static and irreversible as the setting sun. And yet, it is almost fully a product of just a few centuries of political and economic machinations.

  Just a few centuries ago Westerners were a fairly sickly bunch. Most could expect to live just twenty-five years. As medical historian Roy Porter explains in his classic 1997 tome, The Greatest Benefit to Mankind: A Medical History of Humanity, by unleashing the diseases of the old world upon the new, and introducing African pathogens from the bodies of their imported slaves, Western colonists vanquished as much as 90 percent of some of the native populations of the Americas. Slashing forests, capturing workers, and unleashing wars and streams of refugees in Asia and Africa, Western colonists spread and deepened local diseases such as kala-azar, bilharzia, cholera, and sleeping sickness. New medicinal plants and materials from the colonies were added to the Western pharmacopoeia. From Brazil came the emetic ipecacuanha; from Peruvian tree bark, the first-ever specifically effective medicine, malaria-curing quinine.43

  Thus enriched, Western powers intensified their consumption of the planet’s iron, wood, and coal, fashioning them into tools that would remake their rough-and-tumble medical practices, and revolutionize their health status. By the mid-1860s early microscopes allowed French scientist Louis Pasteur to connect the ceaseless activity of microorganisms to the onset of disease, an insight that social crusaders put into action with widespread campaigns to separate the microbial pests from the humans they preyed upon. Before the 1900s most Europeans, living in squalor, drinking contaminated water, and being tended by bloody-handed doctors and surgeons, could hope to live no more than thirty years. One in five infants didn’t survive birth. By the 1920s simple methods of avoiding pathogenic microbes—washing with soap and water, separating wastes from drinking water, throwing out spoiled foods—had extended most Western lives until well into their fifth decade.44

  Such efforts languished in the colonies. In most colonies “the European administration attempted to separate from the prevailing environment,” public health expert Oscar Gish writes, “ensuring sanitary conditions in their own living areas but very often creating a sanitary cordon between itself and the surrounding native quarters.”45 In India, for example, British administrators blamed the epidemics of cholera and malaria among the locals on Indians’ “filthy habits,” disregarding the fact that many had been forcibly resettled on mosquito-infested swamps.46 The British had cleared marginal lands of forests and cut tens of thousands of miles of canals, which were devoid of drainage ditches, for the irrigated farmland and railroads that transformed India into a productive colony for the crown.47

  World War II closed the age of empire. But Western health interventions—now sent under the rubric of aid—didn’t always reverse the earlier trend. Disastrous malaria eradication campaigns such as the 1950s World Health Organization (WHO) effort, eloquently documented in journalist Laurie Garrett’s book The Coming Plague, intensified rather than alleviated the burden of malaria in Asia. In 1961, at the height of the campaign, India suffered fewer than 100,000 cases of malaria; by 1977, after the doomed effort was abruptly canceled, the country’s caseload topped 6 million.48 The WHO’s 1966 smallpox eradication program, while successfully ridding the world of that disease, involved forcible invasions of homes and the injection of live virus into fifty or more arms at a time.49

  The current era of globalization has proven not much better. The World Bank and International Monetary Fund (IMF), with their billions of dollars in strings-attached loans, lay heavy hands on the health care of the global poor. According to the bank, “improved water and sanitation,” the very public works that lifted the West out of its infectious soup, were “not particularly cost effective as a health measure.”50

  And so in Zaire, for example, World Bank and IMF “economic recovery” measures required the government to slash its spending on social services. In a single year the government fired more than eighty thousand teachers and clinicians. In Zambia, within just two years of such programs, the nutritional and health status of children had plummeted, canaries in a coal mine.51 Infant mortality rose by 25 percent while life expectancy dropped from fifty-four to forty years.52 In Argentina, polio and DPT immunizations fell by nearly 25 percent between 1992 and 1998, and throughout Latin America previously controlled diseases such as cholera and dengue fever re-emerged at epidemic levels.53 The flow of patients into clinics and hospitals in Nigeria, Kenya, and Ghana slowed to a trickle, dropping by half within days of the imposition of new fees. “Before, everyone could get health care,” one patient in a developing country noted. “Now everyone just prays to God that they don’t get sick because everywhere they ask for money.”54

 
