The Body Hunters


by Sonia Shah


  The 1962 laws greatly enlarged the scale and number of experiments on humans, entrenching the randomized controlled trial as the basis for clinical experimentation. Safety studies alone could be adequately accomplished in a short time with just a handful of healthy volunteers. Now drug companies would also have to convince sick patients to try experimental drugs, enroll them in massive trials, and give many of them placebos for comparative data. If the drug was aimed at some slowly progressing condition—heart disease, for example—data regarding its effectiveness wouldn’t emerge for years. If the drug had but a small salubrious effect, documenting that it worked would require the participation of thousands, even tens of thousands, of patients. In 1938, companies wanting to prove to the FDA that their new drugs were safe could do so with slim applications that numbered just thirty pages. By 1968, companies that wanted to prove that their new drugs were both safe and effective would have to submit over seventy-two thousand pages of data to do it.26

  While the new rules would do much to restore public confidence in the drug industry, ironically, the regulations did little to address the conditions that had led to the thalidomide disaster. Forty years later drugs that cause birth defects continue to slip past FDA regulators onto the market. Clinical trials rarely reveal such effects, since most trials exclude pregnant women, particularly when drugmakers suspect there might be a risk of birth defects. “The unfortunate reality is that we learn about virtually all teratogenic [birth defect–causing] effects only after a drug has already received marketing approval,” Boston University epidemiologist Allen A. Mitchell wrote in the New England Journal of Medicine in 2003, “and of course only after it has been used by pregnant women.”27 A postmarketing surveillance system that systematically caught early reports of nerve damage, Mitchell wrote, is probably all that could have mitigated the disaster.28

  Interest groups predictably complained that the new regulations were too strict. The American Medical Association argued that doctors, not impersonal trial results, should decide which drugs worked for their patients. Drug companies complained that the strict regulation would thwart their research efforts, as scientists would flee from industry labs. Undeterred, the National Research Council, tasked with evaluating the mountain of already approved drugs, ended up pulling no fewer than three hundred drugs. It wasn’t just for a lack of evidence of efficacy, either: in some cases, companies had tested their drugs, found them to be ineffective, and marketed them anyway. Upjohn, for instance, marketed a combination of the antibiotic tetracycline and novobiocin, even though the company’s own trials had shown that novobiocin counteracted the effectiveness of tetracycline.29

  American faith in the promise of medical research proceeded with renewed vigor after the 1962 amendments, with the emergence of genetic engineering techniques—the cutting and splicing of strands of DNA—in the early 1970s.30

  The biotech revolution soon ratcheted up the pace of drug development, enlisting the best minds of academia to do it. In 1978, Herbert Boyer, at the University of California, San Francisco, isolated the genes that instructed human cells to make insulin, synthesized them, and spliced them into bacteria, which started to churn out human insulin. Boyer didn’t just write up a few papers and rest on his laurels. He and a colleague patented their discoveries and netted over $27 million in royalties.31 With money from a savvy venture capitalist, Boyer founded a company to commercialize the technology. The company, Genentech, released “recombinant” insulin just four years later, in 1982, laying a foundation that would eventually make it one of the most successful biotech drug companies in the world.32

  Before Boyer, academic researchers considered the commercial development of their research peripheral to their own careers. Back in the 1930s associations of the most esteemed scientists in pharmacology would refuse membership to anyone who even worked for a drug company.33 Since Boyer, academic scientists have been patenting their discoveries, keeping them secret from colleagues, and starting up biotech companies to produce and market their drugs. In 1980, Congress pushed along the commercialization of academic research with the Bayh-Dole Act, aimed at promoting “collaboration between commercial concerns and nonprofit organizations.” The new law allowed—and even obligated—universities to commercialize findings they made under the largesse of government grants by patenting them. Somehow, the thinking went, all of this activity would lead to major medical breakthroughs, such as a cure for cancer.34

  Indeed, the flow of new drugs from the industry skyrocketed. Drugmakers deluged the FDA with more than twelve thousand applications to sell new drugs in 1989, compared to just forty-two hundred in 1970.35 Between 1975 and 1985 more than eighty of the industry’s new FDA-approved products and processes flowed from publicly funded academic research.36 Prescription drug sales during those years tripled. The scale and pace of human experimentation to support the new drugs quickened in its wake.37

  FDA rules facilitated the tsunami of new drugs flooding the market. In 1984, Congress passed legislation granting drug companies an additional five years of patent protection, balancing it with streamlined FDA review of generic drugs, which were meant to become quickly available after the lengthy brand-name patents expired.38 But there were loopholes, which brand-name drugmakers exploited handily: the maker of a patented drug could sue any generic company seeking to produce the drug for patent infringement, automatically triggering a thirty-month stay on FDA approval of the generic.39

  With guaranteed market monopolies secured, drug companies could invest more into selling big drugs for megamarkets. The only trouble was, while millions were dying from malaria, AIDS, and tuberculosis, those who spent the most on prescription drugs—aging Americans—were becoming healthier and healthier. Between 1965 and 1996 death rates from blocked arteries in the United States had dropped by 74 percent. Deaths from heart disease had dropped by 62 percent. Deaths from hypertension had dropped by 21 percent.40

  How could drugmakers continue growing? If they did what the public had come to love them for—vanquishing sickness with curative wonder drugs—they’d have to make do with markets with minimal buying power, from the tuberculosis-ridden inner cities of the United States to malarial sub-Saharan Africa and tropical Asia. A more lucrative approach, albeit a smaller contribution to public health, would be to encourage wealthier, healthier customers to pop pills despite their relative vigor. After all, no FDA regulation requires drugmakers to invent high-priority drugs. As long as drug companies could prove their drugs safe enough and better than nothing in placebo-controlled trials, they could sell whatever kind of medicine they wanted, whether patients needed the drugs or not.41 As the old pharma saying went, “While it’s good to have a pill that cures the disease, it’s better to have a pill you have to take every day.”42

  The industry slowly reoriented itself. The complaints and vanities of aging baby boomers and those over the age of sixty-five, who spent nearly three times more on pills, doctors, and hospitals than their younger cohorts, would call the shots; whatever drugs could be marketed to them would cash in big.43 With the right promotion, the new blockbuster drugs could bring in annual sales topping $1 billion each. Accordingly, the first modern blockbusters were not miracle cures like penicillin; they were the heartburn drugs Tagamet and Zantac.44

  In 1985, a long-running government study on cardiovascular risk—the Framingham Heart Study—reported a correlation between low cholesterol levels and increased longevity.45 It was just a correlation, of course, but the timing was perfect. Could it be that high cholesterol cuts lives short? Many Americans had high cholesterol levels, after all, and would likely maintain them given their penchant for fatty foods and sedentary lifestyles. If they could be convinced to take an expensive prescription drug every day for the rest of their lives—despite not feeling at all ill—Merck had just the drug for them.

  First, the company planned what the Washington Post called “an advertising and public relations blitz” to paint cholesterol as Americans’ top health adversary.46 Then, in 1987, the company released Mevacor, a molecule called lovastatin that blocks an enzyme the body needs to make cholesterol. “It’s going to be earthshaking,” a cardiologist quoted by the Wall Street Journal enthused.47

  It was. In its first year out, Mevacor brought in $175 million; by 1989, annual sales stood at $500 million. By 1991, Mevacor sales had topped $1 billion a year.

  Besides showcasing how unhinged drug development was from promoting public health (better diets and more exercise would have conferred broader health benefits, and would have been cheaper and safer to boot), Mevacor was a stunning testament to the power of marketing. For even while Mevacor sales spiked, experts continued to debate the pros and cons of high cholesterol levels. Some studies showed that people with high cholesterol actually lived longer than those with low cholesterol.48

  Merck’s trailblazing act of market expansion was soon followed by Eli Lilly’s antidepressant Prozac, and a host of “lifestyle” drugs. These were drugs whose main medical innovation was their ability to be prescribed to millions, whether they were ill or not, or which accommodated rather than corrected for unhealthy lifestyles.49 During the 1990s, Americans clamored for the new drugs, and showered the drug industry with their approval. Fortune magazine anointed Merck “the most admired company in America” every year between 1987 and 1993.50 Amid the love fest, industry lobbyists, conservative economists, and patient advocates stepped up their criticisms of drugmakers’ sole nemesis: the FDA. What would happen next would dilute the two most stringent standards for new drugs: that they prove themselves both safe and effective.

  First the rules requiring proofs of efficacy were weakened. In a 1991 paper officials from the FDA’s Center for Drug Evaluation and Research announced that new drugs would no longer have to prove that they alleviated disease and improved patients’ lives. Now FDA regulators would be willing to allow drug companies to prove their drugs “worked” by showing that they possessed some quality more easily measured than the ability to make patients better, using what was called a “surrogate end point.” Instead of having to prove that a new cardiovascular drug reduced mortality from heart disease, for instance, drug companies could show simply that the drug reduced cholesterol levels. Rather than show that a new anticancer drug or AIDS drug extended patients’ lives, they could prove instead that the drug shrank tumors or increased white blood cell levels.51 Forget about time-consuming, patient-intensive trials on how drugs might work to help real patients in their struggle with illness: “The assessment of a new drug should flexibly evaluate safety and efficacy,” the regulators wrote.52

  With millions in pent-up sales hanging in the balance for every day a potential blockbuster drug was held up in trials, the FDA’s new flexibility would prove highly lucrative for drug companies, although of questionable utility to patients. “There has recently been great interest in using surrogate end points . . . to reduce the cost and duration of clinical trials,” wrote biostatisticians Thomas R. Fleming and David L. DeMets in a 1996 Annals of Internal Medicine paper titled “Surrogate End Points in Clinical Trials: Are We Being Misled?” “In theory, for a surrogate end point to be an effective substitute for the clinical outcome, effects of the intervention on the surrogate must reliably predict the overall effect on the clinical outcome. In practice, this requirement frequently fails.”53

  During the 1980s the FDA had approved two drugs to ease irregular heartbeats—Bristol-Myers Squibb’s Enkaid and 3M’s Tambocor—not because irregular heartbeats were considered dangerous in and of themselves, but because they were thought to lead to fatal heart attacks. Like shadows on a wall, irregular heartbeats were surrogate markers for the fatal heart attacks. In 1989, after over two hundred thousand patients had been prescribed the drugs, an NIH-sponsored study found that no such linkage existed: not only did the drugs fail to extend patients’ lives, they appeared to kill three times as many patients as placebos did.54

  Since then, numerous studies have exposed the hollow center of surrogate markers: drugs that lower cholesterol can increase mortality; drugs that reduce blood pressure increase patients’ risk of heart attacks; AIDS drugs that increase CD4 counts have no effect on the course of the disease; and drugs that reduce tumors don’t extend lives. And yet drugs that have proven they can do little more than alter these ghostly proxies continue to be approved by the FDA.55

  Then, in 1992, with the passage of the Prescription Drug User Fee Act, FDA reviewers who painstakingly analyzed data on new drugs to ensure their safety were burdened with punishing deadlines. Under the new law drug companies would pay the FDA directly—up to $672,000 for each new drug application in 2005—in exchange for speedier deliberation times.56 Regardless of the complexity of the drug or its safety profile, the FDA would be bound to meet strict new deadlines, shaving weeks off review times and making the agency feel, as some insiders said, like a sweatshop.57 Over the following years the average FDA review period for new drugs tumbled from thirty months to under seventeen, a neat deal for drug companies that could now count on many hundreds of millions of dollars in sales in exchange for the relatively paltry user fee.58

  The rapid review times allowed dangerous drugs to slip through the FDA’s fingers, critics complained, with increasing numbers of new drugs found to be life threatening only after they had been ingested by millions. In 1997, the FDA was forced to withdraw two drugs from the market after they injured and killed patients; in 1998, three drugs were withdrawn; in 1999, two were withdrawn; in 2000, no fewer than four drugs were withdrawn.59

  The marketplace might have ably punished, with lackluster sales, any drug company that produced marginally useful or unsafe meds. But in 1997 came another regulatory change that circumvented such a correction.

  Until 1997, the largest audiences for advertising messages—television viewers—were virtually unreachable for drugmakers. The FDA required that drug ads list, in addition to the therapeutic properties of the drug, all of its concomitant side effects. In a magazine ad, the side-effects list could be handled in the small type along the margins. In a television commercial, though, the list would have to be read out loud in excruciating detail. Few companies would attempt such a feat.

  When companies tried, as Hoechst Marion Roussel did, they failed badly. In the early 1990s the company’s best-selling prescription allergy drug, the nonsedating antihistamine Seldane, had been criticized for its dangerous side effects when taken in conjunction with other drugs. The FDA had decided not to pull Seldane off the market, despite its dangers, but things were looking bad for Seldane when, in 1993, Schering-Plough released a similar nonsedating antihistamine, Claritin, which quickly started to pull ahead in sales.

  By 1996, Hoechst had a new product to take Seldane’s place and counter the Claritin onslaught: the “new and improved” antihistamine Allegra. Six months later the FDA pulled Seldane off the market. Now it was up to Hoechst to “market Allegra . . . aggressively,” as the New York Times reported, and reclaim its lost market share.60

  But how to do it? Hoechst attempted to advertise Allegra on television, but to sidestep FDA requirements the company’s Allegra ads never mentioned what the drug was for. The commercials featured a woman inexplicably windsurfing across a field of wheat, as the Washington Post reported. The ads were a disaster, mystifying consumers and providing ample fodder for late-night comedy shows.61

  Then, in 1997, eight months after pulling Seldane, the FDA announced that TV drug ads would no longer have to portray the bad along with the good about new drugs. Instead, television ads could simply mention the very worst side effects, dispensing with the others by suggesting that consumers consult a Web site or call a 1-800 number to find out more. By allowing drugmakers to highlight the benefits of new drugs while sidelining drawbacks, the new rules would allow even marginally useful drugs, when promoted vigorously, to flourish.

  “This is very good timing,” enthused a Hoechst spokesperson.62 Fall ragweed season was approaching. Now the company could run a proper ad campaign, reaching television’s mass audiences with its message that Allegra was a better allergy drug than Claritin, better too than the cheap, over-the-counter remedies that many were opting for.

  In 1997, Hoechst spent over $50 million advertising Allegra directly to consumers. It was money well spent; Allegra sales promptly doubled. Schering-Plough responded with over $74 million in consumer advertising for Claritin.63 As the televised war between the antihistamines raged, Americans flocked to their doctors clamoring for Claritin and Allegra scrips. In the first eight months of 1998, “patient visits to doctors increased 2 percent, [but] visits for allergies rose five times as fast,” the New York Times reported.64 By 1999, drugmakers were spending more on hyping prescription antihistamines to consumers than on any other class of drugs.65

  And yet, a 2002 study that compared Zyrtec, Claritin, Allegra, and other nonsedating antihistamines found no differences in their efficacy. “When choosing a drug . . . for treatment of allergic rhinitis,” the authors concluded, “the preference of the patients might be the one most important factor, because all the new histamine H1-antagonists appear to be comparable in their efficacy.”66

  According to drug industry spokespeople and the FDA, the new “direct-to-consumer” advertising craze helped patients get the drugs they needed. “You need to be told by someone that those products are out there or you’ll never know,” FDA medical director Robert Temple said to the Washington Post in 1997.67 For health care providers, though, the new obsession with prescription allergy meds was hard to fathom. “Except for antibiotics,” commented Mark DiGiorgio, a disgusted HMO executive, “we are spending more money on runny noses than anything else.”68

 
