Manufacturing depression


by Gary Greenberg


  That may seem startlingly obtuse, but it makes sense when you remember that with a few exceptions—like Paul Ehrlich’s Salvarsan—there was no way to say with certainty why a sick person who took a particular drug got better; there was still precious little understanding of the biochemistry of disease or cure. In the absence of that knowledge, Holmes thought, a drug’s ability to cure illness was a matter for the marketplace to decide. You bought the potion, decided whether you liked it, and told your friends (and your doctor) what you thought—the same as you did with any other consumer product. The government’s role was simply to make sure that the public got what it paid for and didn’t get harmed in the process.

  That turned out to be harder to accomplish than it sounds. In 1931, the Food and Drug Administration was spun off from the Bureau of Chemistry. It didn’t take long for the new agency to discover just how limited its powers were. In 1937, S. E. Massengill Company introduced Elixir Sulfanilamide. Sulfa drugs had burst onto the medical scene in 1936, when Franklin Roosevelt Jr., the president’s son, had been cured of a streptococcus infection (in those days, strep infections were potentially fatal) by a timely injection of sulfanilamide. The patent for the drug—Prontosil, derived from a dye that Paul Ehrlich had fooled around with and which was used to redden leather—had long ago expired, so drug companies sought market share by inventing new preparations. Massengill hit upon the idea of putting the antibiotic into a raspberry-flavored syrup. The problem was that the drug refused to dissolve in the syrup—until Massengill’s chemists added diethylene glycol. They may have been the only chemists on earth who didn’t know that this compound, used as an industrial antifreeze, was a fatal poison.

  After the bodies were counted (105), after the remaining 234 gallons of Elixir Sulfanilamide had been rounded up, and even after a government official determined that chemists at Massengill “just throw drugs together and if they don’t explode they are placed on sale,” the FDA found that its only recourse was to fine the company $26,100 for mislabeling—elixir, the agency said, was a term reserved for alcohol-based preparations. Inadequate as this punishment was to the scope of the tragedy, it could have been even worse. Had Massengill named its drug something else (Sulfa-Freeze, perhaps), the company might have been entirely beyond the reach of the law.

  The Elixir Sulfanilamide debacle provoked Congress to action. In the Food, Drug and Cosmetic Act of 1938, it banned the interstate shipment of harmful substances, which in turn gave the FDA the authority to require drug companies to prove that their new products were safe before they could be brought to market. The label remained the focus of the law. It now had to disclose not only the contents of the bottle but accurate information about proper dosage and potential dangers. The law exempted some drugs from this requirement—those whose effective use and safety hinged on conditions too complex for a layman to assess. For these drugs, no label was necessary because people weren’t going to be buying them on their own say-so, but rather only on a doctor’s orders. Along with the prescription, doctors would dispense the information necessary for safe use—information that they got from the drug companies, and mostly from the army of salesmen, or detailers, that the industry now began to deploy.

  Like earlier pure drug laws, the 1938 version was a hit with doctors and drug companies. It gave more power to physicians and it left it up to the industry, whose in-house research staffs were still tiny, to recruit the doctors who would submit the safety information to the FDA, a cozy relationship that couldn’t hurt either party. Even more important, the law continued to give the question of efficacy a wide berth. That was still, as it had been in 1911, a matter of opinion, not something on which the FDA was going to weigh in.

  But that didn’t mean that efficacy was off the table completely. Drug safety is generally not a straightforward question; cases of outright poisoning like Elixir Sulfanilamide are rare. More common was the trouble that arose when effective drugs caused unforeseen harm as a function of, or in addition to, their therapeutic action—in other words, side effects. Sulfa drugs, for instance, could deplete white blood cells, but that risk, especially if managed by a skilled doctor, was clearly outweighed by the infection-killing benefits of the drug. Safety could only be assessed as part of a cost-benefit analysis in which efficacy had to play a part. And to establish the benefit side of that ledger, a few articles in a journal, testifying to the effectiveness of a drug and written by doctors whose credentials as researchers weren’t necessarily sterling and whose methods were haphazard, would suffice.

  The FDA did make at least one attempt to face the question squarely—with drugs that seemed to have no therapeutic effect whatsoever. These drugs were by definition unsafe because there was no benefit against which to weigh the cost. The agency focused on “glandular substances”—hormone-like remedies intended for women of a certain age that had no discernible pharmacological action—and proposed a fix: a label that acknowledged “there is no scientific evidence that such products…possess any therapeutic activity.”

  The industry howled in protest. The FDA, a trade journal complained, was telling drug makers that they “must undertake to educate physicians”—a function they were perfectly happy to fulfill when the news was good. And when the agency declared in 1948 that it couldn’t certify the safety of such a preparation, the industry hauled out the heavy guns. The FDA, it said, was interfering with the sacred doctor-patient relationship. One of Congress’s own doctors brought the warnings close to home. This edict, he told lawmakers, would make it impossible for him to offer glandular remedies to “the wives of my Congressional group.”

  That worked. By 1950, the FDA, reminding Congress that the Supreme Court had long ago tied its hands, was officially out of the efficacy business, but it remained in the drug-certifying business. And once again, regulation uplifted the businessmen. The prescription drug industry, which already claimed science for its side, and whose products were officially only understandable by experts, could now obtain the government’s imprimatur for its products without ever proving that they worked.

  This was the regulatory environment in which the antidepressant discoveries I described in the last chapter took place. The FDA could only comment on safety, it had to take doctors’ word about efficacy, and the agency had only a short time to respond to a new drug application (sixty days, after which the drug, in the absence of a response, was automatically approved) and a small staff (1,065 in 1956, the height of the industry’s postwar boom, and only 117 more than it had ten years earlier). The FDA’s $6 million budget was dwarfed by the $140 million that the pharmaceutical companies were spending annually on research and development. A drug could make it from bench to market in just a few months.

  In some respects, this laxity didn’t seem to matter. The 1950s were a time of true wonder drugs—not only the antibiotics that followed on the sulfa drugs and penicillin, but also chlorpromazine, corticosteroids and other hormone treatments, and diuretics for high blood pressure. The results, in lives saved or transformed, seemed to speak for themselves. Armed with its government-sanctioned success, the drug industry had gained the confidence of the public as a reliable supplier of magic bullets.

  The industry wasted no time in exploiting its achievements. Sometimes this was too much of a good thing, as Johns Hopkins doctor Louis Lasagna complained in 1954:

  The doctor of today is under constant bombardment with claims as to the efficacy of drugs, old and new. It is difficult, if not impossible, to read a journal, attend a medical meeting, or open the morning mail without encountering a new report on the success or failure of some medication.

  As if to prove Justice Holmes correct, medical journals were chock-full of opinion. Doctors, it seemed, were to be guided in their prescription choices by whoever among their colleagues sounded most trustworthy to them. Or, for that matter, by the advertising in their journals: the number of pages of JAMA devoted to drug company ads doubled in the 1950s, even as the AMA stopped requiring advertisers to earn its Seal of Acceptance before they could hawk their wares. And if doctors were too busy to read the journals or look at the ads, there were always detail men to regale them with the latest lab results over a round of golf.

  The resulting therapeutic chaos alarmed Estes Kefauver and his Antitrust and Monopoly Subcommittee—the same body that listened to testimony about iproniazid and other “quick pills.” A Tennessee Democrat who was Adlai E. Stevenson’s running mate in 1956, Kefauver was a liberal populist, an early champion of racial equality, and a fierce opponent of Joseph McCarthy. He fought the pharmaceutical industry to the death—his own, in 1963, the fifth year of his hearings. By that time he had infuriated doctors, the drug industry, and lawmakers like Everett Dirksen with his repeated attacks on the way that the prescription drug scheme had tilted the market toward the drug companies and turned doctors into their shills. “He who orders does not buy,” he said, “and he who buys does not order.” Consumers were at the mercy of drug companies; they couldn’t even evaluate advertising claims on their own because the targets of the ads, and the only people who saw them, were their doctors.

  At the very least, Kefauver argued, drug companies should have to prove the merits of their drugs to the government’s satisfaction before they began their advertising onslaughts. He proposed a law giving the FDA the power to require drugs to be proven “safe and efficacious in use”—a question that he thought science had finally made into a plain matter of fact. Although a few renegades, like Lasagna, supported Kefauver, the AMA strongly opposed his proposed law, warning that it was a step toward socialized medicine. The Judiciary Committee, to which Kefauver’s subcommittee belonged, took out the efficacy provision; the bill had been so defanged by the time it reached the Senate floor that Kefauver refused to manage it. It was headed for a quiet death in mid-1962 when lawmakers suddenly became aware of a magic bullet that had turned lethal, and that the citizens of the United States had only barely dodged.

  The drug was thalidomide, and it had been invented in the late 1950s by Chemie Grünenthal, a German company that was hoping to get into the psychiatric drug business with a new tranquilizer. The company hawked the drug, known as Contergan and available over the counter, not only as a tranquilizer, but as a sleep aid, a flu remedy, and, at least according to a doctor on retainer to Grünenthal, a suppressor of young men’s desire to masturbate. Grünenthal also claimed, based on animal studies, that the drug was completely nontoxic, which meant that it was safe to give to pregnant women.

  Frances Kelsey, an FDA physician-bureaucrat, was not so sure. In 1960, when Richardson-Merrell applied for a license to sell thalidomide in the United States, she asked the company to supply more information. She was concerned about reports of peripheral neuritis, irreversible nerve damage in patients taking thalidomide, and pointed out that there were contradictions in the safety data that might bear on this effect. She also noted that the company had not provided information on the drug’s effect on the developing fetus—not even on whether or not it crossed the placental barrier—a crucial absence given the fact that the company was pushing the drug as a remedy for women made jittery by their newly discovered pregnancies. Merrell was much slower to respond to Kelsey than it was to distribute 2.5 million doses of the drug to more than 1,200 doctors for them to use on a trial basis. Twenty thousand American patients received the drug.

  Working for Merrell, as for most pharmaceutical companies at the time, was pure gravy. The drugs were free, the doctors didn’t have to report results if they didn’t want to, and even if they did, they wouldn’t have to go to all the bother of gathering data or writing up the results. Merrell’s medical director, like medical directors at most drug firms, was glad to provide them with completed manuscripts attesting to the drug’s effectiveness and ready for their signature and to send them on to the medical journals, where they would become part of the record establishing the value of the drug.

  Even as Merrell was ramping up its marketing efforts, however, trouble was brewing. Doctors in Australia, England, and Germany were seeing not only peripheral neuritis, but something much more disturbing in the offspring of their thalidomide patients: a sudden increase in cases of phocomelia, a birth defect in which limbs fail to develop, and which leaves infants with hands and feet growing directly from their shoulders and hips. By the time epidemiological and animal studies, conducted over Grünenthal’s objections, had confirmed the link between thalidomide and phocomelia, thousands of European children had been born with massive deformities. In March 1962, Merrell, after two years of heated argument with Kelsey, finally withdrew its application for thalidomide.

  The European tragedy might have passed unnoticed in the United States, where fewer than twenty thalidomide babies were born. But Kefauver’s staff recognized the opportunity in the debacle and, three days after his proposal hit the Senate floor in July 1962, they informed a Washington Post reporter about what had happened overseas. The story ran on the front page. In short order legislators were falling over one another to do something about the drug industry, and there just happened to be a bill ready for their approval. The Kefauver-Harris Drug Amendments to the Food, Drug and Cosmetic Act, their efficacy clause intact, passed in 1962.

  The new law had absolutely nothing to do with thalidomide. Even Roman Hruska—the Nebraska senator who had once defended a Richard Nixon Supreme Court nominee who had been called mediocre by insisting that there was a place for mediocrity in public life—could see that “thalidomide was already barred and the public was protected under the 1938 act.” But no matter. Kefauver had gotten his way. For the first time pharmaceutical companies were required to prove to the FDA that their drugs worked in order to get a license to sell them.

  Turning this requirement to corporate advantage was easier than you might think, thanks in part to Justice Holmes. The new law had to address his original worry about Congress reaching into the realm of opinion. This meant that it wasn’t enough for Congress to say that science had made it possible to sort out fact from opinion; it had to specify how those facts would be established. The answer was that “substantial evidence…consisting of adequate and well-controlled investigations…by experts qualified by scientific training and experience” would establish the efficacy of a drug.

  That seemingly innocuous phrase—“substantial evidence”—contained a huge break for drug companies. Lawmakers had considered a different standard—the preponderance of evidence. The difference, as one senator put it, was that to require only substantial proof meant that a drug could be deemed effective “even though there may be preponderant evidence to the contrary based upon equally reliable studies.” Especially after the FDA determined that two independent trials with statistically significant results in favor of the drug constituted substantial evidence, this meant that a drug up for approval could have as many do-overs as a drug company wanted to pay for. So long as the research eventually yielded evidence of efficacy, the failures would remain off the books. This is why antidepressants have been approved even though so many studies have shown them to be ineffective.
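
  To see how forgiving this standard could be, consider a back-of-the-envelope simulation. The sketch below is in Python, and its numbers are purely illustrative, not from the book: it assumes a drug with literally no effect, patient outcomes modeled as unit-variance normals, and “success” defined as a two-sided t-test at p < 0.05 favoring the drug, which happens by chance alone about 2.5 percent of the time.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def run_trial(n_per_arm=100, true_effect=0.0):
    """One two-arm trial. Returns True if the drug arm comes out
    'significantly' better (two-sided t-test, p < 0.05, drug > placebo)."""
    placebo = rng.normal(0.0, 1.0, n_per_arm)
    drug = rng.normal(true_effect, 1.0, n_per_arm)
    t, p = stats.ttest_ind(drug, placebo)
    return p < 0.05 and t > 0

def trials_until_two_wins(max_trials=2000):
    """How many attempts does a drug with no true effect need before
    it has banked the two positive trials the standard asks for?"""
    wins = 0
    for attempt in range(1, max_trials + 1):
        if run_trial():
            wins += 1
            if wins == 2:
                return attempt
    return max_trials

# A useless drug "wins" roughly 2.5% of trials by chance alone, so two
# wins typically arrive somewhere in the neighborhood of 60-80 attempts,
# all but two of which never had to be reported.
results = [trials_until_two_wins() for _ in range(500)]
print(f"median attempts until approval-grade evidence: {np.median(results):.0f}")
```

  The asymmetry is the point: each failed attempt costs the sponsor money but no credit, so the standard rewards persistence rather than a preponderance of the evidence.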

  That wasn’t the only way that Kefauver-Harris turned into a sweet deal for the drug companies. The law’s drafters also had in mind a way to address the requirement for adequate and well-controlled investigations: the randomized clinical trial, the method used by my doctors at Mass General. This approach, as the industry soon figured out, could easily be made to say more than it really said and do something quite different from what it was intended to do. Both the RCT and the statistics used to assay its outcome are much better at telling scientists when a treatment doesn’t work than when it does; they are built to disprove rather than to prove drug efficacy.
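
  To make that claim concrete, here is another small sketch, again in Python and again with made-up numbers: a significant p-value says only that the data would be surprising if the drug did nothing. Hold a clinically trivial benefit fixed and grow the sample, and the trial crosses the p < 0.05 line anyway.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# A tiny true benefit: 0.1 standard deviations, likely meaningless at
# the bedside. The effect never changes; only the sample size does.
TRUE_EFFECT = 0.1

for n_per_arm in (50, 500, 5000):
    placebo = rng.normal(0.0, 1.0, n_per_arm)
    drug = rng.normal(TRUE_EFFECT, 1.0, n_per_arm)
    t, p = stats.ttest_ind(drug, placebo)
    verdict = "significant" if p < 0.05 else "not significant"
    print(f"n per arm = {n_per_arm:5d}   p = {p:.4f}   {verdict}")
```

  Nothing in the p-value separates a drug worth taking from one whose effect is real but negligible; the machinery is built to rule the null hypothesis out, not to rule a meaningful benefit in.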

  The eagerness among drug doctors to get more out of the RCT than it is equipped to provide features in the earliest attempts to sell it as the method for verifying drug efficacy. In explaining why he thought regulators should adopt the RCT, Louis Lasagna cited a momentous event in medical history. In 1747, Lasagna recalled, the ship’s doctor on the HMS Salisbury, James Lind, decided to check out an old and unproven theory that acids would cure scurvy. Since the Salisbury was returning to England after a long time at sea, it had no shortage of subjects. Lind divided a dozen scurvy sailors into six pairs. “Their cases were as similar as I could have them,” he later wrote. “They lay together in one place and had one diet common to all.” Lind randomly assigned each pair to one of six treatments, which included a dose of vinegar and a garlic concoction—and, fatefully, oranges and lemons. Most of the sailors stayed ill, but within a week, one of the citrus-eaters was so well that he “was appointed nurse to the rest of the sick,” and by June 16, when the Salisbury pulled into Plymouth, the other was fully recovered.

  That was a clever experiment. But even if he had randomly chosen the sailors who received the fruit and tried to keep their other conditions equal, Lind’s trial was not really controlled. He did not account for a crucial variable—the possibility that the placebo effect had cured the sailors. He knew who was getting which treatment, and he had a stake in the outcome. Even if his reports were honest, his belief might have been contagious, his enthusiasm the cause of the cure’s success. He couldn’t say with certainty that something in the fruit had cured the sailors because he did not control for credulity—his or his patients’.

  That might seem like an unfair criticism—after all, doctors of the time didn’t know that most of their medicines were placebos—but the confounding power of the placebo effect was understood by at least one eighteenth-century scientist. In 1784, Benjamin Franklin was living in Paris when Louis XVI tapped him to head a scientific commission investigating a claim that had all of Europe in a stir. Franz Anton Mesmer was telling people that he had discovered a force in the universe as real and important as gravity. He called it “animal magnetism,” and in parlors across the continent he was demonstrating how a physician could harness it in the service of healing. Patients swore that their rheumatism, skin ailments, asthma, and nervousness had been cured by Mesmer. The mesmerism craze alarmed the king, and he charged Franklin with the task of determining whether or not animal magnetism really existed.

 
