
Ending Medical Reversal

by Vinayak K. Prasad


  The problem of industry-sponsored bias is not unique to Tamiflu; it is ubiquitous in research that is paid for by the pharmaceutical industry. We know this because there are other ways to get your research funded, and we can compare research supported by different sorts of funding. Trials are also paid for by agencies that do not have a vested interest in one outcome or another—agencies such as the U.S. National Institutes of Health, the Veterans Administration, and the Department of Defense. Additional funding is obtained through nonprofit grants, private funds, cooperative research groups, and charitable organizations. An investigation published in the BMJ in 2008 compared industry-sponsored trials to trials paid for by other entities. The authors found that industry-sponsored studies were more likely to reach positive conclusions regarding the benefits or cost-effectiveness of a therapy and more likely to test a new therapy against a placebo (as opposed to a real competitor). When considering both original trials and meta-analyses, industry-sponsored studies are four times as likely to reach a positive conclusion.
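
  What does “four times as likely” mean in practice? A minimal sketch, using invented counts rather than the BMJ study’s actual data, shows how such a ratio is computed from published conclusions:

```python
# Invented counts for illustration only -- not the BMJ study's data.
industry_positive, industry_total = 80, 100        # 80% positive conclusions
nonindustry_positive, nonindustry_total = 20, 100  # 20% positive conclusions

risk_industry = industry_positive / industry_total
risk_nonindustry = nonindustry_positive / nonindustry_total

# Ratio of the two proportions: how much likelier a positive conclusion is.
print(f"Ratio: {risk_industry / risk_nonindustry:.1f}x")  # -> 4.0x
```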

  Two other findings in the comparison of industry- and non-industry-sponsored research warrant note. First, compared with nonindustry research, industry-sponsored studies are less likely to be published or presented at all, and those that are published often appear only after a delay. Second, and perhaps surprisingly, if you score the quality of industry-sponsored studies by any set of trial-design criteria, industry-sponsored research scores just as highly. How can we make sense of these findings?

  The first issue gets at selective reporting. Selective reporting happens when only some of the trials that get conducted on a question of interest are reported. The trials that we see are preferentially those that were positive. This was part of the reason that an earlier analysis of Tamiflu, published by the same group behind the recent BMJ article, concluded that the drug worked. For that analysis, the investigators had only some of the data, and those data looked good. It is not hard to understand why selective reporting leads to bias in the literature. It is the same reason why letters of recommendation are almost always positive. Anyone can find three people willing to say something nice about them—the breadth of opinion is only revealed by consulting everyone they know.
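
  A minimal simulation makes the mechanism plain. Here we invent a drug with no true effect, run many trials, and “publish” only the favorable-looking ones; all numbers are illustrative assumptions, not real trial data:

```python
import random
import statistics

# Simulate many trials of a drug with NO true effect, then "publish"
# only those whose observed effect looks favorable.
random.seed(0)
N_TRIALS = 1000
PATIENTS_PER_ARM = 50

all_effects, published_effects = [], []
for _ in range(N_TRIALS):
    # Drug and placebo outcomes come from identical distributions:
    # the true effect is exactly zero.
    drug = [random.gauss(0, 1) for _ in range(PATIENTS_PER_ARM)]
    placebo = [random.gauss(0, 1) for _ in range(PATIENTS_PER_ARM)]
    effect = statistics.mean(drug) - statistics.mean(placebo)
    all_effects.append(effect)
    if effect > 0.2:  # only "positive-looking" trials get written up
        published_effects.append(effect)

print(f"Mean effect, all trials:       {statistics.mean(all_effects):+.3f}")        # ~0
print(f"Mean effect, published trials: {statistics.mean(published_effects):+.3f}")  # clearly > 0
```

  The literature we can read shows a benefit even though, across all the trials actually run, there is none.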

  The second issue, that industry-sponsored studies appear to be as methodologically sound as nonindustry studies, emphasizes the inadequate measures by which we judge the quality of medical research. When we examine a randomized, double-blind, placebo-controlled trial, we see that industry-sponsored researchers do just as well at randomizing and blinding the participants. However, the criteria we use to judge research capture just a few very basic measures. This is the equivalent of checking to see if a person has a pulse and concluding that he is in good health. It is not surprising that the industry does just as well on these measures—the way in which bias is introduced is subtle (and creative).

  Recently, one of us noticed a worrisome discrepancy in a trial comparing two drugs. The trial called for doses of the drugs to be reduced if patients experienced prespecified adverse effects. When the trial started, the two drugs were dosed equivalently, but if a patient required a dose reduction, the dose of the industry drug fell a bit, while the dose of the comparison drug fell a lot. This kind of trial design does not routinely set off any warning bells—the bias in the design is easy to miss—but it shows how hard it is to develop a scale to catch all the small ways a trial may be biased.
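
  Here is a minimal sketch of how such an asymmetric protocol tilts exposure. The specific reduction fractions are hypothetical, since the trial in question is not named:

```python
# Hypothetical dose-reduction rules, for illustration only.
def reduce_dose(current_dose: float, drug: str) -> float:
    """Apply the protocol's dose reduction after an adverse event."""
    if drug == "industry":
        return current_dose * 0.9  # industry drug: dose drops 10%
    return current_dose * 0.5      # comparator: dose drops 50%

# Both drugs start at an "equivalent" dose of 100 units.
dose_industry = dose_comparator = 100.0
for _ in range(2):  # two adverse events in each arm
    dose_industry = reduce_dose(dose_industry, "industry")
    dose_comparator = reduce_dose(dose_comparator, "comparator")

print(f"Industry drug:   {dose_industry:.0f} units")    # 81 units
print(f"Comparator drug: {dose_comparator:.0f} units")  # 25 units
# After identical adverse events, the comparator is dosed far below the
# industry drug, handicapping it for the remainder of the trial.
```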

  Given these points, it should not be surprising that physicians are more wary of industry-sponsored trials than those sponsored by governmental or nonprofit groups—collectively called nonconflicted bodies. Aaron Kesselheim and colleagues proved this in a recent study. The authors randomly assigned doctors to read summaries of a hypothetical study. Some doctors were told that the pharmaceutical industry funded the study, and others were told the National Institutes of Health (a nonconflicted body) did. The rest of the text was identical. Across the board, physicians were less confident in the results and less likely to support the drug if the trial was funded by industry.

  Some use this paper to argue that criticism of industry trials is excessive because doctors are already skeptical of industry trials. Instead, we conclude that Kesselheim’s finding is reassuring. Industry sponsorship should set off our “spidey sense,” because bias is often detected only in the fine print and introduced in complicated, nuanced, and creative ways. Industry-sponsored trials do a masterful job at meeting the basic requirements for a good trial, and doctors have to look far beyond article summaries to find the reasons to be skeptical. There is another problem with this study, but we will save it for interested readers in the footnote.*

  CONFLICTS OF INTEREST IN PRACTICE GUIDELINES

  Doctors do not have time to read all the research that is out there. Most of our time is spent caring for patients. For those of us in academics, we are also trying to squeeze in teaching and scholarly activity. Our colleagues say that they can rarely read 10 articles a week, let alone read every paper with the fine-tooth comb needed to pull out every little bit of bias. For this reason, we rely on professional societies and teams of experts to synthesize the data on important medical questions and develop broad treatment principles. This task is almost never easy. Developing a guideline requires sifting through heaps of sometimes-conflicting data, weighing the risks and benefits of treatments, and trying to come to a definitive recommendation. This needs to be done while recognizing that the data you need most might not exist or might not be available. Not surprisingly, guideline groups sometimes reach controversial conclusions.

  In 2013 joint guidelines from the American College of Cardiology and the American Heart Association made a controversial recommendation for using cholesterol-lowering statin drugs for primary prevention—treating healthy people to prevent them from having a heart attack or stroke. The group recommended that people who have never had a cardiovascular event (such as a heart attack) should take a statin if their ten-year risk of a cardiovascular event is greater than 7.5 percent. Some estimate that if this guideline is widely adopted, the number of Americans taking a statin will increase by 12.8 million people. If followed globally, the ACC/AHA recommendations could result in more than a billion people taking a statin. The ACC/AHA recommendation is an enormous public-health intervention.
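
  As a decision rule, the recommendation reduces to a single threshold. A minimal sketch (the risk estimate itself comes from the guideline’s pooled cohort equations, which are not modeled here):

```python
def recommend_statin(ten_year_risk: float) -> bool:
    """Sketch of the 2013 ACC/AHA primary-prevention threshold: for a
    person with no prior cardiovascular event, recommend a statin when
    the estimated ten-year risk of a cardiovascular event exceeds 7.5%."""
    return ten_year_risk > 0.075

print(recommend_statin(0.08))  # True  -> statin recommended
print(recommend_statin(0.05))  # False -> no statin under this rule
```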

  Whether the ACC/AHA got it right or wrong is controversial and could be (and probably will be) the topic of a book by itself. What is known for sure is that statins have real benefits and real side effects and that the effects on overall mortality, when statins are taken by healthy people, are uncertain. It is therefore fair to have a debate about the ACC/AHA recommendation and how it was reached. One of the more troubling aspects to consider is that half of the guideline committee panelists had financial ties to the manufacturers of statins. You have to question whether the members of the committee were acting impartially or whether they were influenced by the money they had been paid.

  The problem with financial conflicts in guidelines is several-fold. Some guidelines are commissioned by professional societies that receive a large chunk of funding from industry. Drug companies themselves may sponsor guidelines. Eli Lilly provided 90 percent of the funding behind guidelines for septic shock, called “Surviving Sepsis.” It surprised no one that Lilly’s own drug, Xigris, was recommended prominently. Xigris turned out to be an example of medical reversal when a 2012 randomized trial found no benefit of the drug in sepsis. A guideline may also be tainted when a panelist has received or is receiving payments or royalties from industry. In chapter 15 we discuss other, nonfinancial, ways that guidelines may be tainted, but what is clear is that guidelines are almost certainly influenced by financial conflicts of interest. When a recommendation is made in the setting of these conflicts, groundwork for a future reversal is laid.

  THE FDA APPROVAL PROCESS

  Another way our medical system predisposes us to medical reversal is a lenient standard for the approval of new therapies. When drugs, devices, and procedures are allowed to come to market without clear evidence that they work (yet are paid for by the government and insurers), conditions are ripe for reversal.

  The U.S. Food and Drug Administration gets criticized from all directions. Proponents of evidence-based medicine criticize approvals for coming too early, without clear evidence that a drug works. The pharmaceutical industry and patient advocates often fault the FDA for taking too much time to make decisions, thus stifling “progress.” Working at the FDA is a hard job. In our experience, FDA regulators are some of the smartest and most sincere people in the business. They are trying to do the best they can, balancing diverse interests and pressures, within the rules mandated by Congress. For example, the FDA cannot consider cost as part of its deliberations, even though the United States does not have a group of experts (as many European nations have) who subsequently balance costs and benefits following a drug’s approval. The FDA has a mandate to ensure safety and efficacy, but not comparative efficacy. If a company develops the eighth statin drug, the FDA cannot demand that the company prove that this drug is more effective than the cheaper ones that are already on the market.

  Our position here is obvious: because we are concerned with medical reversal, we favor higher standards for approval upfront. Drugs, devices, and procedures should not be debuted unless we have good evidence that they work or unless a trial adequately testing whether they work has recruited all its participants and is ongoing. To emphasize how critical this stance is, we focus on three areas related to the FDA approval process, all of which predispose us to adopt therapies before their time. Two of these issues are directly related to the FDA approval process: the standards for device approval and the process known as accelerated approval. The third issue is a striking example of pharmaceutical-company malfeasance: off-label marketing.

  :: DEVICE APPROVAL

  Medical devices are a serious matter. These constructs of plastic, metal, and batteries are often implanted within the body. The purpose of a medical device is to ease suffering or prevent death. Common examples are artificial joints and pacemakers. In principle, thinking about a device is no different from thinking about a pill. If a new pill is developed and claims to reduce chronic back pain, it should be tested before we use it. Assemble a group of people in whom you think it might work, randomize them to the pill or placebo, and monitor pain scores. Follow these people for months or (ideally) years, and observe whether the drug has any benefit and whether that benefit persists over time. Chronic back pain is a persistent condition. The classic placebo response would be an initial benefit that gets smaller over time.

  Now, let’s say that instead of a pill, the treatment at issue is a neurosurgically implanted electrical stimulator. The theory is that the device provides gentle electrical current to nerve roots and reduces pain. How should we test this device? The standard is the same. Assemble people in whom you hope the device will work, randomize half to the device, and the other half to . . . Here, it gets complicated. Ideally, you would actually randomize your participants to three groups. One group would get the actual device. One group would have a sham device, an inert box, implanted. The last group would get the best management they could without the implantation of any hardware. If the stimulator’s effect is real, only the group with the working device will have a benefit. If the effect is that of a placebo, both the real and the sham device groups will improve. If all groups are identical, then the device does not even have a placebo benefit. If the medical-management group does best, then you have to worry that the device is actually causing pain.
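
  A small simulation captures the logic of this three-arm design. All effect sizes below are invented purely to illustrate how the arms are compared:

```python
import random
import statistics

random.seed(1)

def mean_pain_reduction(n, device_effect, placebo_effect):
    """Mean pain-score reduction for one arm (higher is better)."""
    return statistics.mean(
        random.gauss(device_effect + placebo_effect, 1.0) for _ in range(n)
    )

n = 100
real = mean_pain_reduction(n, device_effect=1.0, placebo_effect=0.5)
sham = mean_pain_reduction(n, device_effect=0.0, placebo_effect=0.5)
medical = mean_pain_reduction(n, device_effect=0.0, placebo_effect=0.0)

print(f"real device:        {real:+.2f}")
print(f"sham device:        {sham:+.2f}")
print(f"medical management: {medical:+.2f}")
# real > sham            -> a true effect beyond placebo
# real ~ sham > medical  -> placebo effect only
# all three similar      -> not even a placebo benefit
# medical does best      -> worry that the device causes harm
```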

  Unfortunately, the logical trial we describe is rarely done. Spinal-cord stimulators really do exist but have been compared only to medical management in small, randomized trials (of about 100 people) or against a surgical operation (itself not proved) in even smaller studies. The stimulators used in clinical practice have never been compared to a sham device. Some stimulators were approved based only on “before and after studies,” which showed that a few dozen patients’ symptoms improved after receiving the device. This low bar for device approval is especially concerning because most devices have the potential to cause harms. In the case of spinal-cord stimulators, nearly 20 percent of patients experience electrical lead migration; in other words, the wires end up somewhere different from where they were placed. There are many other dangerous complications, the risks of which might be worth taking if the stimulator were actually known to work.

  For another example of the low standards that are currently the norm for the approval of medical devices, we look to the work of Sanket Dhruva, Lisa Bero, and Rita Redberg, who systematically examined the strength of evidence behind the FDA approval of high-risk cardiovascular devices. In 123 studies supporting the approval of 78 devices, randomized trials accounted for only 27 percent of studies (and remember, we are still not talking about comparison to a sham device). Nearly one-third of studies used historical controls, which, as we have discussed, are notoriously prone to bias. The vast majority of studies (88 percent) had a surrogate end point as the primary one, and 78 percent of studies had discrepancies between the number of patients assigned to a treatment and the number subsequently analyzed. As the experience with Tamiflu made clear, you have to consider outcomes in all the patients you assign to treatment, not just in those who are most likely to benefit.
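
  Counting everyone assigned, not just those who remain, is the intention-to-treat principle. A minimal sketch with invented numbers shows how quietly dropping assigned patients flatters a treatment:

```python
# Invented numbers, for illustration only: 100 patients are assigned to
# a treatment, 40 improve, and 20 drop out early (often those faring
# worst) and are excluded from a completers-only analysis.
assigned = 100
responders = 40
dropped = 20  # none of whom improved

per_protocol_rate = responders / (assigned - dropped)  # completers only
intention_to_treat_rate = responders / assigned        # everyone assigned

print(f"Per-protocol response rate:       {per_protocol_rate:.0%}")       # 50%
print(f"Intention-to-treat response rate: {intention_to_treat_rate:.0%}") # 40%
# Excluding the patients who fared worst inflates the apparent benefit.
```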

  :: ACCELERATED APPROVAL

  In the early 1990s, drawing on the experience of the HIV/AIDS epidemic, the FDA pioneered the accelerated-approval program. This program allows drugs for serious diseases, diseases for which there are few treatment options, to gain approval by showing benefit on a surrogate end point that is reasonably likely to predict a clinical benefit. After approval is granted, the drug is given a period of time to prove that it benefits a more important end point. Since the adoption of this program, hundreds of drugs have gained FDA approval in this manner.

  Accelerated approval is not a bad idea. The problem is that it has not been instituted as originally designed. Confirmatory studies seldom come quickly. Often, studies have not completed participant enrollment (or have not even begun) when approval is granted. After a drug is approved, it is much harder to get participants to sign up for a study. Too often, confirmatory studies never get completed. In 2009 the Government Accountability Office summarized the experience of nearly 20 years of accelerated approval. Although fully one-third of postapproval confirmatory studies had not been completed, the FDA had never removed an “accelerated-approval” drug from market. This included examples in which companies had been delinquent in providing confirmatory data for as long as 13 years.

  In 2010 the FDA finally stood its ground. The agency revoked the approval of bevacizumab for breast cancer. (We talked about bevacizumab in connection with surrogate outcomes in chapter 3.) The drug had initially been shown to slow the growth of breast tumors, but multiple studies failed to replicate this finding, and the drug had absolutely no benefit on survival. The decision to remove the approval for bevacizumab generated an outcry, but it was based on rigorous data analysis. Despite the revoked approval, many insurance companies continue to pay for bevacizumab for the treatment of breast cancer, and the National Comprehensive Cancer Network (NCCN), an alliance of 25 cancer centers, continues to recommend it in its guidelines. Because Medicare must pay for anything recommended by the NCCN, Medicare must pay for the off-label use of a drug that has not been proved to work and which the government’s own agency has condemned. The case of bevacizumab shows just how hard it is to take away a drug that was granted accelerated approval.

  Accelerated approval needs to operate in the way it was intended. The benefits of the program are obvious—if a drug goes on to make a positive difference, then the more people that get early access to it, the better. But there are potential risks. A drug might turn out to be ineffective, or even harmful. For the system to work, drugs must be subjected to a confirmatory study in a reasonable length of time. If a drug is found to be ineffective, the health-care industry must accept the data and quickly withdraw the drug.

  :: OFF-LABEL MARKETING

  When the FDA approves a new treatment, it approves it for a given indication. Atorvastatin is approved to treat various sorts of high cholesterol in various situations. Phenytoin is approved to treat seizures. Once approved, however, these drugs may be prescribed by doctors for other indications. If a doctor wants to prescribe phenytoin for hair loss, she can. This may seem odd but in some cases is reasonable. As we discuss in chapter 18, it is sometimes necessary for doctors to use treatments not supported by a robust evidence base. There are treatments that we have used for years for indications for which they have not been approved. Why have they not been approved? Sometimes there is little incentive to study an intervention because the efficacy seems apparent but there is little prospect for money to be made from it. (No one will pay for a trial if there is no prospect for large returns.) Sometimes drug development and clinical experience proceed faster than the evidence base. Clinicians recognize that a drug is effective before robust trials are designed and completed. Neither of these situations is ideal, but they are the reality. As long as the doctor prescribing a drug for an off-label use is doing so with knowledge of the evidence (or lack thereof) and the patient is well informed, this approach is acceptable. The doctor prescribing phenytoin for hair growth would need to know that there are no data that the drug regrows hair and would need to inform the patient of this fact. Whether insurance companies—and, indirectly, all of us—should pay for these remedies is another question entirely.

  What is not acceptable is for a drug company to develop a drug for one indication and then market it for something else. This is the classic bait-and-switch, and it is not rare. In this situation a pharmaceutical company wins approval for a drug for an indication for which it clearly works. Then the company tries to persuade doctors to use it for another indication in order to sell more product. Usually, the indication they promote it for is far more common than the one for which it won approval. In some instances, approval for the new indication may follow a year or two later and no harm is done. In other instances we find that the drug really does not work for the new indication and we have another example of reversal—doctors have been persuaded to use an ineffective drug.

 
