Ending Medical Reversal


by Vinayak K. Prasad


  In contrast, a skeptic of the “new snake” theory has no burden of proof. How can someone prove that a snake does not exist? One might produce an array of circumstantial evidence. Among all graduate students who claim to have discovered a new species of snake, what percentage is correct? Moreover, the discovery rate has surely changed over time. We might guess that in the 19th century, perhaps 20 percent of “new discoveries” were truly new, but by 2015, this number has surely fallen, perhaps as low as 1 percent. The skeptic might also show how a previously described snake, indigenous to Nicaragua, could be mistaken for this “new snake.” In the right light and against the right tree, could this have been a Central American tree boa? But a skeptic can never “prove” that the snake does not exist. The burden of proof, in cases of newly discovered species, must be on the discoverer.

  The burden-of-proof principle is not just a legal principle, but a fundamental principle of logical assertions. Depending on the claim, one party has the obligation to prove it is true. If a friend says he can drive from Chicago to Washington, D.C., in five hours, you can be skeptical, knowing the drive takes you twice as long, but you cannot disprove the claim. It’s up to your friend to prove his own claim.

  In law, the burden of proof is an accepted concept. For murder cases, the burden rests on the prosecutor to prove that the defendant committed the crime. It is not up to the accused to prove his innocence. For malpractice claims, the plaintiff has to show the doctor was at fault: that the doctor had a duty, that he was derelict in that duty, and that this dereliction directly contributed to damage (some harm a patient experiences). Proving these “4 Ds” (duty, dereliction, direct causation, and damage) is the plaintiff’s burden of proof.

  Although its application seems sensible, the concept of the burden of proof is not really considered when we think about medical innovation. Every doctor has been part of a conversation that goes something like this.

  DR. SMITH: “There is no evidence that what you are suggesting works.”

  DR. JONES: “True, but there is no evidence that it does not work, is there?”

  In fact, there is a saying in medicine that captures the extent to which this debate is alive: “The absence of evidence is not evidence of absence.” Not having proof that a treatment works is not proof that it does not work. In the day-to-day care of patients, when the evidence base for what needs to be done is often thin, this statement might, on occasion, make sense (more on this in chapter 18). However, it should not be applied to new drugs or devices.

  The burden of proof has a long-standing tradition in medicine with varying standards and requirements, but its modern incarnation began with a major expansion of the FDA’s power in the approval of new medications. For the first half of the 20th century, the U.S. Food and Drug Administration was charged with ensuring the safety of drugs. Whether or not a drug actually did what was advertised was not a requirement for approval. Then in 1962, with the passage of the Kefauver-Harris Amendment, the agency was given the additional task of ensuring the efficacy of drugs. Since that time, a drug developer has to show some evidence that the new drug actually does what it is purported to do. Of course, as we discussed in chapter 12, there are many ways that this requirement has been eroded. The accelerated-approval pathway allows drugs to come to market if they improve surrogate end points that are “reasonably likely to predict” true efficacy. In several dramatic cases, like that of bevacizumab in breast cancer (chapter 3), this policy has allowed drugs to reach the market that ultimately had no value (and sometimes did harm).

  Medicine’s experience with device development perfectly illustrates the lack of a clear standard of burden of proof. Many medical devices have been approved for use with little evidence that they work—the maker was not forced to prove that the product would work. A wonderful (and disturbing) example is the inferior-vena-cava filter used to prevent pulmonary embolism. Pulmonary embolism occurs when blood clots form, usually in the legs, often after a period of immobility (after surgery, a long car ride, or a flight from Istanbul) and then embolize (“travel”) to the lungs. When these clots lodge in the lungs’ circulation, they can be deadly. The inferior-vena-cava (IVC) filter is one of those ingenious inventions that seems like it should work. It is a small metallic basket, placed in the large vessel between the legs and the lungs, that is intended to catch a blood clot before it reaches the lungs. This device has been widely used for decades. We have both prescribed IVC filters for patients (admittedly, largely before we investigated the device). However, to date, there is no evidence that this basket actually improves any patient outcomes. There is clear evidence that having the basket implanted in your body causes harms—an increase in leg pain, swelling, and the risk of recurrent blood clots in the legs. The device gained approval through the FDA’s 510(k) pathway, which demands neither safety nor efficacy data.

  The debate around the use of the IVC filter perfectly illustrates why we need to clarify the burden of proof in medical innovation. On the one side are skeptics who (rightly) argue that there is no good evidence that the IVC filter works and that its use should be restricted to randomized trials testing its benefit. On the other side are believers who argue that the IVC filter has certainly not been resoundingly disproved and may work. “Why not use it?” they ask. In this debate the believers are winning, and the device is implanted hundreds of times each day in America.

  Much of medicine happens here, in the no-man’s-land in which there is little evidence that a treatment helps and often evidence that it may do harm. We believe it is time to formalize the burden-of-proof principle and set a high bar in medicine by requiring developers to clearly prove that an innovation works prior to its adoption. Currently (you have heard this before), new interventions are often debuted and accepted into practice before they have been shown to benefit patients in robust clinical trials. This is not done for malicious reasons but because the therapies make sense and all involved (developers, doctors, patients) hope that they will work. Years may then pass before the treatment is put to the test in large, well-done randomized trials. These trials, when they are finally completed, not infrequently show that the treatment is ineffective. In America today, it is not the innovators and manufacturers who are carrying the burden of proof to design, pay for, and run these trials. Instead, it is third parties funding creative (and brave) researchers who are willing to challenge medical standards years after the introduction of these widely used (and often highly profitable) therapies. This must change. The burden of proof that an intervention works must be borne by those who develop a new therapy and by the practitioners who prescribe it (both of whom are likely to profit from it).

  :: THREE ARGUMENTS FOR THE BURDEN OF PROOF

  We believe in a careful adherence to the burden-of-proof principle for three reasons. The first, alluded to above, is that placing the burden of proof on the developers of the therapies is the most practical approach. It is easier and safer to prove that a treatment works before deploying it widely than to prove that the therapy does not work (or does harm) after it is widely available. Although it may be easier to prove that placing coronary-artery stents does not benefit people with stable coronary-artery disease than it is to prove that a species of snake does not exist, such proof is predictably followed by caveat seekers. “Sure you proved stents do not work in that population,” they say, “but how about in older people, or people with diabetes, or people with higher cholesterol levels?” The appropriate response, and one that must be the new normal, is to begin with proof that the intervention is truly and unquestionably effective for the indication it is claimed to help.

  The second reason to endorse a strong burden of proof is that so few medical innovations actually are successful. Among all medical innovations, what percentage is likely to work? You might think back to chapter 7 and say about half of them. That is the figure we arrived at from our work and that of the British Medical Journal Clinical Evidence project (figure 7.2). However, it is worth remembering that we were asking what proportion of innovations that are already widely accepted are effective. Now we are asking, instead, what proportion of all medical innovations are likely to work. Half is probably an overestimate. The number is likely to be quite low. Of 100 drugs that are conceived, at most 1 successfully becomes a commercial product, and an even smaller proportion are resoundingly effective. This would give a rate of less than 1 percent. Moreover, as we have seen, no number of observational studies and no amount of mechanistic logic is sufficient to prove that a treatment will work. Recently a drug that by all measures should have been better than placebo in the treatment of liver cancer was found not to work. The authors of the study wrote, “Despite the strong scientific rationale [they go on to cite nine references] and preclinical data [four references!] [the experimental drug] plus [the] best supportive care failed to improve survival over placebo and the best supportive care.” In short, the probability that a therapeutic intervention actually works is very, very low—despite abundant “promising” and “encouraging” studies.

  Third, our argument for a strong burden of proof rests on the medical principle of primum non nocere—first, do no harm. By all means, a doctor’s goal should be to recommend treatments that benefit his patients. But if he cannot do that—if he cannot offer an intervention that has been tested and proved—then, at a minimum, he must do no harm. Better not to “give it a shot” and cause problems. Instead, perhaps the best thing to do is to provide support and comfort to the patient. From a historical standpoint, this principle would have served doctors well. Just think of bloodletting, trephination,* and arsenic therapy for syphilis. In the modern world, as in the past, doctors and patients often think that giving it a shot is preferable. But, if you think that, refer back to chapter 7.

  :: BEHAVIORAL CHANGES

  Where would adoption of this standard of burden of proof require behaviors to change the most? The sites of greatest change would be in regulatory agencies and in doctors’ offices. First, because doctors can only prescribe and recommend drugs and devices that are approved, we suggest that all approvals by the regulatory agencies must be based on clear evidence of safety and efficacy. Furthermore, the efficacy of a new treatment must be demonstrated to be at least equivalent to accepted and proven treatment options combined with the best medical care. A developer cannot test a marginal drug in a Third World country with no other care, show a benefit over placebo, and try to apply that result to the U.S. market, where patients would be getting other approved drugs and the best supportive care.† Clearly, placing the burden of proof on drug and device developers and holding the regulatory approval process to exacting standards by which to evaluate the data provided are important steps in decreasing reversal. It is nothing more than asking the FDA to fulfill the charge of the agency: ensure safety and efficacy prior to approval.

  Second, doctors themselves will need to change the way they practice. Just because the FDA approves a drug or device does not mean it automatically enters widespread usage; this requires that doctors recommend it. Here too, adopting the burden of proof would require new practices. Before a doctor recommends a treatment to a patient, she should ask herself whether there is good evidence that it works. At a minimum, if there is not strong evidence of efficacy, that information should be shared with patients. More radically, if the evidence is not there, the doctor should not offer the intervention.

  These behavioral changes will not be easy—especially for regulatory agencies. For the past 20 years, first with accelerated approval and now with the FDA’s “breakthrough designation,” more and more pills can reach the marketplace without good evidence that they improve the end points important to patients.* When one considers not just pills but devices and surgeries, regulatory agencies have often not insisted that inventors provide good evidence that their inventions work prior to their debut. It is worth noting that we are not alone in insisting on this provision of proof. When the Institute of Medicine considered the FDA’s device-approval process, it called for the most permissive pathway (called 510(k)) to be eliminated. It is time that we, as a society, make sure we get things right in medicine while the horse is still in the barn. One of the lessons of medical reversal is that horses are very difficult to rein in once they are loose.

  :: NUDGING OUR WAY FORWARD

  If the medical field accepts this new ethic and agrees that the burden of proving that a new treatment is effective should be placed on developers and upheld by regulators and prescribers, we will need to vastly increase the number of therapies being tested in randomized controlled trials. How do we accomplish this?

  Frequently at a restaurant, while you wait for your food (or your server), you are served complimentary bread or chips and salsa. Is this a beneficial practice for the restaurant? Recently the Freakonomics Radio podcast dedicated an episode to this question. Many interested parties debated whether an amuse-bouche increased or decreased a restaurant’s revenue. One argument went that a free appetizer would fill people up, lead to the ordering of less food or fewer desserts, and thus decrease revenue. Another argument was that yes, free bread does those things, but in doing so it makes people leave sooner, freeing up the table for another seating. Even though the restaurant moves fewer desserts, there is more table traffic and, in the end, the restaurant makes more money. You could argue about this forever (and, in fact, the discussion on this podcast did become a bit inane), or you could do a randomized controlled trial. But what would you randomize? Would it be each table? In that case, the worry is that if a control table sees that another table is getting free bread, the customers at the control table may feel put out. Perhaps you could randomize several restaurants? How about a nationwide study in which 200 restaurants that do not serve free bread are randomized to offering it or not, and sales are followed? In just a few weeks, we would have an answer that might end up changing restaurant practices globally. Why have we not seen this study? The problem: how do you get all of those restaurants to participate?
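
  To make the trial design concrete, here is a minimal sketch, in Python, of how the analysis might look. Everything in it is hypothetical: the restaurant labels, the revenue figures, and the simulated “bread effect” are invented purely for illustration; a real study would replace the simulated numbers with the sales actually recorded at the 200 participating restaurants.

    # Hypothetical sketch of the free-bread trial described above.
    # All numbers are invented; a real trial would use recorded sales.
    import random
    import statistics

    random.seed(0)

    # 200 restaurants that do not currently serve free bread.
    restaurants = [f"restaurant_{i}" for i in range(200)]
    random.shuffle(restaurants)

    # Randomize half to start offering free bread, half to continue as usual.
    bread_group = restaurants[:100]
    control_group = restaurants[100:]

    def simulated_weekly_sales(offers_bread: bool) -> float:
        """Stand-in for the weekly sales a real trial would measure."""
        base = random.gauss(50_000, 5_000)              # typical weekly sales
        bump = random.gauss(1_000, 500) if offers_bread else 0.0
        return base + bump

    bread_sales = [simulated_weekly_sales(True) for _ in bread_group]
    control_sales = [simulated_weekly_sales(False) for _ in control_group]

    # Compare average sales between the two arms.
    difference = statistics.mean(bread_sales) - statistics.mean(control_sales)
    print(f"Estimated effect of free bread on weekly sales: ${difference:,.0f}")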

  Sure, this example is silly, but the question of how to get restaurants enrolled is an important one. How do you increase participation in trials? In medicine there are countless important questions that small, simple trials could easily answer. The barrier is getting people to enter the trial. Consider the story of pediatric versus adult cancers. During the 1990s, pediatricians developed a rich and comprehensive network so that nearly all children with cancer were enrolled in some form of clinical trial. Much of their success—large improvements in survival for their patients—is attributed to the push for clinical trials. In contrast, to date, fewer than 10 percent of adult cancer patients participate in clinical trials, and, arguably, adult care has lagged behind. Of course, we cannot really compare improvements in adult and pediatric cancer care—they are apples and oranges. That being said, the comparison is hypothesis-generating. Would medicine be better if a larger proportion of patients were enrolled in clinical trials? And if so, how might we entice (or nudge) these subjects into trials?

  Richard Thaler, a professor of behavioral science and economics, and Cass Sunstein, a professor of law, introduced the nudge principle in their book Nudge: Improving Decisions about Health, Wealth, and Happiness. This principle might suggest the way to increase enrollment in clinical trials. The nudge principle is simple: if you want people to do something, make that action the default option while still letting them opt out if they want to. For years, activists tried everything to improve the percentage of people who volunteered to be organ donors. The simple solution is to change the question from “Do you want to donate?” to “Do you not want to donate?” For many things—from retirement savings accounts to healthy choices in the school lunch line—simply changing the default, while giving people the freedom to opt out, dramatically increases the desired behavior.

  How would the nudge principle work in medicine? Consider, as an example, the treatment of pneumonia. There are numerous potential treatments, and nobody really knows which is best. What if we said to the next 1,000 pneumonia patients, “There are many different ways to treat this infection. All of them work, but we do not know which is best. To study this, we are going to randomly pick one of these effective treatments for you, unless you want to opt out.” Most people would probably say, “Sure, that’s fine.” We doubt that many people have an allegiance to moxifloxacin over ceftriaxone. What if every patient who sought care in the hospital contributed not just to a single trial, but to multiple studies?
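  A rough sketch of what this “default randomization with an opt-out” might look like in code follows. It is an illustration of the nudge logic only, not an actual trial protocol: the two antibiotics are simply the ones named above, and the patient identifiers and opt-out handling are invented for the example.

    # Hypothetical sketch of opt-out ("nudge") enrollment in a pragmatic trial.
    import random

    TREATMENTS = ["moxifloxacin", "ceftriaxone"]  # both presumed effective

    def assign_treatment(opted_out: bool = False) -> str:
        """Randomization is the default; opting out returns usual care."""
        if opted_out:
            return "usual care (physician's choice)"
        return random.choice(TREATMENTS)

    # The next 1,000 pneumonia admissions are randomized unless they opt out.
    assignments = {f"patient_{i}": assign_treatment() for i in range(1_000)}
    print(assignments["patient_0"])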

  The number of questions that could be easily and quickly studied in this way is exciting. The questions do not need to be profound ones. Is it better to let the average hospitalized patient sleep through the night, or should we wake her to measure her blood pressure (as we presently do)? With enough patients, you could have a definitive trial done in a month. One of the most common reasons for admission to the hospital is syncope, the transient loss of consciousness and postural tone—in lay terms, fainting. Often, a patient’s history alone reveals the diagnosis, but sometimes the cause is more enigmatic. Presently we spend a lot of time and energy (and money) evaluating these patients. Does every person really need every test? A few randomized trials of a thousand people could optimize our approach to syncope.
