Pharmageddon
If the primary ethical as well as scientific purpose of controlled trials was initially to debunk unwarranted therapeutic claims, companies have transformed them into technologies that mandate action. The method originally designed to stop misguided therapeutic bandwagons has in company hands become the main fuel of the latest bandwagons. A method that is of greatest use when it demonstrates drugs either do not work or have minimal effects has become a method to transform snake oil into a must-use life-saving remedy. In the process, evidence-based medicine has become evidence-biased medicine.
EVIDENCE-BIASED MEDICINE
In 1972, two decades after randomized controlled trials came into use, Archie Cochrane, a physician based in Cardiff, Britain, who had worked with Austin Bradford Hill when the first clinical trials were being set up, published an influential book on the role of evidence in medicine. The vast majority of medical services and procedures still had not been tested for their effectiveness, he noted, while many other services and procedures that had been tested and shown to be unsatisfactory still persisted.37 Cochrane was a randomization extremist; in his view, not only doctors but also judges and teachers should be randomizing what actions they took to see what worked, but all three unfortunately had God complexes—they “knew” what the right thing to do was. As late as the 1980s, Cochrane claimed fewer than 10 percent of the treatments in medicine were evidence based.38
Cochrane made it clear that using controlled trials to evaluate treatments was not a matter of dragging rural hospitals up to the standards of Harvard or Oxford. On the contrary, mortality often seemed to him greater where there were more medical interventions, not fewer. After coronary care units (CCUs) came into fashion in the 1960s, for instance, he suggested randomizing patients who were having heart attacks to treatment in a CCU versus home treatment. Cardiff physicians refused to participate on the grounds that CCUs were so obviously right. Cochrane ran the trial in neighboring Bristol instead. When he first presented the results, he transposed them so that the home treatment results, which were actually the better ones, appeared under the CCU column and vice versa. His audience demanded an instant halt to home treatment. But the response was quite different when the “error” was corrected and it was made clear that the data favored home treatment. To this day there is a reluctance to believe that home care might be better than care in a CCU.
Iain Chalmers, a perinatologist and health services researcher from Oxford picked up the baton from Cochrane. He was similarly struck that physicians often seemed slow to implement procedures that had been shown to work and instead stuck with approaches that had not been shown to work or had been shown not to work. His concern lay not just in encouraging trials but in accessing the information from trials that had already been done.39 Everyone knew there had been an explosion in the medical literature since World War II, but efforts to collect reports of clinical trials began to reveal that there were far fewer published trials than many had thought. Some of the trials done had been published multiple times, while others had not been published at all.
Many of the articles that dictated clinical practice, furthermore, were framed as review essays, published under the names of some of the most eminent academics in the field, but on closer inspection, these often lengthy articles with their impressively long reference lists espoused only one point of view of a topic. These academics were not systematically considering all the available research, in other words. These were not scientific reviews—they were rhetorical exercises. Recognition that a scientific review should be systematic led Chalmers to set up the Cochrane Center in 1992 dedicated to amassing all available clinical trial evidence in every branch of medicine, even when the evidence had not been published.
It was David Sackett at Canada's McMaster University, outlining a program for educating medical students to practice according to the evidence, who branded the new dispensation evidence-based medicine.40 When it came to considering the evidence, Sackett drew up a hierarchy in which systematic reviews and randomized controlled trials offered gold standard evidence, while at the bottom of the hierarchy came individual clinical or anecdotal experience. This was a world turned upside down. Just a few years earlier, clinical judgment had been seen as the height of medical wisdom.
The implication was that we should submit every procedure to controlled trial testing. Even if newer treatments were more expensive as a result, in due course the health services would gain because money would be saved as ineffective treatments were abandoned and better treatments reduced the burden of chronic illnesses. This seemed to be a win-win claim for those paying for health services, for physicians and their patients, as well as for scientific journals. It quickly became almost impossible to get anything other than clinical trials published in leading journals.
When Cochrane advocated randomized controlled trials, when Chalmers campaigned for comprehensive collection of their results, and when Sackett drew up his hierarchy of evidence placing trial results at the top, no distinction was drawn between independent and company trials. Controlled trials were controlled trials. It seemed so difficult to get doctors to accept the evidence that their pet treatments didn't work that any indication doctors were practicing in accordance with clinical trial evidence seemed a step in the right direction.
There are two problems with this approach. The first applies to both independent and company trials—namely, that we appear to have lost a sense that, other than when they demonstrate treatments don't work, what controlled trials do primarily is to throw up associations that still need to be explained. Until we establish what underpins the association, simply practicing on the basis of numbers involves sleepwalking rather than science—equivalent to using plaster casts indiscriminately rather than specifically on the fractured limb.
The second is that in the case of company trials, the association that is marketed will have been picked out in a boardroom rather than at the bedside. One of the most dramatic examples of what this can mean comes from the SSRIs, where the effects of these drugs on sexual functioning are so clear that controlled trials would be merely a formality. In contrast, hundreds of patients are needed to show that a new drug has a marginal antidepressant effect. Yet the marketers know that with a relentless focus on one set of figures and repetitions of the mantra of statistical significance they can hypnotize clinicians into thinking these drugs act primarily on mood with side effects on sexual functioning when in fact just the opposite would be the more accurate characterization. Because it has become so hard to argue against clinical trials of this nature, there is now almost no one at the séance likely to sing out and break the hypnotic spell.
A cautionary tale involving reserpine may bring home how far we have traveled in the last half century. In the early 1950s, medical journals were full of reports from senior medical figures claiming the drug worked wonderfully to lower blood pressure; what was more, patients on it reported feeling better than well.41
Reserpine was also a tranquilizer and this led Michael Shepherd, another of Bradford Hill's protégés, in 1954 to undertake the first randomized controlled trial in psychiatry, in this case comparing reserpine to placebo in a group of anxious depressives.42 While reserpine was no penicillin, some patients were clearly more relaxed and less anxious while on it, so it was something more than snake oil. Shepherd's trial results were published in the Lancet, a leading journal; nevertheless, his article had almost no impact. The message sank without trace, he thought, because medicine at the time was dominated not by clinical trials but by physicians who believed the evidence of their own eyes or got their information from clinical articles describing cases in detail— “anecdotes”—as they would now be called.43
Ironically the two articles preceding Shepherd's in the same issue of the Lancet reported hypertensive patients becoming suicidal on reserpine.44 Reserpine can induce akathisia, a state of intense inner restlessness and mental turmoil that can lead to suicide. The case reports of this new hazard were so compelling, the occurrence of the problem so rare without exposure to a drug, and the onset of the problem subsequent to starting the drug plus its resolution once the treatment was stopped so clear that clinical trials were not needed to make it obvious what was happening. On the basis of just such detailed descriptions, instead of becoming an antidepressant, reserpine became a drug that was said to cause depression and trigger suicides. But the key point is this—even though superficially contradictory, there is no reason to think that either the case reports or the controlled trial findings were wrong. It is not so extraordinary for a drug to suit many people but not necessarily suit all.
Fast forward thirty-five years to 1990. A series of trials had shown that Prozac, although less effective than older antidepressants, had modest effects in anxious depressives, much as reserpine had. On the basis of this evidence that it “worked,” the drug began its rise to blockbuster status. A series of compelling reports of patients becoming suicidal on treatment began to emerge, however.45 These were widely dismissed as case reports—anecdotes. The company purported to reanalyze its clinical trials and claimed that there was no signal for increased suicide risk on Prozac in data from over three thousand patients; in fact there was a doubling of the risk of suicidal acts on Prozac, but because this increase was not statistically significant it was ignored. Even if Prozac had reduced suicide and suicidal-act rates, it would still be possible for it to benefit many but pose problems to some. But the climate had so shifted that the fuss generated by the Prozac case reports instead added impetus to the swing of the pendulum away from clinical reports in favor of controlled trials.
But as we saw in the analysis of antidepressants, in addition to the 40 percent of patients who responded to placebo, a further 50 percent (five out of ten) did not respond to treatment at all. In publishing only controlled trials and not the convincing reports of hazards for treatments like the antidepressants, journals are thus privileging the experience of the one specific drug responder in ten over the ninefold larger pool of those who in one way or another are not benefitting specifically from the drug. Partly because of selective publication practices and partly because of clever trial design, only about one out of every hundred drug trials published in major journals today is likely to do what trials do best—namely, debunk therapeutic claims. The other ninety-nine are pitched as rosily positive endorsements of the benefits of statins or mood stabilizers, of treatments for asthma or blood pressure or whatever illness is being marketed as part of the campaign to sell a blockbuster.
The publishing of company trials in preference to carefully described clinical cases, allied to the selective publication of only some trials of a drug and to interpretations of the data that are just plain wrong, amounts to a new anecdotalism. The effect on clinical practice has been dramatic. Where once clinicians were slow to use new drugs if they already had effective treatments, and where, if their patients had a problem on a new drug, they stopped the treatment and described what had happened, we now have clinicians trained to pay heed only to controlled trials—clinicians who, on the basis of evidence that is much less generalizable than they think, have rapidly taken up a series of newer but less effective treatments.
The development of randomized controlled trials in the 1950s is now widely acclaimed as at least as significant for the development of medicine as any of the breakthrough drugs of the period. If controlled trials functioned to save patients from unnecessary interventions, it would be fair to say they had contributed to better medical care. They sometimes fill this role, but modern clinicians, in thrall to the selective trials proffered up by the pharmaceutical companies, and their embodiment in guidelines, are increasingly oblivious to what is happening to the patients in front of them, increasingly unable to trust the evidence of their own eyes.
We have come to the outcome that Alfred Worcester feared, but not through the emphasis on diagnosis and tests that so concerned him. It has been controlled trials, an invention designed to restrict the use of unnecessary treatments and tests and one he would likely have fully approved of, that has been medicine's undoing.
This company subversion of the meaning of controlled trials does not happen because of company malfeasance. It happens because we, our doctors, the governments or hospital services that employ our physicians, and the companies themselves all want treatments to work. It is this conspiracy of goodwill that leads to the problems outlined here.46 But in addition, uniquely in science, pharmaceutical companies are able to leave studies unpublished or cherry-pick the bits of the data that suit them, maneuvers that compound the biases just outlined.
Two decades after introducing the randomized controlled trial, having spent years waiting for the pendulum to swing from the personal experience of physicians to some consideration of evidence on a large scale, Austin Bradford Hill suggested that if such trials ever became the only method of assessing treatments, not only would the pendulum have swung too far, it would have come off its hook.47 We are fast approaching that point.
4
Doctoring the Data
By 1965, the flood tide of innovative compounds ranging from the early antibiotics to the first antipsychotics that had transformed medicine in the 1950s appeared to be ebbing. Desperate to continue with business as usual, the pharmaceutical industry had to decide if it made business sense to allow its researchers to pursue scientific innovations in quite the ad hoc way that had worked so well for the previous two decades. This was the question the major drug companies put to a new breed of specialists, management consultants, who were called in to help them reorganize their operations with a view to maintaining the success of previous decades. The answers these consultants provided have shaped not only industry but also the practice of medicine ever since.
In the preceding decades, scientists working within pharmaceutical companies took the same approach to research that scientists based in universities did: they conducted wide-ranging, blue-skies research out of which new compounds might serendipitously fall and for which there might initially be no obvious niche—as had once been the case for a host of drug innovations that later became huge money makers, including oral contraceptives, the thiazide antihypertensives, the blood-sugar-lowering tolbutamide, chlorpromazine and subsequent antipsychotics, and imipramine and later antidepressants. But under changed conditions and the coming of the consultants, the mission changed to one in which clinical targets were to be specified by marketing departments and pursued in five-year programs. If that meant discarding intriguing but unplanned leads, so be it.
Where once pharmaceutical companies had been prospectors for drugs, more like oil exploration companies, they now changed character. Their share prices had soared but these were now dependent on the recommendations of analysts who scrutinized the company's drug pipeline and business plans. Accordingly companies had to do business in a different way. Even though the best way to find new drugs is to watch closely for drug side effects in people who are taking them, just as simply drilling oil wells is still the best way to find oil, this avenue of drug development was cut off.
Fatefully, in tandem with these corporate changes a second wave of drug development had come to fruition. The original, serendipitous discoveries of the 1940s had not only offered stunning new treatments but also greatly advanced our understanding of biology. Out of this new understanding came a further group of compounds, like James Black's beta-blockers for hypertension and H-2 antagonists for ulcers, as well as Arvid Carlsson's selective serotonin reuptake inhibiting antidepressants (SSRIs). This second wave initially gave hope to those who like business plans—it appeared that drug development could be made rational and predictable in a manner that might fit into a business model. But since the 1970s, this new tide has also gone out. The number of new drugs registered yearly and the number of breakthrough compounds has dropped dramatically, leading companies to hunt for new solutions, one of which has been to outsource drug development to start-up companies.
While the changes in drug development programs that began in the 1960s have been enormous, the key reorganization came at the marketing and clinical-trial end of company operations, and these changes have transformed pharmaceutical outfits into companies that market drugs rather than companies that manufacture drugs for the market.1 As it happened, these corporate changes coincided with three other developments that were to have far more profound effects on the drug industry than any management consultant in the 1960s would likely have supposed. It was one thing to reorganize pharmaceutical companies; it was quite another to end up with almost complete control of therapeutics.
The first of these developments was a decline in US government funding for clinical research beginning in the 1960s. If industry was already funding many studies in order to get approval for its drugs, why not let it carry an even larger share of the burden? So the thinking went. For well-done randomized controlled trials, it shouldn't make much difference where the funding came from.