Bad Pharma


by Ben Goldacre


  This is a universal problem in the politics and management of regulators, and it can be seen in the organisational structures: around the world, the departments in charge of monitoring safety and removing drugs from the market are much smaller and less powerful than the departments that approve drugs, which makes institutions reluctant to impose suspensions. Since we are discussing matters of line management and organisational structure, and you might suspect that this is merely a vague, handwaving assertion, let me tell you that it is also the verdict of every serious study of regulators,49 from the Institute of Medicine50 to the semi-official biography of the FDA,51 various academics,52 and people from within the organisations.

  That is the reason there were so many calls for the EU to create a new Drug Safety Agency, and that is why it’s so concerning that these calls have been ignored. In fact, the same old models have been put back in place, only under different names. The EMA’s Pharmacovigilance Risk Assessment Committee, which decides on whether to remove an approved drug from the market, still reports to the Committee for Medicinal Products for Human Use, which is the one that approves them in the first place. This perpetuates all of the old problems: removal remains difficult, lower in status than approval, and an embarrassment to the approvers.

  So what steps can a regulator take when it has established that there is a problem? In very extreme cases it can remove a drug from the market (although in the US, technically, drugs usually stay on the market, with the FDA advising against their use). More commonly it will issue a warning to doctors through one of its drug-safety updates, a ‘Dear Doctor’ letter, or by changing the ‘label’ (confusingly, in reality, a leaflet) that comes with the drug. Drug-safety updates are sent to most doctors, though it’s not entirely clear whether they are widely read. But, amazingly, when a regulator decides to notify doctors about a side effect, the drug company can contest this, and delay the notice being sent out for months, or even years.

  In February 2008, for example, the MHRA published a small piece in its bulletin Drug Safety Update, which is read by all too few people. The article stated that the agency was planning a change to the drug label for all statins, a class of drug given to reduce cholesterol and prevent heart attacks, following a review of clinical trial data, spontaneous reports of suspected adverse drug reactions, and the published literature. ‘Product information for statins is being updated to reflect a number of different side-effects as class effects of all statins.’ It explained: ‘Patients should be made aware that treatment with any statin may sometimes be associated with depression, sleep disturbances, memory loss, and sexual dysfunction.’ The agency also planned a new warning that – very rarely – statin therapy might be associated with interstitial lung disease, a serious medical condition.

  The decision to add these new side effects to the label was made in February 2008, but it took until November 2009 for an announcement that the change was finally being made. This is a delay of almost two years. Why did it take so long? The Drugs and Therapeutics Bulletin discovered the reason: ‘One of the innovator MA [marketing authorisation] Holders was not in agreement with this wording.’53 So, a drug company was able to delay the inclusion of safety warnings on a whole class of drugs prescribed to four million people in the UK for twenty-two months because it didn’t agree with the wording.

  But what good would have come of changing the label in any case?

  This is the final component of our story. It’s difficult for doctors and patients to get a clear, up-to-date picture of the risks and benefits of drugs from any source; but since the regulators have privileged access to information, we should expect them to do a particularly clear job of communicating what they hold. There is, by definition, no competition in providing this information, and no opportunity to shop around: the regulators are the only people with access to all of the data.

  Drug labels are lauded by regulators as a single, awesome repository of information, by which prescribers and patients alike can be educated and informed. In reality, they are chaotic and not very informative. They often discuss trials, but give no reference to enable you to find out more, or even to work out which trial they’re discussing. Sometimes the basic elements of a trial are so bizarrely different in the regulator document and the published paper that it’s hard to match them up even if you try, and even if the trial has been published. What’s more, most labels feature long lists of hundreds of side effects, with poor information as to how common they are, even though most of them are very rare, and are not even confidently associated with the drug anyway. Too much information, communicated chaotically, is every bit as unhelpful as too little information.

  Some US researchers have been campaigning for over a decade to add a simple ‘drug facts box’ to the information given to doctors and patients alongside the rather dense and confusing ‘label’. This box would be a summary document giving clear, quantitative information on the risks and benefits of the drug, using evidence-based strategies for communicating statistical information to lay people. There is randomised controlled trial evidence showing that patients given this drug facts box have better knowledge of the benefits and risks of their drugs.54 The FDA has suggested that it will think about using them. I hope that one day it will, and that it will make these boxes itself.

  So that you can see the difference for yourself, below is the drug facts box for a sleeping pill called ‘Lunesta’.

  This drug facts box is briefer than the official label for the same drug, which appears after it: I think it’s also much more informative. It doesn’t solve all the problems of secrecy, or even all the problems of poor communication. But it does demonstrate very clearly that regulators have neither earned nor respected their special status when it comes to assessing and communicating risk.

  Solutions

  We have established that there are some very serious problems here, both in how we approve drugs, and in how we monitor their safety once they become available. Drugs are approved on weak evidence, showing no benefit over existing treatments, and sometimes no benefit at all. This gives us a market flooded with drugs that aren’t very good. We then fail to collect better evidence on them once they’re available, even when we have legislative power to force companies to do better trials, and even when they’ve promised to do so. Lastly, side-effects data is gathered in a slightly ad hoc fashion, behind closed doors, with secret documents and ‘risk management plans’ that are hidden from doctors and patients for no good reason. The results of this safety monitoring are communicated inconsistently, through mechanisms that are uninformative and are therefore used infrequently, and which are, in any case, vulnerable to spectacular delays imposed by drug companies.

  We could tolerate some of these problems, but enduring all of them at once creates a dangerous situation, in which patients are routinely harmed for lack of knowledge. It wouldn’t matter, for example, that the market is flooded with drugs that are of little benefit, or are worse than their competitors, if doctors and patients knew this, could find out immediately and conveniently which are the best options, and could change their behaviour to reflect that. But this is not possible when we are deprived of existing information on risks and benefits by secretive regulators, or where good-quality trial data is not even collected.

  In my view, fixing this situation requires a significant cultural shift in how we approach new medicines; but before we get to that, there are several small, obvious steps which should go without saying.

  Drug companies should be required to provide data showing how their new drug compares against the best currently available treatment, for every new drug, before it comes onto the market. It’s fine that sometimes drugs will be approved despite showing no benefit over current treatments: if a patient has an idiosyncratic reaction to the common current treatment, it is useful to have other, even inferior, options available in the medical arsenal. But we need to know the relative risks and benefits if we are to make informed decisions.

  Regulators and healthcare funders should use their influence to force companies to produce more informative trials. The German government has led the field here: under a law passed in 2010, an agency called IQWiG examines the evidence for all newly approved drugs, to decide whether they should be paid for by Germany’s healthcare providers. IQWiG has been brave enough to demand good-quality trials, measuring real-world outcomes, and has already refused to approve payments for new drugs where the evidence provided is weak. As a result, companies have delayed marketing new drugs in Germany while they try to produce better evidence that they really do work:55 patients don’t lose out, since there’s no good evidence that these new drugs are useful. Germany is the largest market in Europe, with 80 million patients, and it is not a poor country. If all purchasers around the world held the line, and refused to buy drugs presented with weak evidence, then companies would be forced to produce meaningful trials much more quickly.

  All information about safety and efficacy that passes between regulators and drug companies should be in the public domain, as should all data held by national and international bodies about adverse events on medications, unless there are significant privacy concerns around individual patient records. This has benefits that go beyond immediate transparency. Where there is free access to information about a treatment, we benefit from ‘many eyes’ on the problems around it, analysing them more thoroughly, and from more perspectives. Rosiglitazone, the diabetes drug, was removed from the market because of problems with heart attacks, but those problems weren’t identified and acted on by a regulator: they were spotted by an academic, working on data that was, unusually, made publicly available as the result of a court case. The problems with the pain drug Vioxx were spotted by independent academics outside the regulator. The problems with the diabetes drug benfluorex were spotted, again, by independent academics outside the regulator. Regulators should not be the only people who have access to this data.

  We should aim to create a better market for communicating the risks and benefits of medications. The output of regulators is stuffy, legalistic and impenetrable, and reflects the interests of regulators, not patients or doctors. If all information is freely available, then it can be repurposed by those who have access to it, and précised into better forms. These could be publicly funded and given away, or privately funded and sold, depending on business models.

  This is all simple. But there is a broader issue, one that no government has ever satisfactorily addressed, bubbling under the surface of the culture of medicine: we need more trials. Wherever there is true uncertainty about which treatment is best, we should simply compare them, see which is best at treating a condition, and which has worse side effects.

  This is entirely achievable, and at the end of the next chapter I will outline a proposal for how we can carry out trials cheaply, efficiently and almost universally, wherever there is true uncertainty. It could be used at the point of approval of every new drug, and it could be used throughout all routine treatment.

  But first, we need to see just how rubbish some trials can be.

  4

  Bad Trials

  So far I’ve taken the idea of a clinical trial for granted, as if there was nothing complicated about it: you just take some patients; split them in half; give one treatment to one group, another to the other; and then, a while later, you see if there is any difference in outcomes between your two groups.
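  If it helps to make that design concrete, here is a minimal sketch in Python, using entirely made-up numbers: it randomly splits two hundred simulated patients in half, gives each half a different hypothetical treatment, and compares recovery rates between the two groups. None of the figures come from any real trial; they are placeholders for illustration only.

    import random

    random.seed(1)  # fixed seed so the illustration is reproducible

    # Hypothetical recovery probabilities, invented for this sketch:
    # treatment A helps 60% of patients, treatment B helps 45%.
    TRUE_RATE_A, TRUE_RATE_B = 0.60, 0.45

    patients = list(range(200))
    random.shuffle(patients)        # randomise the allocation
    group_a = patients[:100]        # half get treatment A
    group_b = patients[100:]        # the other half get treatment B

    # A while later, count outcomes: True = recovered
    recovered_a = sum(random.random() < TRUE_RATE_A for _ in group_a)
    recovered_b = sum(random.random() < TRUE_RATE_B for _ in group_b)

    print(f"Treatment A: {recovered_a}/100 recovered")
    print(f"Treatment B: {recovered_b}/100 recovered")

  The randomisation step is the whole point: because chance alone decides who gets which treatment, any difference in outcomes beyond random noise can be attributed to the treatments themselves.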

  We’re about to see the many different ways in which trials can be fundamentally flawed, in both their design and their analysis, in ways that exaggerate benefits and underplay harms. Some of these quirks and distortions are straightforward outrages: fraud, for example, is unforgivable, and dishonest. But some of them, as we will see, are grey areas. There can be close calls in hard situations, to save money or to get a faster result, and we can only judge each trial on its own merits. But it is clear, I think, that in many cases corners are cut because of perverse incentives.

  We should also remember that many bad trials (including some of the ones discussed in the pages to follow) are conducted by independent academics. In fact, overall, as the industry is keen to point out, where people have compared the methods of independently sponsored trials against industry-sponsored ones, industry-sponsored trials often come out better. This may well be true, but it is almost irrelevant, for one simple reason: independent academics are bit players in this domain. Ninety per cent of published clinical trials are sponsored by the pharmaceutical industry. They dominate this field, they set the tone, and they create the norms.

  Lastly, before we get to the meat, here is a note of caution. Some of what follows is tough: it’s difficult science that anyone can understand, but some examples will take more mental horsepower than others. For the complicated ones I’ve added a brief summary at the beginning, and then the full story. If you find it hard going, you could skip the details and take the summaries on trust. I won’t be offended, and the final chapter of the book – on dodgy marketing – is filled with horrors that you mustn’t miss.

  To the bad trials.

  Outright fraud

  Fraud is an insult. In the rest of this chapter we will see wily tricks, close calls, and elegant mischief at the margins of acceptability. But fraud disappoints me the most, because there’s nothing clever about it: nothing methodologically sophisticated, no plausible deniability, and no argument about whether it breaks the data. Somebody just made the results up, and that’s that. Delete, ignore, start again.

  So it’s lucky – for me and for patients – that fraud is also fairly rare, as far as anyone can tell. The best current estimate of its prevalence comes from a systematic review in 2009, bringing together survey data from twenty-one studies that asked researchers from all areas of science about malpractice. Unsurprisingly, people give different responses to questions about fraud depending on how you ask them. Two per cent admitted to having fabricated, falsified or modified data at least once, but this rose to 14 per cent when they were asked about the behaviour of colleagues. A third admitted other questionable research practices, and this rose to 70 per cent, again, when they were asked about colleagues.

  We can explain at least part of the disparity between the ‘myself’ and ‘others’ figures by the fact that you are one person, whereas you know lots of people, but since these are sensitive issues, it’s probably safe to assume that all responses are an underestimate. It’s also fair to say that sciences like medicine or psychology lend themselves to fabrication, because so many factors can vary between studies, meaning that picture-perfect replication is rare, and as a result nobody will be very suspicious if your results conflict with someone else’s. In an area of science where the results of experiments are more straightforwardly ‘yes/no’, failed replication would expose a fraudster much more quickly.

  All fields are vulnerable to selective reporting, however, and some very famous scientists have manipulated their results in this way. The American physicist Robert Millikan won a Nobel Prize in 1923 after demonstrating with his oil-drop experiment that electricity comes in discrete units – electrons. Millikan was mid-career (the peak period for fraud) and fairly unknown. In his famous paper in Physical Review he wrote: ‘This is not a selected group of drops, but represents all of the drops experimented on during sixty consecutive days.’ That claim was entirely untrue: the paper reported fifty-eight droplets, but his notebooks record 175, annotated with phrases like ‘publish this beautiful one’ and ‘agreement poor, will not work out’. A debate has raged in the scientific literature for many years over whether this constitutes fraud, and to an extent, Millikan was lucky that his results could be replicated. But in any case, his selective reporting – and his misleading description of it – lies on a continuum with all sorts of research activity that can feel perfectly innocent, if it’s not closely explored. What should a researcher do with the outliers on a graph that is otherwise beautifully regular? When they drop something on the floor? When the run on the machine was probably contaminated? For this reason, many experiments have clear rules about excluding data.

  Then there is outright fabrication. Dr Scott Reuben was an American anaesthetist working on pain, who simply never conducted at least twenty of the clinical trials published under his name over the previous decade.1 In some cases he didn’t even pretend to get approval for conducting studies on patients in his institution, and simply presented the results of trials that were conjured out of nothing. Data in medicine, as we should keep remembering, is not abstract or academic. Reuben claimed to have found that non-opiate medications were as effective as opiates for the management of pain after surgical operations. This pleased everyone, as opiates are generally addictive, and have more side effects. Practice in many places was changed, and now that field is in turmoil. Of all the corners of medicine where you could perpetrate fraud, and change the decisions that doctors and patients make together, pain is one area that really matters.

  There are various ways that fraudsters can be caught, but constant vigilant monitoring by the medical and academic establishment is not one of them, as that doesn’t happen to any sufficient extent. Often detection is opportunistic, accidental, or the result of local suspicions. Malcolm Pearce, for example, was a British obstetric surgeon who published a case report claiming he had reimplanted an ectopic pregnancy, and furthermore that this had resulted in the successful delivery of a healthy baby. An anaesthetist and a theatre technician in his hospital thought this was unlikely, as they’d have heard if such a remarkable thing had happened; so they checked, found no record of any such event, and things collapsed from there.2 Notably, in the same issue of the same journal, Pearce had also published a paper reporting a trial of two hundred women with polycystic ovary syndrome whom he had treated for recurrent miscarriage. The trial never happened: not only had Pearce invented the patients and the results, he had even concocted a name for the sponsoring drug company – a company that never existed. In the era of Google, that lie might not survive for very long.

 
