Bad Science


by Ben Goldacre


  All this is made possible in Britain because of the General Practice Research Database, or GPRD, which has been running for many years. This contains anonymised medical records of several million patients from participating GPs’ surgeries, and is already widely used to do the kinds of side-effects monitoring studies I discussed earlier: in fact, this database is currently owned and run by the MHRA itself. So far, however, it has only been used for observational research, rather than randomised trials: people’s prescriptions and medical conditions are monitored, and analysed in bulk, in the hope that we can spot patterns. This can be helpful, and has been used to generate useful information about several medicines, but it can also be very misleading, especially when you try to compare the benefits of different treatment options.

  This is because, often, the people given one treatment aren’t quite the same as the people given another, even though you think they are. There can be odd, unpredictable reasons why some patients are prescribed one drug, and some another, and it’s very hard to work out what these reasons are, or to account for them after the fact, when you’re analysing data you’ve collected from routine medical practice in the real world.

  For example, maybe people in a posh area are more likely to be prescribed the more expensive of two similar drugs, because budgets in that clinic are less pressed, and the expensive one is more heavily marketed. If so, then even though the expensive drug is no better than a cheaper alternative, it would appear superior in the observational data, because wealthy people, overall, are healthier. This effect can also make drugs look worse than they really are. Many people have mild kidney problems, for example, which grumble along in the background alongside their other medical problems; these cause no specific health issues, but their doctor is aware, from blood tests, that their kidneys are no longer clearing things from their bloodstream quite as efficiently as they do for the healthiest people in the population. When these patients are being treated for depression, say, or high blood pressure, maybe they will be put on a drug that is regarded as having a better safety profile, just to be on the safe side, on account of their mild kidney problems. In this case, that drug will look much less effective than it really is, when you follow up the patients’ outcomes, because many of the people receiving it were sicker to start with: the patients with minor things, like mild kidney problems, were actively channelled onto the drug believed to be safest.
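  To make the channelling effect concrete, here is a minimal simulation sketch. It is not from the book: the drug names, the probabilities and the helper functions are all hypothetical, chosen purely for illustration. In the model the two drugs are exactly equally effective, but because frailer patients are preferentially given Drug A, Drug A comes out looking worse in the raw observational comparison.

```python
import random

# Hypothetical illustration of confounding by indication: two equally
# effective drugs, but doctors preferentially give Drug A ("better safety
# profile") to frailer patients, e.g. those with mild kidney problems.
random.seed(0)

def assign_drug(frail: bool) -> str:
    # Frail patients are channelled onto Drug A 80% of the time;
    # everyone else is split roughly 50/50.
    if frail:
        return "A" if random.random() < 0.8 else "B"
    return "A" if random.random() < 0.5 else "B"

def simulate_patient():
    frail = random.random() < 0.3            # 30% of patients are frailer
    drug = assign_drug(frail)
    # The outcome depends only on frailty, never on which drug was given.
    recovered = random.random() < (0.6 if frail else 0.9)
    return drug, recovered

counts = {"A": [0, 0], "B": [0, 0]}          # [recoveries, patients] per drug
for _ in range(100_000):
    drug, recovered = simulate_patient()
    counts[drug][0] += recovered
    counts[drug][1] += 1

for drug, (ok, n) in counts.items():
    print(f"Drug {drug}: {ok / n:.1%} recovery in {n} patients")
# Typical output: Drug A shows roughly 78% recovery and Drug B roughly 86%,
# a spurious gap, since the drugs are identical in this model.
```

  Randomising who gets which drug removes the gap, because chance rather than frailty then decides the allocation; that is the whole argument for the trials described below.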

  Even when you know these things are happening, it’s hard to account for them in your analysis; but often there are gremlins distorting your findings, and you don’t even realise they’re there. Sometimes this has led to serious problems: hormone replacement therapy is just one memorable case of people being misled, by trusting ‘observational’ data, instead of doing a trial.

  HRT is a reasonably safe and effective short-term treatment to reduce the unpleasant symptoms that some women experience while going through the menopause. But it has also been prescribed much more freely to patients, some of whom have received it for many years on end, for reasons that border on the aesthetic: HRT was regarded as a way to cheat ageing, and it maintained various features of a younger body, in a way that was desirable for many women. But this wasn’t the only reason that doctors gave long-term prescriptions for these drugs. By observing the health records of older women, researchers were able to spot what they believed was a reassuring pattern: women who take HRT for many years live longer, healthier lives. This was very exciting news, and it helped to justify the long-term prescription of HRT to an even greater extent. Nobody had ever done a randomised trial – randomly assigning women either to receive HRT or to receive normal management without HRT. Instead, the results of the ‘observational’ studies were simply taken at face value.

  When a randomised trial was finally done, it revealed a terrible surprise. Far from protecting you, HRT in fact increases your chances of various heart problems. It had only appeared to be beneficial because overall the women requesting HRT from their doctors were likely to be wealthy, vivacious, active, and many of the other things we know are already associated with living longer. We weren’t comparing like with like, and because we accepted observational data uncritically, and failed to do a randomised trial, we continued to prescribe a treatment that exposed women to risks nobody knew about. Even if we accept that some women might have chosen to risk their lives for the other benefits of long-term HRT, all women were deprived of this choice by our failure to conduct fair tests.

  This is why we need to do randomised trials wherever there is genuine uncertainty as to which drug is best for patients: because if we want to make a fair comparison of two different treatments, we need to be sure that the people getting them are absolutely identical. But randomly assigning real-world patients to receive one of two different treatments, even when you have no idea which is best, attracts all kinds of worried attention.

  This is best illustrated by a bizarre paradox which currently exists in the regulation of everyday medical practice. When there is no evidence to guide treatment decisions, out of two available options, a doctor can choose either one arbitrarily, on a whim. When you do this there are no special safeguards, beyond the frankly rather low bar set by the GMC for all medical work. If, however, you decide to randomly assign your patients to one treatment or another, in the same situation, where nobody has any idea which treatment is best, then suddenly a world of red tape emerges. The doctor who tries to generate new knowledge, improve treatments and reduce suffering, at no extra risk to the patient, is subject to an infinitely greater level of regulatory scrutiny and oversight; but above all, that doctor is also subject to a mountain of paperwork, which slows the process to the point that research simply isn’t practical, and so patients suffer, through the absence of evidence.

  The harm done by these disproportionate delays and obstructions is well illustrated by two trials, both conducted in A&E departments in the UK. For many years it was common to treat patients who’d had a head injury with a steroid injection. This made perfect sense in principle: after a head injury, your brain swells up, and since the skull is a box with a fixed volume, any swelling in there will crush the brain. Steroids are known to reduce swelling, and this is why we inject them into knees, and so on: so giving them to people with head injuries should, in theory, prevent the brain from being crushed. Some doctors gave steroids on the basis of this belief, and some didn’t. Nobody knew who was right. People on both sides were pretty convinced that the people on the other side were dangerously mad.

  The CRASH trial was designed to resolve this uncertainty: patients with serious head injury would be randomised, while still unconscious, to receive either steroids or no-steroids, and the researchers would follow them up to see how they got on.2 This created huge battles with ethics committees, which didn’t like the idea of randomising patients who were unconscious, even though they were being randomly assigned to two treatments, both in widespread use throughout the UK, where we had no idea whatsoever which was better. Nobody was losing out by being in the trial, but the patients of the future were being harmed with every day this trial was delayed.

  When the trial was finally approved and conducted, it turned out that steroids were harming patients, and in large numbers: a quarter of the people with serious head injuries died whichever treatment they received, but there were two and a half extra deaths for every hundred people treated with steroids. Our delay in discovering this fact led to the unnecessary and avoidable deaths of very large numbers of people, and the authors of the study were absolutely clear who should take responsibility for this: ‘The lethal effects we have shown might have been found decades ago had the research ethics community accepted a responsibility to provide robust evidence that its prescriptions are likely to do more good than harm.’

  But this wasn’t the only harm done. Many trial centres insisted on delaying treatment, in order to get written consent to participation in the trial from a relative of the unconscious patient. This written consent would not have been necessary to receive steroids, if you happened to be treated by a doctor who was a believer in them; nor would you have needed written consent to not receive steroids from a doctor who wasn’t a believer. It was only an issue because patients were being randomised to one treatment or the other, and ethics committees choose to introduce greater barriers when that happens, even though the treatments patients are randomised to are the exact ones they would have got anyway. In the treatment centres where the local regulators insisted on family consent to randomisation, it delayed treatment with steroids by 1.2 hours on average. This delay, to my mind, is disproportionate and unnecessary: but as it happened, in this case, it did no harm, because steroids don’t save lives (in fact, as we now know, they kill people).

  In other studies, such a delay would cost lives. For example, the CRASH-2 trial was a follow-up piece of research, conducted in A&E departments by the same team. This study looked at whether trauma patients with severe bleeding are less likely to die if they’re given a drug called tranexamic acid, which improves clotting. Since these patients are bleeding to death, there is a degree of urgency about getting them treated. Of course, all patients were given all the usual treatment you would expect them to get; the only extra feature of their management, determined by the trial, was whether they were randomly assigned to get tranexamic acid on top of normal management, or not.

  The trial found that tranexamic acid is hugely beneficial, and saves lives. But again, some sites delayed giving it, while they tried to contact relatives and get consent for randomisation. A one-hour delay in giving tranexamic acid reduces the proportion of patients helped from 63 per cent to 49 per cent, so patients in the trial were directly harmed by a delay introduced to get consent for randomisation between two options, when nobody knew which was better, and when patients throughout the UK were liable to get one or the other on almost entirely arbitrary grounds anyway.

  This, once again, is something I would regard as disproportionate – and disproportionate is exactly the correct word. It is vitally important that the rights of patients are protected, and that they are not subjected to dangerous treatments in the name of research. Where trials are examining the effects of new, highly experimental treatments, it’s absolutely right that there should be an enormous amount of regulatory oversight, and a wealth of information communicated clearly and compulsorily to the patient. But when someone is in a trial comparing two currently used treatments, both of which are believed to be equally safe and effective, where randomisation adds no extra risk, the situation is very different.

  This is the situation for our trial in GPs’ practices comparing two statins: in routine everyday practice in the UK, patients are sometimes given atorvastatin, and sometimes simvastatin. No doctor alive knows which is better, because there is absolutely no evidence comparing the two, on real-world outcomes like heart attack and death. When doctors make their arbitrary ‘choice’ of which to give, on the basis of no evidence, nobody is interested in regulating that, so there is no special process, and no form to complete explaining that there is no evidence for the decision. It seems to me that this everyday doctor, blithely giving one or other treatment in the absence of evidence, who makes no attempt to improve our understanding of which treatment is best, is committing something of an ethical crime, for the simple reason that they are perpetuating our ignorance. That doctor is exposing large numbers of future patients around the world to avoidable harm, and misleading their current patient about what we know of the benefits and risks of the treatments they are giving, with their fake certainty, or at best their failure to be honest about our uncertainty, and for no discernible benefit. But there is no special ethics committee oversight of that doctor’s activity.

  Meanwhile, when a patient is randomly assigned to one or other statin in our trial, suddenly this becomes a major ethical issue: the patient must fill out several pages of paperwork, over the course of twenty minutes, explaining that they understand all of the risks of the treatment they are being given, and that they are in a trial. They have to do this, even though no extra risk is introduced during the course of the trial; even though they were going to get one or other statin anyway; even though the trial imposes no extra burden on their time; and even though their medical records are already in the GP Research Database, and so are monitored for observational research regardless of their participation in the trial. These two statins are already used by millions of people around the world, and have been shown to be safe and effective: the only question for the trial is which is better. If there actually is a difference between them, huge numbers of people will be dying unnecessarily while we don’t know.

  The twenty-minute delay introduced by the consent form for this trial is interesting, because it’s not simply an inconvenience. Firstly, it may not even address the concerns the ethicists have in mind: these committees and experts are keen to tell everyone that their restrictions are necessary, but they have collectively failed to produce research demonstrating the value of the interventions they force researchers to comply with; and in some cases, what little evidence we do have suggests that their interventions may even have the opposite effect to what they intend. The only research into what patients remember from consent forms, for example, shows that people remember more information, in total, from short forms than they do from long twenty-minute ones.3

  But more than that, a twenty-minute consent process, to receive a drug you were going to get anyway, threatens the whole purpose of the trial, which is to try to randomise patients as seamlessly and unobtrusively as possible, in routine clinical care. It doesn’t just make simple pragmatic trials slower and more expensive; it also makes them less representative of normal practice. When you introduce a twenty-minute consent process to receive a statin the patient was going to get anyway, the doctors and patients being recruited aren’t normal doctors and patients, but the unrepresentative ones willing to stop what they’re doing and spend twenty minutes going through a form together.

  This isn’t a problem for the pragmatic statins trial I have described, because the purpose of that trial isn’t really to find out which statin is better. In reality, it’s about the process, and its aim is to answer a much more fundamental and important question: can we randomise patients in routine care, cheaply and seamlessly? If we cannot, then we need to find out why not, and ask whether the barriers are proportionate, and whether they can be safely overcome. Ethicists appear to argue that the twenty-minute consent process is so valuable that we are better off letting patients die while we continue to practise in ignorance.

  I’m not simply saying that I disagree with this; I’m saying that I think the public deserve a say in whether they agree, through an informed, open debate.

  But more than that, I worry that these regulations express an implicit fantasy about normal clinical practice, which has never been adequately challenged: that of spurious over-certainty. Perhaps if all doctors were forced to admit to the uncertainties in our day-to-day management of patients, it might make us a little more humble, and more inclined to improve the evidence base on which we base our decisions. Perhaps if we honestly told patients, ‘I don’t know which of these two treatments is best,’ whenever that was the case, patients would start to ask questions themselves. ‘Why not?’ might be the first, and ‘Why don’t you try to find out?’ might follow shortly afterwards.

  Some patients will prefer to avoid randomisation, for the illusion of certainty, and the fantasy that their doctor has been able to make a tailored decision about which statin, or any other drug, is best for them. But I think we should be able to offer everyone the chance to be randomised wherever there is true uncertainty about which is the better of two widely used treatments that are already known to be safe and effective. I think this should be done on the basis of a brief consent form, no more than a hundred words, with the option to access more detailed explanatory material for anyone who wants it. And I think research ethicists should be asked to provide evidence that the harm they are inflicting on patients around the world by imposing inflexible rules, such as a twenty-minute consent process, is proportional to whatever benefit they believe they confer.

  More than this, I think we need a cultural shift in the way we all, as patients, view our reciprocal relationship with research in medicine. We only know what works because of trials, and we all profit from the participation of patients before us in these trials; but many of us seem to have forgotten this. By remembering, we could create a social contract whereby everyone expects their health service to be constantly conducting trials, simple A/B tests, comparing treatments against each other to see which is the best, or even the cheapest, if they’re both equally effective. A doctor failing to take part in such tests could be regarded as an oddity who is harming future patients. It could be obvious to all patients that participating in these trials is a normal reflection of the need to produce better evidence to improve medical treatments, for themselves in future, and for the others in the community with whom they share their medical system.

 
