Bad Pharma


by Ben Goldacre


  In almost every developed country in the world, medicine is provided free at the point of access, by the community, funded through taxation. From the perspective of the community, this whole process could be regarded as a simple bargain: we provide medicines free at the point of access; in exchange, you need to let us find out what works best for you and others. The NHS could be in a constant cycle of testing and learning, improving its performance, and improving outcomes for everyone in the country, and everyone in the world, by creating better knowledge on what works.

  It’s a quirk of history that has been largely lost to most doctors and academics, but this was essentially the dynamic around the first ever truly modern randomised trial. In 1946 the antibiotic streptomycin had just been discovered, and after huge effort, 50kg was produced for the UK. It was hoped that this drug could be used to treat tuberculosis, but it was incredibly expensive, and we needed to find out if it actually worked. Patients with TB meningitis, in the brain, weren’t a problem: they would die in front of you, and fast, almost every time; so if any of these patients survived after being given streptomycin, you knew the drug was probably effective. For pulmonary TB, in the lungs, the story was more complicated: people would often recover by themselves over time, without any medication, so it would be harder to tell if the drug really had improved their chances, or hastened their recovery.

  In the US this drug was available on the open market, at huge prices. If you wanted to try it, you simply bought some, took it, and hoped for the best. But the UK Medical Research Council was in sole charge of our 50kg supply, and decided it was going to use this expensive new drug efficiently, in a randomised trial, to find out whether it really did make any difference to survival (and also whether it caused any unpredictable side effects). Doctors weren’t pleased, but in the immediate post-war environment, with rationing still commonplace, the notion of central control for the sake of the greater good was not so unusual. The first proper modern randomised trial went ahead, and the whole world’s understanding of streptomycin’s effectiveness was generated, essentially because the MRC forced our hand.

  If that whole story sounds uncomfortably Stalinist, then I apologise, but you may have misunderstood. I’m not proposing that we coerce every patient to participate in a trial, wherever there is uncertainty about which is the best treatment for them, by exploiting the opportunity that the state has to ration supply: I’m simply suggesting that trials should be routinely embedded in all clinical practice, as the norm, as an everyday act. If people really want to opt out, and take drugs of unknown effectiveness without generating any new knowledge, then of course I accept their desire to be antisocial for no personal gain.

  But this is a need that becomes more pressing by the day. Health care is cripplingly expensive, trials are the best tool we have to make our treatment decisions more cost-effective, and they can be run on many of the most important questions in medicine at very little cost, inflicting no harm at all on participants. Irrational prescribing costs lives, and it costs money; the cost of research to prevent it is trivial in comparison, and large, simple, routine trials could swamp the bad evidence that has polluted medical practice within just a few years. Efforts to run trials at almost no cost, within routinely collected electronic health records, are just one example of how this could be done.

  Instead, we have occasional, small, brief trials, in unrepresentative populations, testing irrelevant comparisons, measuring irrelevant outcomes, with whole trials that go missing, avoidable design flaws, and endless reporting biases that only persist because research is conducted chaotically, for commercial gain, in needlessly expensive trials. The poor-quality evidence created by this system harms patients around the world.

  And if we wanted, we could fix it.

  6

  Marketing

  So far, we have established that the evidence gathered to guide treatment choices in medicine suffers from a huge number of avoidable biases and problems. But that is only part of the story: this poorly collected evidence is then disseminated, and implemented, through chaotic and biased systems, which adds a whole extra layer of exaggeration and error.

  To understand what is happening here, we need to ask a simple question: how do doctors decide what to prescribe? This is a surprisingly complicated issue, and to feel our way through it we need to think about the four main players exerting pressure: the patient; the funder (which in the UK means the NHS); the doctor; and the drug company.

  For patients, things are simple: you want a doctor to prescribe the best treatment for your medical problem. Or rather, you want the treatment that has been shown, overall, in fair tests, to be better than all the others. You will probably trust your doctor to make this decision, and hope that there are systems in place to ensure that it is done properly, because getting involved in every single decision yourself would be enormously time-consuming.

  That’s not to say that patients are locked out, either by tradition or by design. It’s true that it’s rare for patients to make decisions about which treatment is best entirely on their own, by reading the primary research literature and spotting the strengths and flaws in each trial for themselves. I feel bad about that, and wish that this book could teach you everything you need to know, but the reality is that medical decision-making requires a lot of specialist knowledge and skills, which take time and practice to acquire to a safe level of competence, and there is a serious risk of people making very bad decisions when it’s not done well.

  That said, doctors and patients do make decisions together all the time, when medical practice is at its best, in discussions where doctors act as a kind of personal shopper, eliciting the outcomes a patient is most interested in achieving, and communicating the best existing evidence clearly, to allow an informed decision. Some patients might want a longer life at any cost, for example, while others might hate the hassle of taking a pill twice a day, and prefer to tolerate a greater risk of a bad long-term outcome. We will discuss how this can best be done later, but for now we will settle on the fact that in most cases, patients just want the best treatment.

  Our next players are the funders, and for them, the answer is also fairly simple: they want the same thing as the patient, unless it’s insanely expensive. For common drugs, and common decisions, they might have a set ‘pathway’ that dictates to GPs (more commonly than to hospital doctors) which drug is to be used, but outside those simple rules for simple situations, they rely on doctors’ judgements.

  Now we come to our core player in the individual treatment decision: doctors. They need good-quality information, but they need it, crucially, under their noses. The problem of the modern world is not information poverty, after all, but information overload, and even more precisely, what Clay Shirky calls ‘filter failure’. As recently as the 1950s, remember, medicine was driven almost entirely by anecdote and eminence; in fact, it’s only in the past couple of generations that we have collected good-quality evidence at all, in large amounts, and for all the failures in our current systems, we suddenly now have an overwhelming avalanche of data. The exciting future, for evidence-based medicine, is an information architecture that can get the right evidence to the right doctor at the right time.

  Does this happen? The simple answer is no. Although there are many automated systems for disseminating knowledge, for the most part we continue to rely on systems that have evolved over centuries, like the long, meandering essays in academic journals that are still used to report the results of clinical trials. Often, if you ask a doctor whether they know if one particular treatment is best for a particular medical condition, they’ll tell you they certainly do, and name it. But if you ask them how they know it is the best, their answer might scare you.

  They might say: that’s what I learnt at medical school; that’s what the person in the office next door told me she uses; that’s what I see the local consultant prescribing in his letters on patients I’ve referred; that’s what the local drug rep told me; that’s what I picked up on a teaching day two years ago; that’s what I think I read in a review article somewhere; that’s what I remember from some guidelines I looked up once; that’s what the local prescribing guidelines recommend; that’s what a trial I read said; that’s what I’ve always used; and so on.

  In reality, doctors can’t read every scientific article that’s relevant to their work, and that’s not just my opinion, or even a moan about my own reading pile. There are tens of thousands of academic journals, and millions of academic medical papers in existence, with more produced every day. One recent study tried to estimate how long it would take to keep up with all this information.1 The researchers collected every academic paper published in a single month that was relevant to general practice. Taking just a few minutes for each one, they estimated it would take a doctor six hundred hours to skim through them all. That’s about twenty-nine hours each weekday, which is, of course, not possible.

  So doctors will not be going through every trial, about every treatment relevant to their field, meticulously checking each one for the methodological tricks described in this book, diligently keeping their knowledge perfectly current. They will take shortcuts, and these shortcuts can be exploited.

  To see how bad doctors are at prescribing efficiently, we can look at national prescribing patterns. The NHS spends £9 billion a year on drugs. You know by now that many of the drugs on the market are ‘me-too’ drugs, which are no better than the drugs they copy, and that often the branded ‘me-too’ drugs could be replaced with equally effective drugs from the same class which are old enough to have come out of patent.

  In 2010 a team of academics analysed the top ten most highly prescribed classes of drugs in the NHS, and calculated that at least £1 billion is wasted, every year, from doctors using branded me-too drugs in a situation where there was an equally effective off-patent drug available.2

  For example: atorvastatin and simvastatin are both equally effective, as far as we currently know (we keep returning to statins, because so many people take these drugs), and simvastatin came off patent six years ago. So you would expect that everyone should be taking simvastatin, instead of atorvastatin, unless there’s a very good idiosyncratic reason to choose the unnecessarily expensive one in a specific patient. But even in 2009 there were still three million prescriptions a year for atorvastatin, not much down from the six million in 2006: this cost the NHS an unnecessary £165 million a year. And all those prescriptions for atorvastatin were despite major national programmes to try and get doctors switching.

  The same pattern can be seen across the board. Losartan is an ‘ARB’-type blood-pressure drug: there are lots of me-too drugs in this class, and because high blood pressure is so common, this class of medicines is the fourth most expensive for the NHS. In 2010, losartan came off patent: it is clinically almost indistinguishable from other ARB drugs, so you would expect the NHS to have switched everyone onto it, ready for the big price drop. But even after the price drop came, only 0.3 million of the 1.6 million people taking an ARB were on losartan, so the NHS lost £200 million a year.

  If we can’t manage rational prescribing decisions even for these incredibly common medicines, then that is good evidence that prescription is a haphazard affair, where clear information is not efficiently disseminated to the people making the decisions, on either effectiveness or cost-effectiveness. I can honestly say, if I were in charge of the medical research budgets, I would cancel all primary research for a year, and only fund projects devising new ways to optimise our methods for disseminating information, ensuring that the evidence we already have is summarised, targeted and implemented. But I am not in charge, and there are some much more powerful influences out there.

  Now let’s think through a doctor’s prescribing decision from the perspective of a drug company. You want the doctor to prescribe your product, and you will do everything you can to make that happen. You might dress this up as ‘raising awareness of our product’, or ‘helping doctors make decisions’, but the reality is, you want sales. So you will advertise your new treatment in medical journals, stating the benefits but downplaying the risks, and leaning away from unflattering comparisons. You will send out ‘drug reps’ to meet doctors individually, and talk up the merits of your treatment. They will offer gifts and lunches, and forge personal relationships that may be mutually beneficial later.

  But it goes deeper than this. Doctors need ongoing education: they practise for decades after they leave medical school, and looking back from today, medicine has changed unrecognisably since, say, the 1970s, which is when many currently practising doctors came out of medical training. This education is expensive, and the state is unwilling to pay, so it is drug companies that pay for talks, tutorials, teaching materials, conference sessions, and whole conferences, featuring experts who they know prefer their drug.

  All of this is built on the back of a published academic evidence base that drug companies have carefully nurtured, through selective publication of positive results and judicious use of design flaws, to give a flattering picture of their product. But those aren’t the only tools available to companies for influencing what appears in journals. They pay professional writers to produce academic papers, following their own commercial specifications, and then get academics to put their names to them. This acts as covert advertising, and will get more academic publications on their drug, more rapidly. It also aggrandises the favoured experts’ CVs, and helps doctors friendly to the company get the kudos and veneer of independence that comes from a university post.

  The company can also give money to patient groups, if those groups’ views and values help it sell more drugs, and so give them greater prominence, power and platform. On top of all this, it can then pay academic journals to accept papers, with advertising revenue and ‘reprint’ orders, and with these academic papers it can foreground the evidence showing that its treatment works, and even expand the market for its drug, by producing work that helpfully shows that the problem it treats is actually much more widespread than people realise.

  All of this sounds very expensive, and it is: in fact, the pharmaceutical industry overall spends about twice as much on marketing and promotion as it does on research and development. At first glance, this seems extraordinary, and it’s worth mulling over in various contexts. For example, when a drug company refuses to let a developing country have affordable access to a new AIDS drug, it’s because – the company says – it needs the money from sales to fund research and development on other new AIDS drugs for the future. If R&D is a fraction of the company’s outgoings, and it spends twice as much on promotion, this moral and practical argument fails to hold water.

  The scale of this spend is fascinating in itself, when you put it in the context of what we all expect from evidence-based medicine, which is that people will simply use the best treatment for the patient. Because when you pull away from the industry’s carefully fostered belief that this marketing activity is all completely normal, and stop thinking of drugs as being a consumer product like clothes or cosmetics, you suddenly realise that medicines marketing only exists for one reason. In medicine, brand identities are irrelevant, and there’s a factual, objective answer to whether one drug is the most likely to improve a patient’s pain, suffering and longevity. Marketing, therefore, exists for no reason other than to pervert evidence-based decision-making in medicine.

  This is a very powerful machine: tens of billions of pounds are spent each year, $60 billion in the US alone, on medicines marketing.3 And, most impressively, this money isn’t plucked from the air: it is paid for by patients, either through the public purse or through their payments into medical insurance companies. About a quarter of the money taken by pharmaceutical companies for the drugs they sell is turned around into promotional activity which has, as we will see, a provable impact on doctors’ prescribing. So we pay for products, with a huge uplift in price to cover their marketing budget, and that money is then spent on distorting evidence-based practice, which in turn makes our decisions unnecessarily expensive, and less effective.

  All of this comes on top of a system for evidence-based medicine that is already gravely wounded, with poor-quality trials that are poorly communicated to doctors at the best of times.

  It’s magnificent. Now, on to the details.

  Adverts to patients

  It is doctors who make the final decision about signing a prescription, but in reality the decision on which treatment to choose – and whether to bother with treatment at all – is made between them and their patients. This is entirely how you would want things to be; but it does make patients another lever to be leaned on, by an industry keen to increase sales.

  We will see in this chapter that the techniques used by drug companies to do this are many and varied: the invention of whole new diseases and explanatory models; funding patient groups; running star patients who fight (with professional PR assistance) against governments that have refused them expensive drugs; and more. But we will start with advertising, because there is an ongoing battle to bring it to the UK, and compared to the more covert strategies, it seems positively transparent.

  Direct-to-consumer drug advertising has been banned in almost all industrialised countries since the 1940s, for the simple reason that it works: adverts distort doctors’ prescribing behaviour – by design – and increase costs unnecessarily. The USA and New Zealand changed their minds in the early 1980s, and permitted a resurgence of this open marketing. That doesn’t, however, mean that these ads are someone else’s problem. There is a constant battle to open up new territories, and these adverts leak through national borders in the age of the internet; but more than anything, they expose some clear truths about the industry’s thinking.

 
