Deadly Medicines and Organised Crime


by Peter Gotzsche



  4

  Very few patients benefit from the drugs they take

  I am sure this statement will surprise many patients who faithfully take their drugs every day, and I shall therefore explain in some detail why it is correct, using depression as an example.

  If we treat patients with depression in primary care with an antidepressant drug for 6 weeks, about 60% of them will improve.1 This seems like a good effect. However, if we treat the patients with a blinded placebo that looks just the same as the active pill, 50% of them will improve. Most doctors interpret this as a large placebo effect, but it isn’t possible to interpret the result in this way. If we don’t treat the patients at all, but just see them again after 6 weeks, many of them will also have improved. We call this the spontaneous remission of the disease or its natural course.

  It is important to be aware of these issues. At my centre, we do research on antidepressant drugs, and I have often explained to the media that most patients don’t benefit from their treatment. Leading psychiatrists have counter-argued that, although the effect is modest, the patients will benefit from what they erroneously call the ‘placebo effect’, which they exaggerated to about 70%.

  Thus, there are three main reasons why a patient may feel better after having been treated with a drug: the drug effect, the placebo effect and the natural course of the disease. If we wish to study the effect of giving patients placebo, we will need to look at trials where some of the patients are randomised to placebo and others to no treatment. One of my co-workers, Asbjørn Hróbjartsson, identified 130 such trials in 2001, most of which had a third group of patients that received an active intervention, often similar in appearance to the placebo. Contrary to the prevailing belief that placebos have large effects, we found – much to our surprise – that placebo might have a possible small effect on pain, but we couldn’t exclude the possibility that this result was caused by bias and not by the placebo.2

  The bias we mentioned occurs because it isn’t possible to blind patients to the fact that they don’t get any treatment. These patients may therefore become disappointed and tend to report less improvement than what actually occurred, e.g. in their depression or pain. Conversely, patients on placebo may tend to exaggerate the improvement, particularly in three-armed trials where they don’t know what they get but hope they receive active treatment rather than placebo.

  We have updated our results with recent trials and now have 234 trials investigating 60 different clinical conditions in our Cochrane review.3 We confirmed our original findings that placebo interventions do not seem to have important clinical effects in general and that it is difficult to distinguish a true effect of placebo from biased reporting.

  You may wonder why I tell you so much about the effects of placebos and not of drugs, but that’s because drug effects are determined relative to placebo in placebo-controlled trials. And if the intended blinding is not impeccable, we would expect the reported effect of a drug to be exaggerated when the outcome is subjective, such as general mood or pain.

  So how often is the blinding not working? Quite often, for two reasons. First, trials called double-blind may not have been effectively blinded at the outset. As an example, researchers who performed six double-blind studies of antidepressants or tranquillisers noted that in all cases, the placebo differed from the active drug in physical properties such as texture, colour and thickness.4 Second, even when drug and placebo are indistinguishable in their physical properties, it is usually difficult to maintain the blinding during trial conduct because drugs have side effects, e.g. antidepressant drugs cause dryness of the mouth.

  Because of these inherent problems in testing drugs, the true difference in the improvement rates of 60% and 50% on an antidepressant drug and placebo, respectively, in these trials is likely considerably smaller than 10%. But let’s first assume, for the sake of the argument, that these rates are true and construct a trial with such improvement rates (see Table 4.1). We have randomised 400 patients into two groups, and 121 of 200 patients (60.5%) improved on active drug and 100 of 200 patients (50.0%) on placebo.
Should we then believe that the drug is better than placebo or could the difference we observed have arisen by chance? We may address this question by asking how often we will see a difference of 21 improved patients or more, if we repeat the trial many times, if the truth is that the drug has no effect.

  Table 4.1 Results of a randomised trial that compared an antidepressant drug with placebo

              Improved   Not improved   Total
  Drug           121          79          200
  Placebo        100         100          200

  This is where statistics is so helpful. A statistical test calculates a P value, which is the probability that we will observe a difference of 21 patients or more if the drug doesn’t work. In this case, P = 0.04. The medical literature is full of P values, and the tradition is that if P is less than 0.05, we say that the difference is statistically significant and choose to believe that the difference we found is real. P = 0.04 means that we would only observe a difference of 21 patients or more four times in a hundred if the drug didn’t work and we repeated our trial many times.

  If two fewer patients had improved on active drug, i.e. 119 rather than 121, the difference would still be very much the same, 19 patients instead of 21, but the difference would not have been statistically significant (P = 0.07).
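  These P values are easy to check. The sketch below (my own illustration, not taken from any trial report) reproduces them with a chi-squared test with Yates’ continuity correction, which matches the figures quoted above:

```python
from math import erfc, sqrt

def p_value_yates(improved_drug, improved_placebo, n_per_arm=200):
    """P value from a chi-squared test with Yates' continuity
    correction on a 2x2 table (1 degree of freedom)."""
    a, b = improved_drug, n_per_arm - improved_drug        # drug row
    c, d = improved_placebo, n_per_arm - improved_placebo  # placebo row
    n = a + b + c + d
    chi2 = n * (abs(a * d - b * c) - n / 2) ** 2 / (
        (a + b) * (c + d) * (a + c) * (b + d))
    # For 1 degree of freedom, P(chi-squared > x) = erfc(sqrt(x / 2))
    return erfc(sqrt(chi2 / 2))

print(round(p_value_yates(121, 100), 2))  # 0.04: statistically significant
print(round(p_value_yates(119, 100), 2))  # 0.07: not significant
```

  With 121 improved patients on the drug the difference just clears the conventional 0.05 threshold; with 119 it no longer does, even though the clinical picture is virtually unchanged.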

  What this illustrates is that, quite often, a ‘proof’ that a treatment works hinges on a few patients even though, as in the example, 400 patients were randomised, which is a fairly large trial for depression. It usually doesn’t take much bias to convert a non-significant result into a significant one. Sometimes, investigators or companies reinterpret or reanalyse the data after they have found a P value above 0.05 until they come up with one below 0.05 instead, for example by deciding that a few more patients had improved on active drug, or a few fewer on placebo, or by excluding some of the randomised patients from the analysis.5 This is not an honest approach to science, but as we shall see in Chapters 5 and 9, violations of good scientific practice are very common.

  Apart from such scientific misconduct, insufficient blinding can also make us believe that ineffective drugs are effective. Blinding is not only important when the patients evaluate themselves, but also when their doctors evaluate them. Depression is evaluated on elaborate scales with many subjective items, and it’s clear that knowledge about which treatment the patient receives can influence the doctor’s assessments in a positive direction.

  This was shown convincingly by Hróbjartsson and colleagues in 2012 using trials in a variety of disease areas that had both blinded and nonblinded outcome assessors. A review of 21 such trials, which had mostly used subjective outcomes, found that the effect was exaggerated by 36% on average (measured as odds ratio) when nonblinded observers rather than blinded ones evaluated the effect.6 This is a disturbingly large bias considering that the claimed effect of most of the treatments we use is much less than 36%.

  Thus, a double-blind trial that is not effectively blinded may exaggerate the effect quite substantially. We can try this out on our antidepressant example, assuming for simplicity that the blinding is broken for all patients. To calculate the odds ratio, we rearrange the numbers so that a low odds ratio means a beneficial effect, which is the convention (see Table 4.2). The odds ratio for the significant effect is (79 ∙ 100)/(121 ∙ 100) = 0.65. As we expect this effect to be exaggerated by 36%, we may estimate what the true effect is. A bias of 36% means that the ratio between the biased and the true odds ratio is 0.64. Thus, the true result is 0.65/0.64, or an odds ratio of 1.02. As the odds ratio is now about 1, it means that the antidepressant drug didn’t work.
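  The odds ratio arithmetic can be verified in a few lines (a sketch of the calculation only; the 0.64 correction factor comes from the 36% average exaggeration reported in the review6):

```python
# Table 4.2, arranged so that an odds ratio below 1 favours the drug
drug_not, drug_imp = 79, 121   # drug arm: not improved, improved
plac_not, plac_imp = 100, 100  # placebo arm: not improved, improved

biased_or = (drug_not * plac_imp) / (drug_imp * plac_not)
print(round(biased_or, 2))  # 0.65: looks like a clear benefit

# A 36% exaggeration means biased OR = 0.64 x true OR,
# so dividing by 0.64 gives the bias-corrected estimate
true_or = biased_or / 0.64
print(round(true_or, 2))  # 1.02: about 1, i.e. no effect at all
```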

  Table 4.2 Same results as in Table 4.1, but rearranged

              Not improved   Improved   Total
  Drug             79           121      200
  Placebo         100           100      200

  My example was too simplified, as the blinding is rarely broken for all the patients, but the exercise was nevertheless very sobering. Even if the blinding is broken for only a few patients, it can be enough to render a nonsignificant result significant. In fact, Hróbjartsson and colleagues noted in their review that the 36% exaggeration of treatment effects associated with nonblinded assessors was induced by the misclassification of the trial outcome in a median of only 3% of the assessed patients per trial (corresponding to 12 of the total of 400 patients in the example).

  Thus, it takes very little unblinding to turn a totally ineffective drug into one that seems to be quite effective.

  The importance of this finding for patients cannot be overstated. Most drugs have conspicuous side effects, so there can be no doubt that the blinding is broken for many patients in most placebo-controlled trials. When we use drugs to save people from dying, it doesn’t matter that the blinding is broken, as we can say with certainty whether a patient is alive or not. However, we are rarely in that situation. Most of the time, we use drugs to reduce the patients’ symptoms or to reduce the risk of complications to their disease, and the outcomes are very often subjective, e.g. degree of depression or schizophrenia, anxiety, dementia, pain, quality of life, functional ability (often called activities of daily living), nausea, insomnia, cough and dyspnoea. Even to decide whether a patient has had a heart attack can be rather subjective (see Chapter 5).

  The randomised clinical trial is the most reliable design we have for evaluating treatments. But we have accepted much too readily that what comes out of these experiments should be believed if the trial was blinded and the main result is accompanied by a significant P value.

  What is so disturbing about this is that all drugs cause harms, whereas many of the drugs we use aren’t effective at all. We are therefore harming immense numbers of patients in good faith, as our randomised trials don’t allow us to say which of the drugs don’t work.

  Against this background, it is easy to understand why companies that have shown that their drug works for a disease that the drug was supposed to influence through its mechanism of action can later study the drug in many, completely unrelated diseases and find that their drug also works for these. Unblinding is a major reason why it is so much easier to invent new diseases than to invent new drugs.7,8 It is easy to show some effect on a simple or more elaborate scale that, on top of this, may have little clinical relevance, and then let the marketing machine do the rest.

  An older member of my golf club once told me that he was uncertain whether the pills he took for his dementia had any effect. He wondered whether he should stop taking them and asked for my advice. I rarely give advice to patients, as I am not their doctor, not a specialist in the area in question, and don’t have any knowledge about their medical histories and preferences. He also told me, however, that he was bothered by the drug’s side effects and its high price. Given that the effect of antidementia drugs isn’t impressive and has been established in industry sponsored trials with highly subjective outcomes, and given the many other biases in industry trials, I made an exception. I told him that if I were him, I wouldn’t take the drug. As he was pretty demented, I doubt he followed my advice, which he likely forgot.

  The lack of effective blinding should make doctors much more cautious than they are; they should wait and see, think twice before they prescribe drugs to patients, write in their notes exactly what they want to obtain by using a drug and when, and remember to stop the drug if the goal is not obtained.

  A convenient way to see that few patients will be helped by the drugs we give them – even if we choose to believe the results from trials at face value – is to convert improvement rates into the Number Needed to Treat (NNT). This is the inverse of the risk difference. Thus, if we believe that 60% of patients receiving an antidepressant become better and 50% of those on placebo improve, the NNT is 1/(60% – 50%) = 10.

  This means that for every 10 patients we treat with an antidepressant, only one will achieve any benefit. If we accept that any possible placebo effect is so small that we can disregard it,3 it furthermore means that it made no difference for the other nine patients that they received a drug, apart from its side effects and cost. Even if we don’t
accept the findings that placebos are generally pretty ineffective, it would still be true that very few patients benefit from an antidepressant drug. It is actually much worse than this, not only because of the lack of effective blinding, but also because the 10% difference is derived from industry trials that were carefully designed to recruit those types of patients that are most likely to respond (see Chapter 17).9 In actual practice, the NNT is much higher than 10.
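  As a minimal sketch of this calculation (using the illustrative 60% and 50% improvement rates from above):

```python
def nnt(rate_treated, rate_control):
    """Number Needed to Treat: the inverse of the absolute risk difference."""
    return 1 / (rate_treated - rate_control)

# 60% improve on the antidepressant, 50% on placebo
print(round(nnt(0.60, 0.50)))  # 10: treat ten patients for one to benefit
```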

  If we turn our attention to prophylaxis, i.e. to healthy citizens rather than patients with a disease, the NNT becomes much larger. Statins are very popular drugs, as they lower cholesterol, and a trial from 1994 showed that if patients at very high risk for a coronary attack received simvastatin for 5 years, 30 patients would need to be treated to avoid one death.10 This is impressive, but simvastatin was very expensive in the 1990s when it was a patented drug. I therefore looked at Table 1 in the paper, which describes the enrolled patients. Although 80% of them had already had a heart attack before they entered the study, only one-third were being treated with aspirin, even though it is a life-saver. Furthermore, one-quarter were smokers, although all of them either suffered from angina or had had a heart attack. Thus, we could have saved many lives very cheaply by reminding the physicians that their patients should receive aspirin, and also that they needed to talk to them a bit more about quitting smoking; even brief conversations have an effect on smokers.11

  Statins are currently intensively marketed to the healthy population, both by the industry and some enthusiastic doctors, but the benefit is very small when statins are used for primary prevention of cardiovascular disease. When the data from eight trials were combined in a Cochrane review, the researchers found that statins reduced all-cause mortality by 16%.12 This looks like an impressive effect, and this is also how the drug industry advertises their findings. However, it says virtually nothing about the benefit of the prophylaxis, as we don’t know what the death rate was in those who didn’t take a statin. The authors reported that 2.8% of the trial participants died (note that I don’t call healthy people patients, as they are not patients). What was missing in this review was the NNT. A 16% reduction from a rate of 2.8% gives a rate of 2.35% and an NNT of 1/(2.8% – 2.35%) = 222.
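  The missing NNT can be reconstructed from the reported figures (a sketch that follows the rounding used in the text, which rounds the reduced death rate to 2.35% before taking the difference):

```python
death_rate = 2.8                                  # % of trial participants who died
reduced_rate = round(death_rate * (1 - 0.16), 2)  # 16% relative reduction
print(reduced_rate)                               # 2.35

# NNT = 1 / absolute risk difference (rates converted from % to fractions)
nnt = round(1 / ((death_rate - reduced_rate) / 100))
print(nnt)  # 222 healthy people on a statin for one death averted
```

  A 16% relative reduction sounds impressive in an advertisement, but applied to a death rate of 2.8% it shrinks to an absolute difference of less than half a percentage point.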

 
