by Ben Goldacre
If similar judgements are influencing the content of academic journals, then we have a problem. But can it really be the case that academic journals are the bottleneck, preventing doctors and academics from having access to unflattering trial results about the safety and effectiveness of the drugs they use? This argument is commonly deployed by industry, and researchers too are often keen to blame journals for rejecting negative findings en masse. Luckily, this has been the subject of some research; and overall, while journals aren’t blameless, it’s hard to claim that they are the main source of this serious public-health problem. This is especially so since there are whole academic journals dedicated to publishing clinical trials, with a commitment to publishing negative results written into their constitutions.
But to be kind, for the sake of completeness, and because industry and researchers are so keen to pass the blame on to academic journals, we can see if what they claim is true.
One survey simply asked the authors of unpublished work if they had ever submitted it for publication. One hundred and twenty-four unpublished results were identified, by following up on every study approved by a group of US ethics committees, and when the researchers contacted the teams behind the unpublished results, it turned out that only six papers had ever actually been submitted and rejected.34 Perhaps, you might say, this was a freak finding. Another approach is to follow up all the papers submitted to one journal, and see if those with negative results are rejected more often. Here again, the journals seem blameless: 745 manuscripts submitted to the Journal of the American Medical Association (JAMA) were followed up, and there was no difference in acceptance rate for significant and non-significant findings.35 The same thing has been tried with papers submitted to the BMJ, the Lancet, Annals of Internal Medicine and the Journal of Bone and Joint Surgery.36 Again and again, no effect was found. Might that be because the journals played fair when they knew they were being watched? Turning around an entire publishing operation for one brief performance would be tough, but it’s possible.
These studies all involved observing what has happened in normal practice. One last option is to run an experiment, sending identical papers to various journals, but changing the direction of the results at random, to see if that makes any difference to the acceptance rates. This isn’t something you’d want to do very often, because it wastes a lot of people’s time; but since publication bias matters, it has been regarded as a justifiable intrusion on a few occasions.
In 1990 a researcher called Epstein created a series of fictitious papers, with identical methods and presentation, differing only in whether they reported positive or negative results. He sent them at random to 146 social-work journals: the positive papers were accepted 35 per cent of the time, and the negative ones 26 per cent of the time, a difference that wasn’t large enough to be statistically significant.37
Other studies have tried something similar on a smaller scale, not submitting a paper to a journal, but rather, with the assistance of the journal, sending spoof academic papers to individual peer reviewers: these people do not make the final decision on publication, but they do give advice to editors, so a window into their behaviour would be useful. These studies have had more mixed results. In one from 1977, sham papers with identical methods but different results were sent to seventy-five reviewers. Some bias was found from reviewers against findings that disagreed with their own views.38
Another study, from 1994, looked at reviewers’ responses to a paper on TENS machines: these are fairly controversial devices sold for pain relief. Thirty-three reviewers with strong views one way or the other were identified, and again it was found that their judgements on the paper were broadly correlated with their pre-existing views, though the study was small.39 Another paper did the same thing with papers on quack treatments; it found that the direction of findings had no effect on reviewers from mainstream medical journals deciding whether to accept them.40
One final randomised trial from 2010 tried on a grand scale to see if reviewers really do reject ideas based on their pre-existing beliefs (a good indicator of whether journals are biased by results, when they should be focused simply on whether a study is properly designed and conducted). Fabricated papers were sent to over two hundred reviewers, and they were all identical, except for the results they reported: half of the reviewers got results they would like, half got results they wouldn’t. Reviewers were more likely to recommend publication if they received the version of the manuscript with results they’d like (97 per cent vs 80 per cent), more likely to detect errors in a manuscript whose results they didn’t like, and rated the methods more highly in papers whose results they liked.41
Overall, though, even if there are clearly rough edges in some domains, these results don’t suggest that journals are the main cause of the disappearance of negative trials. In the experiments isolating the peer reviewers, those individual referees were biased in some studies, but they don’t have the last word on publication; and in all the studies looking at what happens to negative papers submitted to journals in the real world, the evidence shows that they proceed into print without problems. Journals may not be entirely innocent, but it would be wrong to lay the blame at their door.
In the light of all this, the data on what researchers say about their own behaviour is very revealing. In various surveys they have said that they thought there was no point in submitting negative results, because they would just be rejected by journals: 20 per cent of medical researchers said so in 1998;42 61 per cent of psychology and education researchers said so in 1991;43 and so on.44 If asked why they’ve failed to send in research for publication, the most common reasons researchers give are negative results, a lack of interest, or a lack of time.
This is the more abstract end of academia – largely away from the immediate world of clinical trials – but it seems that academics are mistaken, at best, about the reasons why negative results go missing. Journals may pose some barriers to publishing negative results, but they are hardly absolute, and much of the problem lies in academics’ motivations and perceptions.
More than that, in recent years, the era of open-access academic journals has got going in earnest: there are now several, such as Trials, which are free to access, and have a core editorial policy that they will accept any trial report, regardless of result, and will actively solicit negative findings. With offers like this on the table, it is very hard to believe that anyone would really struggle to publish a trial with a negative result if they wanted to. And yet, despite this, negative results continue to go missing, with vast multinational companies simply withholding results on their drugs, even though academics and doctors are desperate to see them.
You might reasonably wonder whether there are people who are supposed to prevent this kind of data from being withheld. The universities where research takes place, for example; or the regulators; or the ‘ethics committees’, which are charged with protecting patients who participate in research. Unfortunately, our story is about to take a turn to the dark side. We will see that many of the very people and organisations we would have expected to protect patients from the harm inflicted by missing data have, instead, shirked their responsibilities; and worse than that, we will see that many of them have actively conspired in helping companies to withhold data from patients. We are about to hit some big problems, some bad people, and some simple solutions.
How ethics committees and universities have failed us
By now, you will, I hope, share my view that withholding results from clinical trials is unethical, for the simple reason that hidden data exposes patients to unnecessary and avoidable harm. But the ethical transgressions here go beyond the simple harm inflicted on future patients.
Patients and the public participate in clinical trials at some considerable cost to themselves: they expose themselves to hassle and intrusion, because clinical trials almost always require that you have more check-ups on your progress, more blood tests, and more examinations; but participants may also expose themselves to more risk, or the chance of receiving an inferior treatment. People do this out of altruism, on the implicit understanding that the results from their experience will contribute to improving our knowledge of what works and what doesn’t, and so will help other patients in the future. In fact, this understanding isn’t just implicit: in many trials it’s explicit, because patients are specifically told when they sign up to participate that the data will be used to inform future decisions. If this isn’t true, and the data can be withheld at the whim of a researcher or a company, then the patients have been actively lied to. That is very bad news.
So what are the formal arrangements between patients, researchers and sponsors? In any sensible world, we’d expect universal contracts, making it clear that all researchers are obliged to publish their results, and that industry sponsors – which have a huge interest in positive results – must have no control over the data. But despite everything we know about industry-funded research being systematically biased, this does not happen. In fact, quite the opposite is true: it is entirely normal for researchers and academics conducting industry-funded trials to sign contracts subjecting them to gagging clauses which forbid them to publish, discuss or analyse data from the trials they have conducted, without the permission of the funder. This is such a secretive and shameful situation that even trying to document it in public can be a fraught business, as we shall now see.
In 2006 a paper was published in JAMA describing how common it was for researchers doing industry-funded trials to have these kinds of constraints placed on their right to publish the results.45 The study was conducted by the Nordic Cochrane Centre, and it looked at all the trials given approval to go ahead in Copenhagen and Frederiksberg. (If you’re wondering why these two cities were chosen, it was simply a matter of practicality, and the bizarre secrecy that shrouds this world: the researchers applied elsewhere without success, and were specifically refused access to data in the UK.46) These trials were overwhelmingly sponsored by the pharmaceutical industry (98 per cent), and the rules governing the management of the results tell a story which walks the now-familiar line between frightening and absurd.
For sixteen of the forty-four trials the sponsoring company got to see the data as it accumulated, and in a further sixteen they had the right to stop the trial at any time, for any reason. This means that a company can see if a trial is going against it, and can interfere as it progresses. As we will see later (early stopping, breaking protocols, pp.184, 200), this distorts a trial’s results with unnecessary and hidden biases. For example, if you stop a trial early because you have been peeking at the preliminary results, then you can either exaggerate a modest benefit, or bury a worsening negative result. Crucially, the fact that the sponsoring company had this opportunity to introduce bias wasn’t mentioned in any of the published academic papers reporting the results of these trials, so nobody reading the literature could possibly know that these studies were subject – by design – to such an important flaw.
Even if the study was allowed to finish, the data could still be suppressed. There were constraints on publication rights in forty of the forty-four trials, and in half of them the contracts specifically stated that the sponsor either owned the data outright (what about the patients, you might say?), or needed to approve the final publication, or both. None of these restrictions was mentioned in any of the published papers, and in fact, none of the protocols or papers said that the sponsor had full access to all the data from the trial, or the final say on whether to publish.
It’s worth taking a moment to think about what this means. The results of all these trials were subject to a bias that will significantly distort the academic literature, because trials that show early signs of producing a negative result (or trials that do produce a negative result) can be deleted from the academic record; but nobody reading these trials could possibly have known that this opportunity for censorship existed.
The paper I’ve just described was published in JAMA, one of the biggest medical journals in the world. Shortly afterwards, a shocking tale of industry interference appeared in the BMJ.47 Lif, the Danish pharmaceutical industry association, responded to the paper by announcing in the Journal of the Danish Medical Association that it was ‘both shaken and enraged about the criticism, that could not be recognised’. It demanded an investigation of the scientists, though it failed to say by whom, or of what. Then Lif wrote to the Danish Committee on Scientific Dishonesty, accusing the Cochrane researchers of scientific misconduct. We can’t see the letter, but the Cochrane researchers say the allegations were extremely serious – they were accused of deliberately distorting the data – but vague, and without documents or evidence to back them up.
Nonetheless, the investigation went on for a year, because in academia people like to do things properly, and assume that all complaints are made in good faith. Peter Gøtzsche, the director of the Cochrane centre, told the BMJ that only Lif’s third letter, ten months into this process, made specific allegations that could be investigated by the committee. Two months later the charges were dismissed. The Cochrane researchers had done nothing wrong. But before they were cleared, Lif copied the letters alleging scientific dishonesty to the hospital where four of them worked, and to the management organisation running that hospital, and sent similar letters to the Danish Medical Association, the Ministry of Health, the Ministry of Science, and so on. Gøtzsche and his colleagues said that they felt ‘intimidated and harassed’ by Lif’s behaviour. Lif continued to insist that the researchers were guilty of misconduct even after the investigation was completed. So, researching in this area is not easy: it’s hard to get funding, and the industry will make your work feel like chewing on a mouthful of wasps.
Even though the problem has been widely recognised, attempts to fix it have failed.48 The International Committee of Medical Journal Editors, for example, stood up in 2001, insisting that the lead author of any study it published must sign a document stating that the researchers had full access to the data, and full control over the decision to publish. Researchers at Duke University, North Carolina, then surveyed the contracts between medical schools and industry sponsors, and found that this edict was flouted as a matter of routine. They recommended boilerplate contracts for the relationship between industry and academia. Was this imposed? No. Sponsors continue to control the data.
Half a decade later, a major study in the New England Journal of Medicine investigated whether anything had changed.49 Administrators at all 122 accredited medical schools in the US were asked about their contracts (to be clear, this wasn’t a study of what they did; rather it was a study of what they were willing to say in public). The majority said contract negotiations over the right to publish data were ‘difficult’. A worrying 62 per cent said it was OK for the clinical trial agreement between academics and industry sponsors to be confidential. This is a serious problem, as it means that anyone reading a study cannot know how much interference was available to the sponsor, which is important context for the person reading and interpreting the research. Half of the centres allowed the sponsor to draft the research paper, which is another interesting hidden problem in medicine, as biases and emphases can be quietly introduced (as we shall see in more detail in Chapter 6). Half said it was OK for contracts to forbid researchers from sharing data after the research was completed and published, once again hindering research. A quarter said it was acceptable for the sponsor to insert its own statistical analyses into the manuscript. Asked about disputes, 17 per cent of administrators had seen an argument about who had control of data in the preceding year.
Sometimes, disputes over access to such data can cause serious problems in academic departments, when there is a divergence of views on what is ethical. Aubrey Blumsohn was a senior lecturer at Sheffield University, working on a project funded by Procter & Gamble to research an osteoporosis drug called risedronate.50 The aim was to analyse blood and urine samples from an earlier trial, led by Blumsohn’s head of department, Professor Richard Eastell. After signing the contracts, P&G sent over some ‘abstracts’, brief summaries of the findings, with Blumsohn’s name as first author, and some summary results tables. That’s great, said Blumsohn, but I’m the researcher here: I’d like to see the actual raw data and analyse it myself. The company declined, saying that this was not their policy. Blumsohn stood his ground, and the papers were left unpublished. Then, however, Blumsohn saw that Eastell had published another paper with P&G, stating that all the researchers had ‘had full access to the data and analyses’. He complained, knowing this was not true. Blumsohn was suspended by Sheffield University, which offered him a gagging clause and £145,000, and he was eventually forced out of his job. Eastell, meanwhile, was censured by the General Medical Council, but only after a staggering delay of several years, and he remains in post.
So contracts that permit companies and researchers to withhold or control data are common, and they’re bad news. But that’s not just because they lead to doctors and patients being misled about what works. They also break another vitally important contract: the agreement between researchers and the patients who participate in their trials.
People participate in trials believing that the results of that research will help to improve the treatment of patients like them in the future. This isn’t just speculation: one of the few studies to ask patients why they have participated in a trial found that 90 per cent believed they were making a ‘major’ or ‘moderate’ contribution to society, and 84 per cent felt proud that they were making this contribution.51 Patients are not stupid or naïve to believe this, because it is what they are told on the consent forms they sign before participating in trials. But they are mistaken, because the results of trials are frequently left unpublished, and withheld from doctors and patients. These signed consent forms therefore mislead people on two vitally important points. Firstly, they fail to tell the truth: that the person conducting the trial, or the person paying for it, may decide not to publish the results, depending on how they look at the end of the study. And worse than that, they also explicitly state a falsehood: researchers tell patients that they are participating in order to create knowledge that will be used to improve treatment in future, even though the researchers know that in many cases, those results will never be published.