I Think You'll Find It's a Bit More Complicated Than That


by Ben Goldacre


  Here’s what they found: once a story runs longer than eleven paragraphs, the average reader will read only half of it. A tiny minority will make it to paragraph 19, where, on this occasion, the few remaining readers of the Daily Mail would have discovered that the central premise of the news story – that a new trial had found a 40 per cent reduction in cancer through intermittent dieting – was false.

  Caveats in paragraph 19 are standard practice for stories with outlandish health claims. Like nipple tassels in 1950s burlesque, they’re a way to keep it legal, but titillating; and in many cases, when the late rebuttal comes from an authority figure – calling for calm in patrician tones – it can feel as if it’s only there to accentuate the excitement. But if your interest is informing the reader, stories built this way are plainly misleading.

  Why Don’t Journalists Link to Primary Sources?

  Guardian, 19 March 2011

  Why don’t journalists link to primary sources? Whether it’s a press release, an academic journal article, a formal report, or even the full transcript of an interview, the primary source contains more information for interested readers: it shows your working, and it allows people to check whether what you wrote is true. Here are three short stories.

  This week the Telegraph ran the headline ‘Wind farms blamed for stranding of whales’. ‘Offshore wind farms are one of the main reasons why whales strand themselves on beaches, according to scientists studying the problem,’ it continued. Baroness Warsi even cited the story on BBC Question Time this week, arguing against wind farms.

  But anyone who read the open-access academic paper in PLoS One, titled ‘Beaked Whales Respond to Simulated and Actual Navy Sonar’, would see that the study looked at sonar, and didn’t mention wind farms at all. At best, the Telegraph story was a massive exaggeration of one brief, contextual aside about general levels of manmade sound in the ocean, made by one author at the end of the press release (titled ‘Whales Scared by Sonars’). This release didn’t mention wind farms, and it didn’t say they were ‘one of the main reasons why whales strand themselves on beaches’. Anyone reading the press release could see that the study was about naval sonar.

  This Telegraph article (now deleted, with a miserly correction) was a distortion, perhaps driven by the paper’s odd editorial line about the environment. But there is a bigger fish here: if we had a culture of linking to primary sources – if they were always just a click away – then shame alone would probably have stopped it going online. Outright misrepresentations are only worth risking in an environment where the reader is routinely deprived of information.

  Sometimes the examples are sillier. Professor Anna Ahn published a paper recently, showing that people with shorter heels have larger calves. For the Telegraph this became ‘Why stilettos are the secret to shapely legs’, for the Mail ‘Stilettos give women shapelier legs than flats’, for the Express ‘Stilettos tone up your legs’.

  But anybody who read even the press release would immediately see that this study had nothing to do with shoes. It wasn’t about shoe-heel height: it looked at anatomical heel length, the distance from the back of your ankle joint to the insertion of the Achilles tendon. The participants were all barefoot, and the paper was just a nerdy insight into the engineering of a human body: if you have a shorter lever at the back of your foot, you need a bigger muscle in your calf. Once again, this story was a concoction by journalists. But more than that, no sane journalist could possibly have risked writing the story about stilettos, if there was a culture and tradition of linking to the academic paper, or even the press release: they’d have looked like idiots, and fantasists, to anyone who bothered to click.

  Lastly, on Wednesday the Daily Mail ran with the scare headline ‘Swimming too Often in Chlorinated Water “Could Increase Risk of Developing Bladder Cancer”, Say Scientists’. There’s hardly any point documenting the errors in Daily Mail health stories any more, but anyone who read the original paper, or even the press release, could see that bladder cancer wasn’t measured, and that the Mail’s story was a simple distortion. It’s worth mentioning that these press releases were fairly readable pieces of popular science in themselves.

  Of course, this is a problem that occurs well beyond science. Over and again, you read comment pieces that purport to be responding to an earlier piece, but distort the earlier arguments, or miss out the most important ones: they count on it being inconvenient for you to check. It’s also an interesting difference between different forms of media: most bloggers have no institutional credibility, so they must build it, by linking transparently, and allowing you to easily double-check their work.

  But more than anything, because linking to sources is such an easy thing to do, and the motivations for avoiding links are so dubious, I’ve found myself adopting a new rule of thumb: if people don’t link to primary sources, I don’t trust them, and I don’t read them.

  A Fishy Friend, and His Friends

  Guardian, 5 June 2010

  ‘Fish oil helps schoolchildren to concentrate’ was the headline in the Observer. The omega-3 fish-oil pill issue has dragged on for almost a decade now: the entire British news media repeatedly claim that trials show it improves school performance and behaviour in mainstream children, but no such trial has ever been published. There is something very attractive about the idea that solutions to complex problems in education can be found in a pill.

  So, have things changed? The Observer’s health correspondent, Denis Campbell, is on the case, and it sounds as if they have. ‘Boys aged eight to 11 who were given doses once or twice a day of docosahexaenoic acid, an essential fatty acid known as DHA, showed big improvements in their performance during tasks involving attention.’ Great. ‘The researchers gave 33 US schoolboys 400mg or 1,200mg doses of DHA or a placebo every day for eight weeks. Those who had received the high doses did much better in mental tasks involving mathematical challenges.’ Brilliant news.

  Is it true? After some effort, I have tracked down the academic paper. This was not a trial of whether fish-oil pills improve children’s performance: it was a brain-imaging study. They took thirty-three kids, divided them into three groups (of ten, ten and thirteen) and then gave them either no omega-3, a small dose, or a big dose. Then the children performed some attention tasks in a brain scanner, to see if bits of their brains lit up differently.

  Why am I saying ‘omega-3’? Because it wasn’t a study of fish oil, as the Observer says: it was a study of omega-3 fatty acids derived from algae. But that’s small print.

  If this had been a trial to detect whether omega-3 improves performance, it would be laughably small: about ten people in each treatment group. Small studies aren’t entirely useless, whatever amateur critics claim; but with so few observations to work from, a study is much more prone to error from the simple play of chance. A study with thirty-three children, like this one, could conceivably detect an effect; but only if the fish oil caused a gigantic and unambiguous improvement in all the children who got it, and none of the children on placebo improved.
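  To see why, the statistically minded can run a rough power simulation – a sketch of my own, in Python, not anything from the paper, with invented effect sizes and scoring scales:

    # Rough simulation: how often does a trial with ~10 children per arm
    # detect a real effect? All numbers here are illustrative assumptions.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)

    def power(n_per_arm, effect_size, n_sims=10_000, alpha=0.05):
        """Fraction of simulated trials where a t-test reaches p < alpha."""
        hits = 0
        for _ in range(n_sims):
            placebo = rng.normal(0.0, 1.0, n_per_arm)          # standardised scores
            treated = rng.normal(effect_size, 1.0, n_per_arm)  # shifted by the true effect
            if stats.ttest_ind(treated, placebo).pvalue < alpha:
                hits += 1
        return hits / n_sims

    for d in (0.3, 0.8, 2.0):  # modest, large and enormous effects (Cohen's d)
        print(f'd = {d}: power with 10 per arm = {power(10, d):.2f}')

  With ten children per group, a modest effect is picked up barely one time in ten; only an enormous, unambiguous effect is detected reliably – which is exactly the point.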

  This paper showed no difference in performance at all. Since it was a brain-imaging study, not a trial, the results of the children’s actual performance on the attention task are reported only in passing, in a single paragraph, but the researchers are clear: ‘There were no significant group differences in percentage correct, commission errors, discriminability, or reaction time.’

  So this is all looking pretty wrong. Are we even talking about the same academic paper? I’ve been trying to get mainstream media to link to original academic papers when they write about them, at least online, with some limited success on the BBC website. I asked Denis Campbell which academic paper he was referring to, but he declined to answer, and passed me on to Stephen Pritchard, the Readers’ Editor for the Observer, who answered a couple of days later to say that he didn’t understand why he was being involved. Eventually Denis confirmed, but through Stephen Pritchard, that it was indeed the paper I had found, from the April edition of the American Journal of Clinical Nutrition.

  If we are very generous, is it informative, in any sense, that a brain area lights up differently in a scanner after some pills? Intellectually, it may be. But doctors get very accustomed to drug company sales reps and enthusiastic researchers who approach them with an exciting theoretical reason why one treatment should be better than another: maybe their intervention works selectively on only one kind of receptor molecule, for example, so it should therefore have fewer side effects. Similarly, drug reps and researchers will often announce that their intervention has an effect on some kind of elaborate laboratory measure: maybe a molecule in the blood goes up in concentration, or down, in a way that suggests the intervention might be effective.

  This is all very well. But it’s not the same as showing that something really does work, back here in the real world, and medicine is overflowing with unfulfilled promises from early theoretical research. This stuff is interesting: but it’s not even in the same ballpark as showing that something works.

  Oddly enough, though, someone really has finally conducted a proper trial of fish-oil pills in mainstream children to see if they work. It’s the trial that journalists have long been waiting for: a well-conducted, randomised, double-blind, placebo-controlled trial, in 450 children aged eight to ten from a mainstream school population. It was published in full this year, and it found no improvement. Show me the news headlines about that paper.

  Meanwhile, Euromonitor estimates global sales for fish-oil pills at $2 billion, having doubled in five years, with sales projected to reach $2.5 billion by 2012, and they are now the single best-selling product in the UK food-supplement market. This has only been possible with the kind assistance of the British media, and their eagerness to write stories about the magic intelligence pill.

  The week after this piece appeared, the Independent’s health correspondent wrote an angry column, explaining that health correspondents can’t be expected to check facts. In the interests of balance, his piece is reproduced below in full.

  Jeremy Laurance: Dr Goldacre Doesn’t Make Everything Better

  Is Ben Goldacre, the celebrated author of Bad Science and scourge of health journalists everywhere, losing it? So accustomed has he become to swinging his fists at the media when they get a science story wrong, I fear he may one day go nuclear and take out three rows of medical correspondents with a single lungful of biting sarcasm.

  He was at it again in Saturday’s Guardian, pistol-whipping his Guardian and Observer colleague, health correspondent Denis Campbell, over a report he wrote about fish oil and its supposed role in improving children’s intelligence.

  Campbell had reported claims made at a press conference that fish oil improved mental performance in children taking supplements. His crime, however, was to fail to check the claims against the academic paper on which they were based. That showed that the fish oil ‘enhanced the function of those brain regions that are involved in paying attention’, as revealed by a brain scanner.

  Not quite the same as ‘improving their performance’, as Goldacre rightly pointed out. Indeed the paper revealed that there had been no improvement in the children’s performance. Time, then, for Goldacre to deliver his customary knee-capping. He did so because Campbell declined to help him with his inquiries. Small wonder, given it is the second occasion the hapless Campbell has found himself in Goldacre’s sights.

  One doesn’t know whether to laugh or cry at the Guardian’s eagerness to wash its dirty linen in public. It is undeniably magnificent, but – in my view – no way to run a newspaper. I wonder at the psychiatrist’s bills. What does it tell us about health and science reporting? First, most disinterested observers think standards are pretty high (a report by the Department of Business last January said it was in ‘rude health’). Second, reporters are messengers – their job is to tell, as accurately as they can, what has been said, with the benefit of such insight as their experience allows them to bring, not to second guess whether what is said is right. But third, reporters are also under pressure. Newspaper sales are declining, staff have been cut, demands are increasing.

  Goldacre is right to highlight the fact that there is too much ‘churnalism’ – reporters turning out copy direct from press conferences and releases, without checking, to feed the insatiable news machine. This ought to be stopped. But no one, so far, has come up with a commercially realistic idea of how to stop it.

  In the meantime, while raging rightly at the scientific illiteracy of the media, he might reflect when naming young, eager reporters starting out on their careers that most don’t enjoy, as he does, the luxury of time, bloggers willing and able to do his spadework for him (one pointed out the flaws in Campbell’s report on the Guardian website five days before Goldacre’s column appeared) and membership of a profession (medicine) with guaranteed job security, a comfortable salary and gold-plated pension. If only.

  Make of that what you will. Jeremy Laurance mentioned that this is the second time I’ve written about a piece by Denis Campbell. Below is the first, a front-page splash by Denis in the Observer.

  MMR: The Scare Stories Are Back

  British Medical Journal, 18 July 2007

  It was inevitable that the media would re-ignite the MMR (measles, mumps, rubella) autism scare during Andrew Wakefield’s General Medical Council hearing. In the past two weeks, however, a front-page splash in the Observer has drawn widespread attention: the newspaper effectively claimed to know the views of named academics better than those academics themselves, and to know the results of research better than the people who did it. Smelling a rat – as one might – for once, I decided to pursue every detail.

  The Observer’s story made three key points: that new research had found an increase in the prevalence of autism, to one in fifty-eight; that the lead academic on this study was so concerned he suggested raising the finding with public health officials; and that two ‘leading researchers’ on the team believed that the rise was due to the MMR vaccine. By the time the week was out, this story had been recycled in several other national newspapers, and the one in fifty-eight figure had even been faithfully reproduced in a BMJ news article.

  On every one of these three key points the Observer story was simply wrong.

  The newspaper claimed that an ‘unpublished’ study from the Autism Research Centre in Cambridge had found a prevalence for autism of one in fifty-eight. I contacted the centre: the study that the Observer reported is not finished, and not published. The data have been collected, but they have not been analysed.

  Unpublished data is a recurring theme in MMR scares, and it is the antithesis of what science is about: transparency, where anyone can read the methods and results, appraise the study, decide for themselves whether they would draw the same conclusions as the authors, and even replicate, if they wish.

  The details of this study illustrate just how important this transparency is. It was specifically designed to look at how different methods of assessing prevalence affected the final figure. One of the results from the early analyses is ‘one in fifty-eight’. The other figures were less dramatic, and similar to current estimates. In fact the Observer now admits it knew of these figures, and that these should have been included in the article. It seems it simply cherry-picked the single most extreme number – from an incomplete analysis – and made it a front-page splash story.

  And why was that one figure so high anyway? The answer is simple. If you cast your net as widely as possible, and use screening tools, and many other methods of assessment, and combine them all, then inevitably you will find a higher prevalence than if – for example – you simply trawl through local school records and count your cases of autism from there.

  This is not advanced epidemiology, impenetrable to journalists – this is basic common sense. It would not mean that there is a rise in autism over time, compared with previous prevalence estimates, but merely that you had found a way of assessing prevalence that gave a higher figure. More than that, of course, when you start doing a large-scale prevalence study, you run into all kinds of interesting new methodological considerations: Is my screening tool suitable for use in a mainstream school environment? How does its positive predictive value change in a different population with a different baseline rate? And so on.
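  That last question is just Bayes’ theorem at work, and a couple of lines of arithmetic make the point – again a sketch of my own, with invented sensitivity, specificity and prevalence figures, nothing from the study itself:

    # How a screening tool's positive predictive value (PPV) falls as the
    # baseline rate falls. All figures are invented for illustration.
    def ppv(sensitivity, specificity, prevalence):
        """P(has the condition | positive screen), by Bayes' theorem."""
        true_pos = sensitivity * prevalence
        false_pos = (1 - specificity) * (1 - prevalence)
        return true_pos / (true_pos + false_pos)

    print(ppv(0.90, 0.95, 0.10))  # 1-in-10 baseline rate  -> PPV ~ 0.67
    print(ppv(0.90, 0.95, 0.01))  # 1-in-100 baseline rate -> PPV ~ 0.15

  The same hypothetical tool that is right two times in three in a high-prevalence population is wrong more than five times in six when the condition it screens for is ten times rarer.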

  These are fascinating questions, and for that reason statisticians and epidemiologists were invented. As Professor Simon Baron-Cohen, lead author on the study, says: ‘This paper has been sitting around for a year and a half specifically because we’ve brought in a new expert on epidemiology and statistics, who needs to get to grips with this new dataset, and the numbers are changing. If we’d thought the figures were final in 2005, then we’d have submitted the paper then.’

  The Observer, however, is unrepentant: it has the ‘final report’. And what is this document? I can’t get the paper to show it to me (and what kind of a claim about scientific evidence involves secret data?), but grant-giving agencies expect a report every quarter, right through to the end of the grant, and it seems likely that what the Observer has is simply the last of those: ‘That might have been titled “final report”,’ said Professor Baron-Cohen. ‘It just means the funding ended, it’s the final quarterly report to the funders. But the research is still ongoing. We are still analysing.’

  But these are just nerdy methodological questions about prevalence (if you skip to the end, there is some quite good swearing). How did the Observer manage to crowbar MMR into this story? Firstly, it cranked up the anxiety. According to the newspaper, Baron-Cohen ‘was so concerned by the one in fifty-eight figure that last year he proposed informing public health officials in the county’.

 
