by Ben Goldacre
Hearing this on Newsnight, the viewer might naturally conclude that a study has recently been published in America showing that pomegranates can protect against ageing. But if you go to Medline, the standard search tool for finding medical academic papers, no such study exists, or at least not that I can find. Perhaps there’s some kind of leaflet from the pomegranate industry doing the rounds. He goes on: ‘There’s a whole group of plastic surgeons in the States who’ve done a study giving some women pomegranates to eat, and juice to drink, after plastic surgery and before plastic surgery: and they heal in half the time, with half the complications, and no visible wrinkles!’ Again, it’s a very specific claim—a human trial on pomegranates and surgery—and again, there is nothing in the studies database.
So could you fairly characterise this Newsnight performance as ‘lying’? Absolutely not. In defence of almost all nutritionists, I would argue that they lack the academic experience, the ill-will, and perhaps even the intellectual horsepower necessary to be fairly derided as liars. The philosopher Professor Harry Frankfurt of Princeton University discusses this issue at length in his classic 1986 essay ‘On Bullshit’. Under his model, ‘bullshit’ is a form of falsehood distinct from lying: the liar knows and cares about the truth, but deliberately sets out to mislead; the truth-speaker knows the truth and is trying to give it to us; the bullshitter, meanwhile, does not care about the truth, and is simply trying to impress us:
It is impossible for someone to lie unless he thinks he knows the truth. Producing bullshit requires no such conviction…When an honest man speaks, he says only what he believes to be true; and for the liar, it is correspondingly indispensable that he considers his statements to be false. For the bullshitter, however, all these bets are off: he is neither on the side of the true nor on the side of the false. His eye is not on the facts at all, as the eyes of the honest man and of the liar are, except insofar as they may be pertinent to his interest in getting away with what he says. He does not care whether the things he says describe reality correctly. He just picks them out, or makes them up, to suit his purpose.
I see van Straten, like many of the subjects in this book, very much in the ‘bullshitting’ camp. Is it unfair for me to pick out this one man? Perhaps. In biology fieldwork, you throw a wire square called a ‘quadrat’ at random out onto the ground, and then examine whatever species fall underneath it. This is the approach I have taken with nutritionists, and until I have a Department of Pseudoscience Studies with an army of PhD students doing quantitative work on who is the worst, we shall never know. Van Straten seems like a nice, friendly guy. But we have to start somewhere.
Observation, or intervention?
Does the cock’s crow cause the sun to rise? No. Does this light switch make the room get brighter? Yes. Things can happen at roughly the same time, but that is weak, circumstantial evidence for causation. Yet it is exactly this kind of evidence that media nutritionists use as confident proof of their claims, and it brings us to our second major canard.
According to the Daily Mirror, Angela Dowden, RNutr, is ‘Britain’s Leading Nutritionist’, a moniker it continues to use even though she has been censured by the Nutrition Society for making a claim in the media with literally no evidence whatsoever. Here is a different and more interesting example from Dowden, a quote from her column in the Mirror, in which she writes about foods that offer protection from the sun during a heatwave: ‘An Australian study in 2001 found that olive oil (in combination with fruit, vegetables and pulses) offered measurable protection against skin wrinkling. Eat more olive oil by using it in salad dressings or dip bread in it rather than using butter.’
That’s very specific advice, with a very specific claim, quoting a very specific reference, and with a very authoritative tone. It’s typical of what you get in the papers from media nutritionists. Let’s go to the library and fetch out the paper she refers to (‘Skin wrinkling: can food make a difference?’ Purba MB et al. J Am Coll Nutr. 2001 Feb; 20(1): 71-80). Before we go any further, we should be clear that we are criticising Dowden’s interpretation of this research, and not the research itself, which we assume is a faithful description of the investigative work that was done.
This was an observational study, not an intervention study. It did not give people olive oil for a time and then measure differences in wrinkles: quite the opposite, in fact. It pooled four different groups of people to get a range of diverse lifestyles, including Greeks, Anglo-Celtic Australians and Swedish people, and it found that people who had completely different eating habits—and completely different lives, we might reasonably assume—also had different amounts of wrinkles.
To me this is not a great surprise, and it illustrates a very simple issue in epidemiological research called ‘confounding variables’: these are things which are related both to the outcome you’re measuring (wrinkles) and to the exposure you are measuring (food), but which you haven’t thought of yet. They can confuse an apparently causal relationship, and you have to think of ways to exclude or minimise confounding variables to get to the right answer, or at least be very wary that they are there. In the case of this study, there are almost too many confounding variables to describe.
I eat well—with lots of olive oil, as it happens—and I don’t have many wrinkles. I also have a middle-class background, plenty of money, an indoor job, and, if we discount infantile threats of litigation and violence from people who cannot tolerate any discussion of their ideas, a life largely free from strife. People with completely different lives will always have different diets, and different wrinkles. They will have different employment histories, different amounts of stress, different amounts of sun exposure, different levels of affluence, different levels of social support, different patterns of cosmetics use, and much more. I can imagine plenty of reasons why you might find that people who eat olive oil have fewer wrinkles; and the olive oil having a causative role, an actual physical effect on your skin when you eat it, is fairly low down on my list.
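To make the problem concrete, here is a minimal, purely illustrative sketch, with every variable name and number invented for the demonstration and no connection to the actual study: a pretend population in which olive oil has no effect on wrinkles at all, but time spent in the sun pushes oil consumption down and wrinkles up, so a survey of that population would still find that olive-oil eaters have smoother skin.

```python
import random

# Purely illustrative: in this made-up population olive oil has NO causal effect on
# wrinkles, but a confounder (hours of sun exposure) influences both, so the two
# still come out correlated in a simple survey.
random.seed(0)

def simulate_person():
    sun_hours = random.uniform(0, 8)                                # confounder: time outdoors
    olive_oil = max(0.0, 5 - 0.4 * sun_hours + random.gauss(0, 1))  # indoor types eat more salad (invented)
    wrinkles = 10 + 2 * sun_hours + random.gauss(0, 2)              # sun, not oil, causes the wrinkles here
    return olive_oil, wrinkles

people = [simulate_person() for _ in range(5000)]

def correlation(pairs):
    xs, ys = zip(*pairs)
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in pairs) / n
    sx = (sum((x - mx) ** 2 for x in xs) / n) ** 0.5
    sy = (sum((y - my) ** 2 for y in ys) / n) ** 0.5
    return cov / (sx * sy)

# Clearly negative: more olive oil goes with fewer wrinkles, despite zero causal link.
print(correlation(people))
```

An intervention study breaks that spurious link: randomise who gets the olive oil, and the confounders fall evenly, on average, across both groups.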
Now, to be fair to nutritionists, they are not alone in failing to understand the importance of confounding variables, in their eagerness for a clear story. Every time you read in a newspaper that ‘moderate alcohol intake’ is associated with some improved health outcome—less heart disease, less obesity, anything—to gales of delight from the alcohol industry, and of course from your friends, who say, ‘Ooh well, you see, it’s better for me to drink a little…’ as they drink a lot—you are almost certainly witnessing a journalist of limited intellect, overinterpreting a study with huge confounding variables.
This is because, let’s be honest here: teetotallers are abnormal. They’re not like everyone else. They will almost certainly have a reason for not drinking, and it might be moral, or cultural, or perhaps even medical, but there’s a serious risk that whatever is causing them to be teetotal might also have other effects on their health, confusing the relationship between their drinking habits and their health outcomes. Like what? Well, perhaps people from specific ethnic groups who are teetotal are also more likely to be obese, so they are less healthy. Perhaps people who deny themselves the indulgence of alcohol are more likely to indulge in chocolate and chips. Perhaps preexisting ill health will force you to give up alcohol, and that’s skewing the figures, making teetotallers look unhealthier than moderate drinkers. Perhaps these teetotallers are recovering alcoholics: among the people I know, they’re the ones who are most likely to be absolute teetotallers, and they’re also more likely to be fat, from all those years of heavy alcohol abuse. Perhaps some of the people who say they are teetotal are just lying.
This is why we are cautious about interpreting observational data, and to me, Dowden has extrapolated too far from the data, in her eagerness to dispense—with great authority and certainty—very specific dietary wisdom in her newspaper column (but of course you may disagree, and you now have the tools to do so meaningfully).
If we were modern about this, and wanted to offer constructive criticism, what might she have written instead? I think, both here and elsewhere, that despite what journalists and self-appointed ‘experts’ might say, people are perfectly capable of understanding the evidence for a claim, and anyone who withholds, overstates or obscures that evidence, while implying that they’re doing the reader a favour, is probably up to no good. MMR is an excellent parallel example of where the bluster, the panic, the ‘concerned experts’ and the conspiracy theories of the media were very compelling, but the science itself was rarely explained.
So, leading by example, if I were a media nutritionist, I might say, if pushed, after giving all the other sensible sun advice: ‘A survey found that people who eat more olive oil have fewer wrinkles,’ and I might feel obliged to add, ‘Although people with different diets may differ in lots of other ways.’ But then, I’d also be writing about food, so: ‘Never mind, here’s a delicious recipe for salad dressing anyway.’ Nobody’s going to employ me to write a nutritionist column.
From the lab bench to the glossies
Nutritionists love to quote basic laboratory science research because it makes them look as if they are actively engaged in a process of complicated, impenetrable, highly technical academic work. But you have to be very cautious about how you extrapolate from what happens to some cells in a dish, on a laboratory bench, to the complex system of a living human being, where things can work in completely the opposite way to what laboratory work would suggest. Anything can kill cells in a test tube. Fairy Liquid will kill cells in a test tube, but you don’t take it to cure cancer. This is just another example of how nutritionism, despite the ‘alternative medicine’ rhetoric and phrases like ‘holistic’, is actually a crude, unsophisticated, old-fashioned, and above all reductionist tradition.
Later we will see Patrick Holford, the founder of the Institute for Optimum Nutrition, stating that vitamin C is better than the AIDS drug AZT on the basis of an experiment where vitamin C was tipped onto some cells in a dish. Until then, here is an example from Michael van Straten—who has fallen sadly into our quadrat, and I don’t want to introduce too many new characters or confuse you—writing in the Daily Express as its nutrition specialist: ‘Recent research’, he says, has shown that turmeric is ‘highly protective against many forms of cancer, especially of the prostate’. It’s an interesting idea, worth pursuing, and there have been some speculative lab studies of cells, usually from rats, growing or not growing under microscopes, with turmeric extract tipped on them. There is some limited animal model data, but it is not fair to say that turmeric, or curry, in the real world, in real people, is ‘highly protective against many forms of cancer, especially of the prostate’, not least because it’s not very well absorbed.
Forty years ago a man called Austin Bradford Hill, the grandfather of modern medical research, who was key in discovering the link between smoking and lung cancer, wrote out a set of guidelines, a kind of tick list, for assessing causality: for deciding whether a relationship between an exposure and an outcome is a causal one. These are a cornerstone of evidence-based medicine, and often worth having at the back of your mind: it needs to be a strong association, which is consistent, and specific to the thing you are studying, where the putative cause comes before the supposed effect in time; ideally there should be a biological gradient, such as a dose-response effect; it should be coherent with, or at least not completely at odds with, what is already known (because extraordinary claims require extraordinary evidence); and it should be biologically plausible.
Michael van Straten, here, has got biological plausibility, and little else. Medics and academics are very wary of people making claims on such tenuous grounds, because it’s something you get a lot from people with something to sell: specifically, drug companies. The public don’t generally have to deal with drug-company propaganda, because the companies are not currently allowed to advertise directly to patients in Europe—thankfully—but they badger doctors incessantly, and they use many of the same tricks as the miracle-cure industries. You’re taught about these tricks at medical school, which is how I’m able to teach you about them now.
Drug companies are very keen to promote theoretical advantages (‘It works more on the Z4 receptor, so it must have fewer side-effects!’), animal experiment data or ‘surrogate outcomes’ (‘It improves blood test results, so it must be protective against heart attacks!’) as evidence of the efficacy or superiority of their product. Many of the more detailed popular nutritionist books, should you ever be lucky enough to read them, play this classic drug-company card very assertively. They will claim, for example, that a ‘placebo-controlled randomised controlled trial’ has shown benefits from a particular vitamin, when what they mean is that it showed changes in a ‘surrogate outcome’.
For example, the trial may merely have shown that there were measurably increased amounts of the vitamin in the bloodstream after taking a vitamin, compared to placebo, which is a pretty unspectacular finding in itself: yet this is presented to the unsuspecting lay reader as a positive trial. Or the trial may have shown that there were changes in some other blood marker, perhaps the level of an ill-understood immune-system component, which, again, the media nutritionist will present as concrete evidence of a real-world benefit.
There are problems with using such surrogate outcomes. They are often only tenuously associated with the real disease, in a very abstract theoretical model, and often developed in the very idealised world of an experimental animal, genetically inbred, kept under conditions of tight physiological control. A surrogate outcome can—of course—be used to generate and examine hypotheses about a real disease in a real person, but it needs to be very carefully validated. Does it show a clear dose-response relationship? Is it a true predictor of disease, or merely a ‘co-variable’, something that is related to the disease in a different way (e.g. caused by it rather than involved in causing it)? Is there a well-defined cut-off between normal and abnormal values?
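In the same spirit, here is a minimal illustrative sketch, with all names and numbers made up and no real trial implied: a pretend placebo-controlled study in which the pill reliably shifts a blood marker, the surrogate outcome, while the outcome anyone actually cares about, days spent ill, is left completely untouched.

```python
import random

# Purely illustrative: a made-up placebo-controlled trial where the pill moves a blood
# marker (the surrogate outcome) but has no effect at all on the real-world outcome.
random.seed(1)

def trial_arm(on_pill, n=1000):
    markers, outcomes = [], []
    for _ in range(n):
        markers.append(random.gauss(50, 10) + (15 if on_pill else 0))  # pill raises the marker
        outcomes.append(random.gauss(5, 2))                            # days ill per year: unaffected
    return sum(markers) / n, sum(outcomes) / n

pill_marker, pill_outcome = trial_arm(on_pill=True)
placebo_marker, placebo_outcome = trial_arm(on_pill=False)

print(f"marker:  pill {pill_marker:.1f} vs placebo {placebo_marker:.1f}")    # clear difference
print(f"outcome: pill {pill_outcome:.1f} vs placebo {placebo_outcome:.1f}")  # essentially none
```

A write-up that reports only the first line can truthfully say the trial showed a significant effect, and still tell you nothing about whether the pill does anybody any good.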
All I am doing, I should be clear, is taking the feted media nutritionists at their word: they present themselves as men and women of science, and fill their columns, TV shows and books with references to scientific research. I am subjecting their claims to the same level of very basic, uncomplicated rigour that I would deploy for any new theoretical work, any drug company’s claims, any pill-marketing rhetoric, and so on.
It’s not unreasonable to use surrogate outcome data, as they do, but those who are in the know are always circumspect. We’re interested in early theoretical work, but often the message is: ‘It might be a bit more complicated than that…’. You’d only want to accord a surrogate outcome any significance if you’d read everything on it yourself, or if you could be absolutely certain that the person assuring you of its validity was extremely capable, and was giving a sound appraisal of all the research in a given field, and so on.
Similar problems arise with animal data. Nobody could deny that this kind of data is valuable in the theoretical domain, for developing hypotheses, or suggesting safety risks, when cautiously appraised. But media nutritionists, in their eagerness to make lifestyle claims, are all too often blind to the problems of applying these isolated theoretical nuggets to humans, and anyone would think they were just trawling the internet looking for random bits of science to sell their pills and expertise (imagine that). Both the tissue and the disease in an animal model, after all, may be very different to those in a living human system, and these problems are even greater with a lab-dish model. Giving unusually high doses of chemicals to animals can distort the usual metabolic pathways, and give misleading results—and so on. Just because something can upregulate or downregulate something in a model doesn’t mean it will have the effect you expect in a person—as we will see with the stunning truth about antioxidants.
And what about turmeric, which we were talking about before I tried to show you the entire world of applying theoretical research in this tiny grain of spice? Well, yes, there is some evidence that curcumin, a chemical in turmeric, is highly biologically active, in all kinds of different ways, on all kinds of different systems (there are also theoretical grounds for believing that it may be carcinogenic, mind you). It’s certainly a valid target for research.
But for the claim that we should eat more curry in order to get more of it, that ‘recent research’ has shown it is ‘highly protective against many forms of cancer, especially of the prostate’, you might want to step back and put the theoretical claims in the context of your body. Very little of the curcumin you eat is absorbed. You have to eat a few grams of it to reach significant detectable serum levels, but turmeric is only a few per cent curcumin by weight, so to get those few grams of curcumin you’d have to eat something like 100g of turmeric: and good luck with that. Between research and recipe, there’s a lot more to think about than the nutritionists might tell you.
Cherry-picking
The idea is to try and give all the information to help others to judge the value of your contribution; not just the information that leads to judgment in one particular direction or another.
Richard P. Feynman
There have been an estimated fifteen million medical academic articles published so far, and 5,000 journals are published every month. Many of these articles will contain contradictory claims: picking out what’s relevant—and what’s not—is a gargantuan task. Inevitably people will take shortcuts. We rely on review articles, or on meta-analyses, or textbooks, or hearsay, or chatty journalistic reviews of a subject.
That’s if your interest is in getting to the truth of the matter. What if you’ve just got a point to prove? There are few opinions so absurd that you couldn’t find at least one person with a PhD somewhere in the world to endorse them for you; and similarly, there are few propositions in medicine so ridiculous that you couldn’t conjure up some kind of published experimental evidence somewhere to support them, if you didn’t mind it being a tenuous relationship, and cherry-picked the literature, quoting only the studies that were in your favour.