
I Think You'll Find It's a Bit More Complicated Than That


by Ben Goldacre


  But before we get on to how this can happen, we should first finish with the myths about trials. From here on, these are all cases where people overstate the benefits of trials.

  For example, sometimes people think that trials can answer everything, or that they are the only form of evidence. This isn’t true, and different methods are useful for answering different questions. Randomised trials are very good at showing that something works; they’re not always so helpful for understanding why it worked (although there are often clues when we can see that an intervention worked well in children with certain characteristics, but not so well in others). ‘Qualitative’ research – such as asking people open questions about their experiences – can help give a better understanding of how and why things worked, or failed, on the ground. This kind of research can also be useful for generating new questions about what works best, to be answered with trials. But qualitative research is very bad for finding out whether an intervention has worked. Sometimes researchers who lack the skills needed to conduct or even understand trials can feel threatened, and campaign hard against them, much like the experts in Archie Cochrane’s story. I think this is a mistake. The trick is to ensure that the right method is used to answer the right questions.

  A related issue involves choosing the right outcome to measure. Sometimes people say that trials are impossible, because we can’t capture the intangible benefits that come from education, like making someone a well-rounded member of society. It’s true that this outcome can be hard to measure, although that is an argument against any kind of measurement of attainment, and against any kind of quantitative research, not just trials. It’s also, I think, a little far-fetched: there are lots of things we try to improve that are easy to measure, like attendance rates, teenage pregnancy, amount of exercise, performance on specific academic or performance tests, and so on.

  However, we should return to the exaggerated claims sometimes made in favour of trials, and the need to be a critical consumer of evidence. A further common mistake is to assume that, once an intervention has been shown to be effective in a single trial, then it definitely works, and we should use it everywhere. Again, this isn’t necessarily true. Firstly, all trials need to be run properly: if there are flaws in a trial’s design, then it stops being a fair test of the treatments. But more importantly, we need to think carefully about whether the people in a trial of an intervention are the same as the people we are thinking of using the intervention on.

  The Family Nurse Partnership is a programme that is well funded and popular around the world. It was first shown to be effective in a randomised trial in 1977. The trial participants were white mothers in a semi-rural setting in upstate New York, and people worried at the time that the positive results might have been exceptional, and occurred simply because the specific programme of social support that was offered had suited this population unusually well. In 1988, to check that the findings really were applicable to other settings, the same programme was assessed using a randomised trial in African-American mothers in inner-city Memphis, and was again found to be effective. In 1994, a third trial was conducted in a large population of Hispanic, African-American and Caucasian mothers from Denver. After this trial also showed a benefit, people in the US were fairly certain that the programme worked, with fewer childhood injuries, increased maternal employment, improved ‘school readiness’, and more.

  Now the Family Nurse Partnership programme is being brought to Britain, but the people who originally designed the intervention have insisted that a randomised trial should be run here, to see if it really is effective in the very different setting of the UK. They have specifically stated that they expect to see less dramatic benefits here, because the basic level of support for young families in the UK is much better than that in the US: this means that the difference between people getting the FNP programme and people getting the normal level of help from society will be much smaller.

  This is just one example of why we need to be thoughtful about whether the results of a trial in one population really are applicable to our own patients or pupils. It’s also an illustration of why we need to make trials part of the everyday routine, so that we can replicate them in different settings, instead of blindly assuming we can use results from other countries (or even other schools, if they have radically different populations). It doesn’t mean, however, that we can never trust the results of a trial. This is just another example of why it’s useful to know more about how trials work, and to be a thoughtful consumer of evidence.

  Lastly, people sometimes worry that trials are expensive and complicated. This isn’t necessarily true, and it’s important to be clear what the costs of a trial are being compared against. For example, if the choice is between running a trial, and simply charging ahead, implementing an idea that hasn’t been shown to work – one that might be ineffective, wasteful, or even harmful – then it’s clearly worth investing some time and effort in assessing its true impact. If the alternative is doing an ‘observational’ study, which has all the shortcomings described above, then the analysis can be so expensive and complex – not to mention unreliable – that it would have been easier to randomise participants to one intervention or the other in the first place.

  But the mechanics and administrative processes for running a trial can also be kept to a minimum with thoughtful design, for example by measuring outcomes using routine classroom data that was being collected anyway, rather than running a special set of tests. More than anything, though, for trials to be run efficiently, they need to be part of the culture of teaching.

  Making evidence part of everyday life

  I’m struck by how much enthusiasm there is for trials and evidence-based practice in some parts of teaching; but I’m also struck that much of this enthusiasm dies out before it gets to do good, because the basic structures needed to support evidence-based practice are lacking. As a result, a small number of trials are done, but these exist as isolated islands, without enough bridges joining the people and strands of work together. This is nobody’s fault: creating an ‘information architecture’ out of thin air is a big job, and it might take decades. The benefits, though, are potentially huge. Some individual randomised trials from the UK have produced informative results, for example, but these results are then poorly communicated, so they don’t inform and change practice as well as they might.

  Because of this, I’ve sketched out the basics of what education would need, as a sector, to embrace evidence-based practice in a serious way. The aim – which I hope everyone would share – is to get more research done, involving as many teachers as possible; and to get the results of good-quality research disseminated and put into practice. It’s worth being clear, though, that this is a first sketch, and a call to arms. I hope that others will pull it apart and add to it. But I also hope that people will be able to act on it, because structures like these in medicine help capture the best value from the good work – and hard work – that is done all around the country.

  Firstly – and most simply – it’s clear that we need better systems for disseminating the findings of research to teachers on the ground. While individual studies are written up in very technical documents, in obscure academic journals, these are rarely read by teachers. And rightly so: most doctors rarely bother to read technical academic journals either. The British Medical Journal has brief summaries of important new research from around the world; and there is a thriving market of people offering accessible summary information on new ‘what works’ research to doctors, nurses and other healthcare professionals. The US government has spent vast sums of money on two similar websites for teachers: ‘Doing What Works’, and the ‘What Works Clearing House’. These are large, with good-quality resources, and they are written to be relevant to teachers’ needs, rather than dry academic games. While there are some similar resources in the UK, these are often short-lived, and on a smaller scale.

  For these kinds of resources to be useful at all, they then need to land with teachers who know the basics of ‘how we know’ what works. While much teacher training has reflected the results of research, this evidence has often been presented as a completed canon of answers. It’s much rarer to find all young teachers being taught the basics of how different types of research are done, and the strengths and weaknesses of each approach on different types of question (although some individual teachers have taught themselves on this topic, to a very high level). Learning the basics of how research works is important, not because every teacher should be a researcher, but because it allows teachers to be critical consumers of the new research findings that will come out during the many decades of their career. It also means that some of the barriers to research that arise from myths and misunderstandings can be overcome. In an ideal world, teachers would be taught this in basic teacher training, and it would be reinforced in Continuing Professional Development, alongside summaries of research.

  In some parts of the world, it is impossible to rise up the career ladder of teaching without understanding how research can improve practice, and publishing articles in teaching journals. Teachers in Shanghai and Singapore participate in regular ‘Journal Clubs’, where they discuss a new piece of research, and its strengths and weaknesses, before considering whether they would apply its findings in their own practice. If the answer is no, they share the shortcomings in the study design that they’ve identified, and then describe any better research that they think should be done on the same question.

  This is an important quirk: understanding how research is done also enables teachers to generate new research questions. This, in turn, ensures that the research which gets done addresses the needs of everyday teachers. In medicine, any doctor can feed up a research suggestion to NIHR (the National Institute for Health Research), and there are organisations that maintain lists of what we don’t yet know, fed by clinicians who’ve had to make decisions, without good-quality evidence to guide them. But there are also less tangible ways that this feedback can take place.

  Familiarity with the basics of how research works also helps teachers to get involved in research, and to see through the dangerous myths about trials being actively undesirable, or even ‘impossible’, in education. Here, there is a striking difference with medicine. Many teachers pour their heart and soul into research projects which are supposed to find out whether something worked; but in reality the projects often turn out to be too small, being run by one person in isolation, in only one classroom, and lack the expert support necessary to ensure a robust design. Very few doctors would try to run a quantitative research project alone in their own single practice, without expert support from a statistician, and without help from someone experienced in research design.

  In fact, most doctors participate in research by playing a small role in a larger research project which is coordinated, for example, through a research network. Many GPs are happy to help out with research: they recruit participants from among their patients; they deliver whichever of two commonly used treatments has been randomly assigned to their patient; and they share medical information for follow-up data. They get involved by putting their name down with the Primary Care Research Network covering their area. Researchers interested in running a randomised trial in GP patients then go to the Research Network, and find GPs to work with.

  This system represents a kind of ‘dating service’ for practitioners and researchers. Creating similar networks in education would help join up the enthusiasm that many teachers have for research that improves practice with researchers, who can sometimes struggle to find schools willing to participate in good-quality research. This kind of two-way exchange between researchers and teachers also helps the teacher-researchers of the future to learn more about the nuts and bolts of running a trial; and it helps to keep researchers out of their ivory towers, focusing more on what matters most to teachers.

  In the background, for academics, there is much more to be said on details. We need, I think, academic funders who listen to teachers, and focus on commissioning research that helps us learn what works best to improve outcomes. We need academics with quantitative research skills from outside traditional academic education departments – economists, demographers, and more – to come in and share their skills more often, in a multidisciplinary fashion. We need more expert collaboration with Clinical Trials Units, to ensure that common pitfalls in randomised trial design are avoided; we may also need – eventually – Education Trials Units, helping to support good-quality research throughout the country.

  But just as this issue stretches way beyond a few individual research projects, it also goes way beyond anything that one single player can achieve. We are describing the creation of a whole ecosystem from nothing. Whether or not it happens depends on individual teachers, researchers, heads, politicians, pupils, parents, and more. It will take mischievous leaders, unafraid to question orthodoxies by producing good-quality evidence; and it will need to land with a community that – at the very least – doesn’t misunderstand evidence-based practice, or reject randomised trials out of hand.

  If this all sounds like a lot of work, then it should do: it will take a long time. But the gains are huge, and not just in terms of better evidence, and better outcomes for pupils. Right now, there is a wave of enthusiasm for good-quality evidence, passing through all corners of government. This is the time to act. Teachers have the opportunity, I believe, to become an evidence-based profession, in just one generation: embedding research into everyday practice; making informed decisions independently; and fighting off the odd spectacle of governments telling teachers how to teach, because teachers can use the good-quality evidence that they have helped to create, to make their own informed judgements.

  There is also a roadmap. While evidence-based medicine seems like an obvious idea today – and we would be horrified to hear of doctors using treatments without gathering and using evidence on which works best – in reality these battles were only won in very recent decades. Many eminent doctors fought viciously, as recently as the 1970s, against the very idea of evidence-based medicine, seeing it as a challenge to their expertise. The case for change was made by optimistic young practitioners like Archie Cochrane, who saw that good evidence on what works best was worth fighting for.

  Now we recognise that being a good doctor, or teacher, or manager, isn’t about robotically following the numerical output of randomised trials; nor is it about ignoring the evidence, and following your hunches and personal experiences instead. We do best by using the right combination of skills to get the best job done.

  DRUGS

  A Rock of Crack as Big as the Ritz

  Guardian, 21 February 2009

  In a week where our dear Daily Mail ran with the headline ‘How Using Facebook Could Raise your Risk of Cancer’, I will exercise some self-control, and write about drugs instead.

  ‘Seven hundred British troops seized four Taliban narcotics factories containing £50m of drugs,’ said the Guardian on Wednesday. ‘Troops recovered more than 400kg of raw opium in one drug factory and nearly 800kg of heroin in another.’ That is good. In the Telegraph, British forces had seized ‘£50 million of heroin and killed at least twenty Taliban fighters in a daring raid that dealt a significant blow to the insurgents in Afghanistan’. Everyone carried the good news. ‘John Hutton, Defence Secretary, said the seizure of £50m of narcotics would “starve the Taliban of funding, preventing the proliferation of drugs and terror in the UK”.’

  Well.

  First up, almost every paper – the people we pay to précis facts for mass consumption – got both the quantities and the substances wrong. From the MoD press release (a romping read), three batches of opium were captured, but no heroin:

  ‘over 60kg of wet opium’, ‘over 400kg of raw opium’ and ‘the largest find of opium on the operation, nearly 800kg’.

  So the army captured 1,260kg of opium. Opium is not heroin; it takes about 10kg of opium to make 1kg of heroin. They also found some chemicals and vats. The opium was enough to make roughly 130kg of heroin.

  How much was this haul worth to the Taliban, and exactly how much of a blow will it strike? Heroin is not very valuable in itself, because opium is easy to grow, and you can turn it into heroin over the course of three simple steps using some school science-class chemicals in your kitchen (or a barn in rural Afghanistan). Heroin becomes expensive because it is illegal, so people must take risks to produce and distribute it, and as a result, they want money.

  The ‘farm gate’ price of 1kg of opium in Afghanistan is $100 at best (we’ll use dollars, since the best figures are from the UN Drugs Control Programme 2008 world report). So the 1,260kg of opium captured on this raid in Afghanistan is worth somewhere near $126,000 (not £50 million).

  Even if it had been converted to heroin – it wasn’t – the money doesn’t get much better. The price for 1kg of heroin in Afghanistan is not much higher than the price for the 10kg of opium you need to make it. That’s because heroin was invented over a hundred years ago, and making it, as I said, is cheap and easy. We could be generous and say that heroin is worth $2,000 per kilo in Afghanistan: this would make the army’s (potential) 130kg of heroin worth about $260,000.

  That’s still not £50 million. So where did this enormous number come from? Perhaps everyone was trying to calculate it by using the wholesale price in the UK, assuming that the Taliban ran the entire operation from ‘farm gate’ to ‘warehouse in Essex’. This is a stretch of our generosity, but we can still run the numbers: the wholesale price of heroin in the UK has fallen dramatically over the past two decades, from $54,000 per kilo in 1990 to $28,000 today. That would make our 130kg of (potential) heroin worth $3.6 million.

  We’re still nowhere near £50 million. So: maybe the army thinks that every sweaty kid with missing teeth in King’s Cross selling £10 bags is actually a Taliban agent, passing profits on – in full, no cream off the top – to Taliban HQ, several thousand miles away in Afghanistan. Even then, UK heroin is $71 per gram at retail prices (down from $157 a gram in 1990), so the value of our 130kg is $9 million. We can be generous, and say our street heroin is only 30 per cent pure (it’s usually much better): in this case, finally, the haul is worth $30 million on the streets, or £20 million, at absolute best. To do this, we had to assume that every penny of the street-level UK retail price, at the smallest unit of sale, went straight to the Taliban, and that’s still not £50 million.
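  For anyone who wants to check the arithmetic, here is a minimal sketch in Python – my own, not from the original article – that chains together the figures quoted above. The prices are the ones attributed in the text to the UN Drug Control Programme’s 2008 world report; the variable names and rounding choices are mine.

```python
# A sketch of the back-of-envelope arithmetic above. All prices are the
# figures quoted in the text (attributed to the UN Drug Control Programme's
# 2008 world report); names and rounding choices are my own.

OPIUM_SEIZED_KG = 60 + 400 + 800   # the three MoD finds: 1,260kg in total
KG_OPIUM_PER_KG_HEROIN = 10        # roughly 10kg of opium makes 1kg of heroin
HEROIN_KG = 130                    # 1,260 / 10 = 126kg, rounded up as in the text

valuations = {
    "farm gate (opium, $100/kg)":  OPIUM_SEIZED_KG * 100,
    "Afghan heroin ($2,000/kg)":   HEROIN_KG * 2_000,
    "UK wholesale ($28,000/kg)":   HEROIN_KG * 28_000,
    "UK retail, pure ($71/g)":     HEROIN_KG * 1_000 * 71,
    # at 30% street purity, 130kg of pure heroin fills over three times as many bags
    "UK retail, 30% purity":       HEROIN_KG * 1_000 * 71 / 0.30,
}

for label, dollars in valuations.items():
    print(f"{label:30s} ${dollars:>12,.0f}")

# Even the most generous line (~$30m, roughly £20m at 2009 exchange rates)
# falls well short of the £50m that was reported.
```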

 
