by Ben Goldacre
Once you’ve got something that works in a dish, you give it to an animal. At this point you’re measuring lots of different things. How much of the drug do you find in the blood after the animal eats the pill? If the answer is ‘very little’, your patients will need to eat giant horse pills to get an active dose, and that isn’t practical. How long does the drug stay in the blood before it’s broken down in the body? If the answer is ‘one hour’, your patients will need to take a pill twenty-four times a day, and that’s not useful either. You might look at what your drug molecule gets turned into when it’s broken down in the body, and worry about whether any of those breakdown products are harmful themselves.
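To make that dosing arithmetic concrete, here is a minimal sketch in Python, using entirely made-up numbers (the 100mg dose and 50mg threshold are illustrative assumptions, not real pharmacology), of why a short-lived drug forces absurdly frequent pills:

```python
import math

def hours_above_threshold(dose_mg, threshold_mg, half_life_h):
    """How long a single dose stays above a useful level, assuming
    simple first-order (exponential) elimination: the amount in the
    blood halves once every half-life."""
    # Solve dose * 0.5 ** (t / half_life) = threshold for t.
    return half_life_h * math.log2(dose_mg / threshold_mg)

# Made-up numbers: a 100mg pill that must stay above 50mg to work.
print(hours_above_threshold(100, 50, half_life_h=1))   # 1.0  -> a pill every hour
print(hours_above_threshold(100, 50, half_life_h=12))  # 12.0 -> twice a day
```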
At the same time you’ll be looking at toxicology, especially very serious things that would rule a drug out completely. You want to find out if your drug causes cancer, for example, fairly early on in the development process, so you can abandon it. That said, you might be OK if it’s a drug that people only take for a few days; by the same token, if it harms the reproductive system but is a drug for – let’s say – Alzheimer’s, you might be less worried (I only said less worried: old people do have sex). There are lots of standard methods at this stage. For example, it can take several years to find out if your drug has given living animals cancer, so even though you need to do this for regulatory approval, you’ll also do early tests in a dish. One example is the Ames test, which lets you see very quickly whether a drug causes mutations in bacteria, by looking at what kinds of food they need to survive in a dish.
It’s worth noting at this point that almost all drugs with desirable effects will also have unintended toxic effects at some higher dose. That’s a fact of life. We’re very complicated animals, but we only have about 20,000 genes, so lots of the building blocks of the human body are used several times over, meaning that something which interferes with one target in the body might also affect another, to a greater or lesser extent, at a higher dose.
So, you’ll need to do animal and lab studies to see if your drug interferes with other things, like the electrical conductivity of the heart, that won’t make it popular with humans; screening tests to see if it has any effect on common drug receptors, rodents’ kidneys, rodents’ lungs, dogs’ hearts, dogs’ behaviour; and a range of blood tests. You’ll look at the breakdown products of the drug in animal and human cells, and if they give very different results you might try testing it in another species instead.
Then you’ll give it in increasing doses to animals, until they are dead, or experiencing very obvious toxic effects. From this you’ll find out the maximum tolerable dose in various different species (generally a rat or other rodent, and a non-rodent, usually a dog), and also get a better feel for the effects at doses below the lethal ones. I’m sorry if this paragraph seems brutal to you. It’s my view – broadly speaking, as long as suffering is minimised – that it’s OK to test whether drugs are safe or not on animals. You might disagree, or you might agree, but prefer not to think about it.
If your patients are going to take the drug long-term, you’ll be particularly interested in effects that emerge when animals have been taking it for a while, so you’ll generally dose animals for at least a month. This is important, because when you come to give your drug to humans for the first time, you can’t give it to them for longer than you’ve given it to animals.
If you’re very unlucky, there’ll be a side effect that animals don’t get, but humans do. These aren’t hugely common, but they do happen: practolol was a beta-blocker drug, very useful for various heart problems, and the molecule looks almost exactly the same as propranolol (which is widely used and pretty safe). But out of the blue, practolol turned out to cause something called multi-system oculomucocutaneous syndrome, which is horrific. That’s why we need good data on all drugs, to catch this kind of thing early.
As you can imagine, this is all very time-consuming and expensive, and you can’t even be sure you’ve got a safe, effective drug once you’ve got this far, because you haven’t given it to a single human yet. Given the improbability of it all, I find it miraculous that any drug works, and even more miraculous that we developed safe drugs in the era before all this work was required, or even technically possible.
Early trials
So now you come to the nerve-racking moment where you give your drug to a human for the first time. Usually you will have a group of healthy volunteers, maybe a dozen, and they will take the drug at escalating doses, in a medical setting, while you measure things like heart function, how much drug there is in the blood, and so on.
Generally you want to give the drug at less than a tenth of the ‘no adverse effects’ dose in the animals that were most sensitive to it. If your volunteers are OK at a single dose, you’ll double it, and then move up the doses. You’re hoping at this stage that your drug only causes adverse effects at a higher dose, if at all, and certainly at a much higher dose than the one at which it does something useful to the expected target in the body (you’ll have an idea of the effective dose from your animal studies). Of all the drugs that make it as far as these phase 1 trials, only 20 per cent go on to be approved and marketed.
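A rough sketch of that escalation logic, with hypothetical numbers (the NOAEL figure, the safety factor and the tidy doubling rule are all illustrative assumptions; real protocols are set drug by drug and reviewed by regulators):

```python
def escalation_schedule(noael_mg_per_kg, safety_factor=10):
    """Illustrative first-in-human dose escalation: start below a
    tenth of the 'no adverse effects' (NOAEL) dose from the most
    sensitive animal species, then double while volunteers stay well.
    In reality escalation stops at the first sign of trouble, not at
    a tidy arithmetic ceiling."""
    dose = noael_mg_per_kg / safety_factor
    schedule = []
    while dose <= noael_mg_per_kg:  # illustrative upper bound only
        schedule.append(dose)
        dose *= 2  # double only if the previous dose was tolerated
    return schedule

# e.g. a NOAEL of 50 mg/kg in the most sensitive species:
print(escalation_schedule(50))  # [5.0, 10.0, 20.0, 40.0]
```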
Sometimes – mercifully rarely – terrible things happen at this stage. You will remember the TGN1412 story, where a group of volunteers were exposed to a very new kind of treatment which interfered with signalling pathways in the immune system, and ended up in intensive care with their fingers and toes rotting off. This is a good illustration of why you shouldn’t give a treatment simultaneously to several volunteers if it’s very unpredictable and from an entirely new class.
Most new drugs are much more conventional molecules, and generally the only unpleasantness they cause is nausea, dizziness, headache and so on. You might also want a few of your test subjects to have a dummy pill with no medicine in it, so you can try to determine if these effects are actually from the drug, or are just a product of dread.
At this moment you might be thinking: what kind of reckless maniac gives their only body over for an experiment like this? I’m inclined to agree. There is, of course, a long and noble tradition of self-experimentation in science (at the trivial end, I have a friend who got annoyed with feeding his mosquitoes the complicated way, and started sticking his arm in the enclosure, rearing a PhD’s worth of mosquitoes on his own blood). But the risks might feel more transparent if it’s your own experiment. Are the subjects in first-in-man trials reassured by blind faith in science, and regulations?
Until the 1980s, in the US, these studies were often done on prisoners. You could argue that since then such outright coercion has been softened, rather than fully overturned. Today, being a guinea pig in a clinical trial is a source of easy money for healthy young people with few better options: sometimes students, sometimes unemployed people, and sometimes much worse. There’s an ongoing ethical discussion around whether such people can give meaningful consent, when they are in serious need of money, and faced with serious financial inducements.2 This creates a tension: payments to subjects are supposed to be low, to reduce any ‘undue inducement’ to risky or degrading experiences, which feels like a good safety mechanism in principle; but given the reality of how many phase 1 subjects live, I’d quite like them to be paid fairly well. In 1996 Eli Lilly was found recruiting homeless alcoholics from a local shelter.3 Lilly’s director of clinical pharmacology said: ‘These individuals want to help society.’
That’s an extreme case. But even at best, volunteers come from less well-off groups in society, and this creates a situation where the drugs taken by all of us are tested – to be blunt – on the poor. In the US, this means people without medical insurance, and that raises another interesting issue: the Declaration of Helsinki, the ethics code which frames most modern medical activity, says that research is justified if the population from whom participants are drawn would benefit from the results. The thought behind this is that a new AIDS drug shouldn’t be tested on people in Africa, for example, who could never afford to buy it. But uninsured unemployed people in the US do not have access to expensive medical treatments either, so it’s not clear that they could benefit from this research. On top of that, most agencies don’t offer free treatment to injured subjects, and none give them compensation for suffering or lost wages.
This is a strange underworld that has been brought to light for the academic community by Carl Elliott, an ethicist, and Robert Abadie, an anthropologist who lived among phase 1 participants for his PhD.4 The industry refers to these participants by the oxymoron ‘paid volunteers’, and there is a universal pretence that they are not paid for their work, but merely reimbursed for their time and travel expenses. The participants themselves are under no such illusions.
Payment is often around $200 to $400 a day, studies can last for weeks or more, and participants will often do several studies in a year. Money is central to the process, and payment is often back-loaded, so you only receive full payment if you complete the study, unless you can prove your withdrawal was due to serious side effects. Participants generally have few economic alternatives, especially in the US, and are frequently presented with lengthy and impenetrable consent forms, which are hard to navigate and understand.
You can earn better than the minimum wage if you ‘guinea pig’ full-time, and many do: in fact, for many of them it’s a job, but it’s not regulated as any other job might be. This is perhaps because we feel uncomfortable regarding this source of income as a profession; and because it goes unregulated, new problems arise. Participants are reluctant to complain about poor conditions, because they don’t want to miss out on future studies, and they don’t go to lawyers for the same reason. They may be disinclined to walk away from studies that are unpleasant or painful, too, for fear of sacrificing income. One participant describes this as ‘a mild torture economy’: ‘You’re not being paid to do a job…you’re being paid to endure.’
If you really want to rummage in this underworld, I recommend a small photocopied magazine called Guinea Pig Zero. For anyone who likes to think of medical research as a white-coated exercise, with crisp protocols, carried out in clean glass-and-metal buildings, this is a rude awakening.
The drugs are hitting the boys harder than the girls. The ephedrine is not so bad, it’s like…over the counter speed. Then they increased our dosage and things got funky. This is when the gents took to the mattresses…We women figured we had more endurance…No. 2 was feeling so bad that he hid the pills under his lounge during the dosing procedure. The coordinator even checked his mouth and he still got away with it…this made No. 2 twice as sick after the next dosing – he couldn’t fake it for the rest of the study.5
Guinea Pig Zero published investigations into deaths during phase 1 trials, advice for participants, and long, thoughtful discursions on the history of guinea-pigging (or, as the subjects themselves call it, ‘our bleeding, pissing work’). Illustrations show rodents on their backs with thermometers in their anuses, or cheerfully offering up their bellies to scalpels. This wasn’t just idle carping, or advice on how to beat the system. The volunteers developed ‘research unit report cards’ and discussed unionising: ‘The need exists for a set of standard expectations to be set down in an independently controlled, guinea-pig based forum so we volunteers can rein in the sloppy units in a way that doesn’t bring ourselves harm.’
These report cards were informative, heartfelt and entertaining, but as you might expect, they were not welcomed by the industry. When three of them were picked up by Harper’s magazine, it resulted in libel threats and apologies. Similarly, following a Bloomberg news story from 2005 – in which more than a dozen doctors, government officials and scientists said the industry failed to adequately protect participants – three illegal immigrants from Latin America said they were threatened with deportation by the clinic they had raised concerns about.
We cannot rely solely on altruism to populate these studies, of course. And even where altruism has provided, historically, it has been in extreme or odd circumstances. Before prisoners, for example, drugs were tested on conscientious objectors, who also wore lice-infested underpants in order to infect themselves with typhus, and participated in ‘the Great Starvation Experiment’ to help Allied doctors understand how we should deal with malnourished concentration camp victims (some of the starvation subjects committed acts of violent self-mutilation).6
The question is not only whether we feel comfortable with the incentives and the regulation, but also whether this information is all new to us, or simply brushed under the carpet. You might imagine that research all takes place in universities, and twenty years ago you’d have been correct. But recently, and very rapidly, almost all research activity has been outsourced, often far away from universities, into small private clinical research organisations, which sub-contract for drug companies, and run their trials all around the world. These organisations are atomised and diffuse, but they are still being monitored by frameworks devised to cope with the ethical and procedural problems arising in large institutional studies, rather than small businesses. In the US, in particular, you can shop around for Institutional Review Board approval, so if one ethics committee turns you down, you simply go to another.
This is an interesting corner of medicine, and phase 2 and 3 trials are being outsourced too. First, we need to understand what those are.
Phase 2 and 3
So, you’ve established that your drug is broadly safe, in a few healthy people referred to by popular convention as ‘volunteers’. Now you want to give it to patients who have the disease you’re aiming to treat, so you can try to understand whether it works or not.
This is done in ‘phase 2’ and ‘phase 3’ clinical trials, before a drug comes to market. The line between phase 2 and 3 is flexible, but broadly speaking, in phase 2 you give your drug to a couple of hundred patients, and try to gather information on short-term outcomes, side effects and dosage. This will be the first time you get to see if your blood-pressure drug does actually lower blood pressure in people who have high blood pressure, and it might also be the first time you learn about very common side effects.
In phase 3 studies you give your drug to a larger group of patients, usually somewhere between 300 and 2,000, again learning about outcomes, side effects and dosage. Crucially, all phase 3 trials will be randomised controlled trials, comparing your new treatment against something else. (All of these pre-marketing trials, you will notice, are in fairly small numbers of people, which means that rarer side effects are very unlikely to be picked up. I’ll come back to this later.)
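The arithmetic behind that caveat is worth a moment. A sketch, assuming (purely for illustration) a side effect that strikes one patient in 10,000, and that cases occur independently:

```python
def chance_of_seeing_at_least_one(risk_per_patient, n_patients):
    """Probability that a trial of n patients observes at least one
    case of a side effect with the given per-patient risk."""
    return 1 - (1 - risk_per_patient) ** n_patients

# A side effect that strikes 1 patient in 10,000:
for n in (300, 2_000, 100_000):
    print(n, round(chance_of_seeing_at_least_one(1 / 10_000, n), 3))
# 300 patients: ~3% chance of even one case; 2,000 patients: ~18%;
# only at post-marketing scale (100,000) is seeing a case near-certain.
```

So a phase 3 trial of 2,000 patients will, more often than not, see no cases at all of a one-in-10,000 side effect.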
Here again, you may be wondering: who are these patients, and where do they come from? It’s clear that trial participants are not representative of all patients, for a number of different reasons. Firstly, we need to consider what drives someone to participate in a trial. It would be nice to imagine that we all recognise the public value of research, and it would be nice to imagine that all research had public value. Unfortunately, many trials are conducted on drugs that are simply copies of other companies’ products, and are therefore an innovation designed merely to make money for a drugs company, rather than a significant leap forward for patients. It’s hard for participants to work out whether a trial they’ve been offered really does represent a meaningful clinical question, so to an extent we can understand people’s reluctance to take part. But in any case, wealthy patients from the developed world have become more reluctant to participate in trials across the board, and this raises interesting issues, both ethical and practical.
In the US, where many millions of people are unable to pay for health care, clinical trials are often marketed as a way to access free doctors’ appointments, scans, blood tests and treatment. One study compared insurance status in people who agreed to participate in a clinical trial with those who declined;7 participants are a diverse population, but still, those agreeing to be in a trial were seven times more likely to have no health insurance. Another study looked at strategies to improve targeted recruitment among Latinos, a group with lower wages, and poorer health care, than the average:8 96 per cent agreed to participate, a rate far higher than would normally be expected.
These findings echo what we saw in phase 1 trials, where only the very poor were offering themselves for research. They also raise the same ethical question: trial participants are supposed to come from the population of people who could realistically benefit from the answers provided by that trial. If participants are the uninsured, and the drugs are only available to the insured, then that is clearly not the case.
But selective recruitment of poor people for trials in the USA is trivial compared to another new development, about which many patients – but also many doctors and academics – are entirely ignorant. Drug trials are increasingly outsourced around the world, to be conducted in countries with inferior regulation, inferior medical care, different medical problems and – in some cases – completely different populations.
‘CROs’ and trials around the world
Clinical research organisations are a very new phenomenon. Thirty years ago, hardly any existed: now there are hundreds, with a global revenue of $20 billion in 2010, representing about a third of all pharma R&D spending.9 They conduct the majority of clinical trials research on behalf of industry, and in 2008 CROs ran more than 9,000 trials, with over two million participants, in 115 countries around the world.
This commercialisation of trials research raises several new concerns. Firstly, as we have already seen, companies often bring pressure to bear on academics they are funding, discouraging them from publishing unflattering results and encouraging them to put spin on both the methods and the conclusions of their work. When academics have stood up to those pressures, the threats have turned into grim realities. What employee or chief executive of a CRO is likely to stand up to a company which is directly paying the bills, when the staff all know that the CRO’s hope for future business rides on how it manages each demanding client?