Pharmageddon
The editorial staffs of leading journals such as the British Medical Journal (BMJ), the New England Journal of Medicine (NEJM), and others regularly attend the meetings of medical writers. They encourage writers to contact them early in a manuscript's development so that the extent of mutual interest can be gauged. If I am a ghost, I can contact the editorial desk and find out whether the topic on which I'm writing would be of interest—if the journal already has several articles on the subject in press, for instance, the editors may not want more. But if they indicate interest, I am halfway to having the article accepted, as it is more likely to be sent out to sympathetic reviewers if this is an article the editor wants.
It is not so surprising that journals have drifted into a position of collaboration with industry. They have vested interests in getting an option on publication of the latest studies. Today, reports of randomized controlled trials stand at the top of the evidence hierarchy and the leading journals realize that their own influence and the perception of how seriously their journals are taken depend on getting access to these trials. Given that industry runs the great bulk of such trials and medical writers produce articles that tick all the quality boxes, avoid the excesses of the marketers, and turn around a product in a timely fashion, with a paper trail that makes for accountability, what's not in it for journal editors to cooperate?
When the matter of ghostwriting comes up, the medical journals seem mostly concerned with whether or not the role of the ghost should be acknowledged. Article writers, journal editors, and pharmaceutical companies all believe that prescribing doctors would find the articles less persuasive if they knew they were ghostwritten—that this would convey the message that these were commercial product placements rather than scientific articles. The implication is that if the authorship issues were managed in a better way, the system could be salvaged, ignoring the larger issues of scientific and medical integrity.
Faced, for instance, with evidence that Merck had employed Scientific Therapeutics Information (STI) to write up a series of studies on the drug Vioxx and that critical data on the hazards of Vioxx had been concealed,13 Catherine DeAngelis, the editor of the Journal of the American Medical Association (JAMA), deplored the deception in a 2008 editorial.14 But she did not deplore the ghostwriting per se. She was in fact concerned for the medical writers who had actually authored the piece but had not been included on the authorship line and so had not received due acknowledgement.
The next slip down the slope from the presentation of science by scientists to our current world of scientific appearances has been from ghostwriting into ghost-presenting. There is an increasing chance that the named authors on articles in journals such as JAMA, the New England Journal of Medicine, or the Lancet will know relatively little about the study they have apparently authored. Having written the article, the medical writer will often be the person best placed to answer questions arising from any publication. As a result it is not uncommon in major meetings to find poster presentations of a study's results tended by attractive, confident women whom the passing doctors will assume are postdoctoral researchers linked to the study. Far better to have this kind of arrangement than some academic stumbling unconvincingly through the study design and results, drifting perhaps off message.
It is difficult to think of any other domain of professional life where this would be possible—except in the entertainment industry. A lawyer would not be able to bring in someone who “looked good” to sway a jury. If asked thirty years ago which professions could be body-snatched in this way, academic medicine would have seemed an unlikely candidate.
Until relatively recently in the biomedical sciences the most distinguished scientists in the field, Nobelists or aspiring Nobelists, might have four to five hundred articles on their curriculum vitae by the time they reached their sixties. The great men would often joke that if they hadn't gotten a Nobel Prize for science, they would have been in line for one for fiction. Many of today's ghostwriters in their thirties or forties will have more articles published in major journals than any of these Nobelists. But the most interesting figures are the academics, still in their forties, whom the pharmaceutical industry has made into opinion leaders and who may as a result appear as an author on eight hundred to a thousand articles. Sometimes the marketing copy slips, and such an academic figure may be described as someone whose views you can trust because they have over eight hundred articles to their name.
When it comes to choosing names to go on a paper's authorship line, the marketing department of the parent company has the key input. Companies will pay some heed to the contribution an individual may have made to study design, execution, or review of the manuscript, but they put greatest store on the profile of the academics and their ability to serve as spokespersons for the study among their peers.
When choosing a journal, companies review their portfolio of articles and decide on the mix they need for marketing purposes—some for the New England Journal of Medicine, some for the Lancet, and some for more specialized outlets. Factors such as the journal editor and the likely speed of acceptance and publication of the article make a difference too. In the case of Vioxx, Merck was interested in JAMA, for example, because it offers a fast-track option for “important” papers.
How many articles are now ghostwritten? In the late 1990s a document developed by Current Medical Directions, a medical writing company that was at the time coordinating a portfolio of relentlessly positive articles on Zoloft for Pfizer, surfaced that helped answer this question. Along with colleagues, I had a chance to analyze the papers being written and this made it clear that even by the late 1990s well over 50 percent of all articles on a drug such as Zoloft were likely to have been written by medical writers and over 90 percent of those appeared in major journals.
Of the eighty-five articles in the portfolio we tracked fifty-five to publication. All were devoted, it seemed, to securing marketing niches. There were articles, for example, on the virtues of Zoloft for anxiety, for panic disorder, for depression, for dysthymia, for obsessive-compulsive disorder, for children, for the elderly, and for women, along with articles on how Zoloft's metabolic profile was better than those of competitor SSRIs.15 None of the published articles shed light on what SSRIs such as Zoloft actually do or what their hazards might be.
Two articles in particular bring out what is involved. These reported clinical trial results of Zoloft in the treatment of post-traumatic stress disorder (PTSD). At the time the portfolio was assembled these two articles were listed as written but their authors were “TBD” (to be determined). Despite this they were scheduled to appear in JAMA and the New England Journal of Medicine. (They ultimately appeared in JAMA and Archives of General Psychiatry.)
“PTSD” as a defining label only came into being in 1980, and many experts were and are still skeptical that there is any such thing, arguing instead that the patients concerned are either anxious or depressed. In any case, in the late 1990s no company had a treatment that they were entitled to claim could benefit PTSD and so all companies were scrambling to be the first with a license. The two trials on which the two articles were based were part of Pfizer's effort. If Zoloft got on the market for PTSD, then publishing studies on PTSD would become a means to sell Zoloft. At the same time, selling Zoloft would establish some legitimacy for PTSD—as doctors and patients are likely to think a condition has to be real if a drug makes a difference to it, whether PTSD, female sexual dysfunction (FSD), compulsive shopping disorder, or adult ADHD. And of course, there is something real there—people.
In fact Pfizer had conducted four controlled trials for treatment of PTSD with Zoloft. In all four the drug had proved ineffective for men, a large proportion of whom had a clear history of having experienced a traumatic event through wartime exposure. In the two studies listed in the portfolio, some women, a much smaller proportion of whom had clear-cut evidence of having experienced a traumatic event, showed sufficient response for the company to steer the drug past the FDA and get it licensed.
Some years later, in 2007, there was widespread publicity about soaring suicide rates among traumatized US soldiers returning from the Second Gulf War or on active service in Iraq.16 Many of these will have been treated according to the best guidelines—with Zoloft or other SSRIs, even though the evidence these drugs work for men is almost nonexistent and there had been compelling evidence for some years that Zoloft and other SSRIs might trigger suicide.
A CUCKOO'S EGG IN THE NEST OF SCIENCE
In response to revelations about ghostwriting, medical journals began to put safeguards in place. The safeguards for the most part were a matter of ticking boxes by authors to say they had been involved in the study in some capacity. Safeguards of this kind tend to play into the hands of ghosts, who are more likely to adhere to the wording of the latest best publishing-practice guidelines than any academic.
The medical writers generally manage to see and portray themselves as adhering to high ethical standards, but both they and journal editors miss how they are being used by pharmaceutical companies and what effects such medical writing has on medicine. What is at stake was brought out best in the case of Study 329 and Study 377, two clinical trials of the effects of GlaxoSmithKline's antidepressant Paxil in children that were conducted in the early 1990s and were designed to get a license for the drug to treat depressed children.
In these two studies Paxil failed to produce a clear benefit in depressed children compared to placebo, and the children on the drug, it was later revealed, became suicidal at over triple the rate of those on a comparison antidepressant (imipramine) or on placebo. Faced with these findings, an internal SmithKline memorandum from 1998 shows that company personnel decided that they could not show the data to the regulator and that their best strategy was to publish what they saw as the good bits of Study 329. Sally Laden of STI was given the task.17
The resulting manuscript portrayed Paxil in a very favorable light. As James McCafferty, Laden's SmithKline contact, put it in a July 19, 1999 email, “It seems incongruous that we state that paroxetine [Paxil] is safe yet report so many SAEs [serious adverse events]. I know the investigators have not raised an issue, but I fear that the editors will. I am still not sure how to describe these events. I will again review all the SAEs to make myself feel comfortable about what we report in print.”18
The study had been multicentered, and only SmithKline had access to all the data. Thus, if a center's investigators had seen a problem or two at their location, they were not to know that the pattern was repeating at each of the other centers. Like Sheffield's Richard Eastell in the Actonel case, when these investigators thought they were seeing the data, what they actually saw were summary tables on side effects. These tables did show an increase in emotional lability on Paxil, but few if any investigators would have known that this meant suicidality. I, for instance, was an investigator on Paxil clinical trials at this time and did not know what problems lay beneath the rock of emotional lability.
If the authors who put their names on the resulting paper saw no problem with it, a peer reviewer would be even less likely to spot problems when it went out to review, as the reviewers were one step further removed from the data. Similarly, when the study was presented to audiences of several hundred at academic meetings, who, if anyone, was likely to be in a position to spot the problem? For anyone in these audiences who might have been suspicious, speaking up would have meant voicing doubts about a study whose authors were particularly eminent and whose article had been peer-reviewed by one of the most prominent journals in the field—not an easy thing to do.
At one center in this study, linked to Brown University in Rhode Island, we now know that some of the children who deteriorated on treatment and dropped out of the study were coded as noncompliant rather than as emotionally labile or suicidal.19 It only takes a few misjudgments in coding like this, possibly only one per center, for the meaning of a trial to change quite dramatically. Laden did not have the records of the children coded as noncompliant—the real data. But even if she had, who was she to gainsay the judgment of a senior clinical investigator?
The article Laden wrote appeared in 2001 in the Journal of the American Academy of Child and Adolescent Psychiatry (JAACAP), the most influential journal in child psychiatry. It claimed Paxil was safe and effective for children.20 Martin Keller of Brown University was listed first in the list of twenty-two authors, among whom were some of the most distinguished names in American pediatric psychopharmacology. Laden's name did not feature on the paper.
In addition to illustrating the industrialization of clinical trials that happened when CROs took over their management, and the data-control and authorship tactics drug companies employ for studies like 329, this example shows how the publication of clinical trials today produces an entirely different kind of scientific article than the articles that came out of studies run by academics in the 1950s, 1960s, and 1970s. It would once have been inconceivable, and is still almost inconceivable, that academics in possession of a full set of data would choose to publish only the good bits of the data on a clinical issue this serious—if only because they knew that others could request to see all the data, rather than because academics were once more ethical than they are now.
It would once have been inconceivable for academics to hire public relations companies to promote their research. But public relations companies are now hired by pharmaceutical companies to ensure that both physicians and the wider public hear about company publications that support their drugs. In the case of Study 329, though GlaxoSmithKline could not legally have sales representatives talk to doctors about using Paxil for childhood depression, they could expect that a prominently placed article suggesting such use, backed by well-known academics, would lead many doctors on their own to prescribe the drug "off-label."
Off-label use like this, which is typically based on some academic article, may account for up to half of all medical prescriptions—and more for children. Successful media coverage of a study may generate so many off-label sales that applying to the FDA for a license for some new indication of an already approved drug may not be needed. Even though company representatives cannot sell the drug directly for childhood depression, they can hand out academic articles, along with marketing copy for those articles. In the case of Study 329, the marketing copy the PR firm Cohn and Wolf developed claimed that "[in this] 'cutting-edge' landmark study…Paxil demonstrates REMARKABLE Efficacy and Safety in the treatment of adolescent depression."21 The "remarkable" thing about Study 329 is how far we appear to have traveled from a negative set of data to a sparkling publication diamond whose flaws few doctors would be able to spot.
Much of the Paxil story we might not know but for a modern reworking of the Emperor's new clothes in the form of a documentary television series. Starting in 2002 the BBC ran a series of Panorama programs on Paxil that eventually brought the full details of Study 329 and other trials of antidepressants done on children to light, showing essentially that these drugs produced no benefits to warrant the risks being taken. A journalist with no medical training, Shelley Jofre, it turned out, could spot all the problems that the academic authors or readers of the article had failed to detect.
How did Jofre do it? Essentially the same way as many readers of this book might have done it. Because she was not in the field, the reputation of the journal meant nothing to her, and the distinction of the names on the authorship line escaped her. Because statistical significance was not part of her everyday world, she wasn't hypnotized into thinking that events that were not significant weren't happening. She didn't assume that emotional lability was some inconsequential change on treatment; she noticed that it was happening a lot more on Paxil and began to ask questions. The lack of sensible answers ultimately led to the discovery of company documents conceding that the data showed Paxil didn't in fact work.
Do ghostwriters lie? When asked this question straight up, they typically answer no. But they concede they have considerable skills in polishing a manuscript or inching it in the right direction. Data showing a drug barely beats placebo becomes evidence that the drug is effective. The writer can choose to let the world know about the side effects occurring in 1 percent, 5 percent, or 10 percent of subjects. Thus a paper might avoid mention of a serious side effect that occurred at a 9 percent rate without technically lying, simply by announcing that only events occurring at a rate of 10 percent or more would be reported.
For the most part ghosts are likely to miss their own bias, but the process by which trials are presented is bringing us closer and closer to what we expect from politicians rather than scientists. Sally Laden caught it best when faced with a SmithKline decision to abandon a manuscript on the withdrawal effects linked to Paxil—“there are some data that no amount of spin will fix.”22 For a writer who could transform Study 329 from a failed study showing Paxil should not be used in children into an advertisement advocating its use, this is a telling admission.
What about the academics who are at least the nominal investigators of these studies? The standard response from medical writers and pharmaceutical companies is that the academics have a chance to review everything that is done in their name. In fact, however, as the Current Medical Directions material outlined above revealed, articles are often, perhaps typically, close to completion before an academic gets to see any draft, and after that it is common for the paper to be submitted without the academics making any changes whatsoever.
In a sense, however, this focus on the darker arts of scientific spin misses the bigger point about lack of access to data. In the absence of the data, neither the medical writers involved nor the academics have the ability, or the incentive, to make sure the paper is a reasonable representation of what took place. If no one can access the underlying data, no one is likely to be able to challenge the paper as a fair representation of what happened in the trial.