After the birth, I sent her to a hospital that had a facility for new mothers and their babies. The psychiatric team that took over her care there, I learned, thought she had schizophrenia. She was put on regular antipsychotics but apparently was not making much progress and the baby was taken from her. Some months later, I heard she had been given weekend leave; one evening, having told her parents she was going out for a walk, she laid her neck on the track in the face of an oncoming express train.
Looking back at Cora's confusion, emotional lability, and switches between immobility and overactivity, I came to see that she had a textbook case of uncomplicated catatonia. Few readers of this book will know what catatonia is, as it has supposedly vanished, even though fifty years ago up to 15 percent of patients in asylums were estimated to suffer from it, and it was one of the most horrifying mental illnesses. While mental health professionals are aware catatonia is still listed in the Diagnostic and Statistical Manual, few would spot a case if faced with it.
If Cora had a rare condition that doctors do not now need to recognize, if she was the exception that proves the rule of medical progress, she would have been unfortunate. But in fact up to 10 percent of patients going through mental health units in the United States and indeed worldwide still have the features of catatonia—if these were looked for.1 Sometimes the only condition they have is catatonia; other times catatonic features complicate another disorder, and resolving the catatonia may make it easier to clear whatever other problem is present. But almost no one thinks of catatonia and so, like me, they miss the diagnosis. Cora was given antipsychotics, which are liable to make a catatonic syndrome worse. She died when a few days of consistent treatment with a benzodiazepine would almost certainly have restored her to normal, making her death scandalous rather than accidental.
The benzodiazepines are a group of drugs that are no longer on patent, and thus no company has any incentive to help doctors see what might be in front of their eyes when it comes to a disease like catatonia. Instead, company exhortations are to attend to diseases for which on-patent drugs are designed, even if this means diseases conjured out of thin air—disease mongering—such as fibromyalgia, to market on-patent medications such as Pfizer's Lyrica (pregabalin), or restless legs syndrome, a disorder conjured up as a target for GlaxoSmithKline's Requip (ropinirole).2
No one has any idea how many versions of Cora's story play out in daily clinical practice, the opportunity cost of disease mongering. These deaths are lost in the chatter about disorders that match up with on-patent drugs, invisible to doctors pleased with themselves for making a fashionable diagnosis like fibromyalgia and who, even in the face of treatment failure, will add ever more on-patent drugs to a patient's treatment regimen rather than go back to the drawing board and look more closely at the patient in front of them. Once upon a time the height of medical art lay in being able to go back and look at cases afresh and match the profile of symptoms against less fashionable or apparently uncommon disorders—no longer.3
But the outlines of an even more disturbing series of scandalous deaths have emerged from the pages of this book. Unlike catatonia, deaths in this case are from a disorder with no name. It is never recorded on death certificates as a cause of death even though we suspect that in hospital settings nearly two decades ago it was the fourth leading cause of death,4 and reports to regulators of deaths from this cause increase annually.5 It is almost certainly commoner now as drug prescriptions have escalated, and even commoner in community settings than in the hospital, yet in an evidence-based medicine and guideline-driven era, there is no evidence base or guideline for its management.
If we are to cure this disease, we clearly need to name it. This is a disorder that was formerly sidelined as iatrogenic, but if missing or corrupted data lie at the root of the problem, getting doctors to shoulder the blame—as iatrogenic suggests—no longer seems correct. A possible name is pharmakosis—a name that hints at some loss of insight.6
Here is where the interplay between cure and care, outlined in earlier chapters, comes into clearest focus. The critical test of medical care lies in how a doctor or medical system deals with the possibility that the latest treatment might be responsible for part of a patient's current problems—that the poison may actually be poisoning. Rather than care, for over a century we have had a default option to regard cures as an excellent form of care. But here is a disorder to cure, which offers no choice between cure and care—they are one and the same.
Thalidomide is the drug disaster that is classically seen as inaugurating our modern medical era, but retrospectively it looks more like a bookend for an older style of medicine than representative of the problems that drug treatments and pharmaceutical companies now pose. Most of us, whether doctors or patients, likely think we can link problems as obvious as those caused by thalidomide to treatments we might have taken. But we are not faced with such obvious problems any more. The difficulties in grappling with what happens when treatments go wrong now come instead from the mechanisms put in place to ensure thalidomide could not happen again—these include controlled trials, a prescription-only status for drug treatments, and efforts to restrict drug use to traditional medical diseases. The story of tolbutamide, whose development paralleled that of thalidomide, brings out far better the difficulties we now face.
WHEN TREATMENT GOES WRONG
The discovery of insulin in 1922 is one of the most celebrated breakthroughs in medicine. A disease that came with sweet-smelling and frequent urine had been recognized in antiquity and had been named diabetes. It was occasionally possible to manage the disorder, which we now know is caused by a lack of insulin leading to raised blood sugars, for a time in older people by restricting sugar intake, but sweet-smelling urine heralded the end for most people. In childhood, the disease was even more malignant; for juvenile-onset diabetes, the discovery of insulin was the difference between life and death.
After 1924, the availability of insulin meant that a growing number of people could survive for decades longer than had previously been possible. Still, as those treated aged, a series of complications of living with diabetes became clear. The blood vessels of the eye might deteriorate, leading to blindness. Damage to the blood vessels or nerves to the pelvis brought about impotence. The nerves or blood vessels to the legs might be compromised, leading to ulcers, possibly gangrene, and potentially amputation. The risk of heart attacks and strokes was increased.
It seemed reasonable at the time to think that these problems in part stemmed from the mismatch between naturally and artificially controlled blood sugars and in part from the fact that the insulin initially used was bovine or porcine rather than human insulin. But whatever the cause of such problems, people were alive who would not otherwise have been, and while they were alive the problems could be worked on. Perhaps improved preparations of insulin would reduce the risk of complications.
In an effort to improve the care of patients with diabetes, researchers renewed their focus on blood sugars and discovered that these vary substantially during the day in everyone, but especially in patients with diabetes. Might tight control of blood sugar variability improve outcomes? Controlling blood sugars requires gadgets to read sugar levels and a range of insulin preparations. It also requires close cooperation between the medical team delivering care and the patient with the disease.
The teamwork that grew up around monitoring the hazards of diabetes and its treatment during the 1940s and 1950s became a byword for good medical care. Giving injections of insulin sounds simple, but there is a technique to giving the treatment subcutaneously. You need to learn what the symptoms of an overdose might be—slightly too much insulin risks reducing blood sugars to the point where the taker becomes confused or slips into a coma. Strategies have to be worked out to manage this hazard. The effort to balance the risks that come from allowing blood sugar levels to ride at too high a level and the risks inherent in reducing them too aggressively has to be incorporated into lives that need to be lived. A workman doing heavy labor will have to juggle things in a different way than a ghostwriter working by computer from home. Some people need gadgets to check their blood sugar levels; others seem able to read their bodies.7
Getting this right requires teamwork. The nurses of the diabetes team learn from Mr. X how he manages to slot the need for blood testing and injections into his schedule, while Ms. Y tells them of how she manages. They hear about how people juggle social situations, from business meals to outdoor activities with friends. And they pass these lessons on to new patients who have to be helped to handle the ramifications of a diabetes diagnosis at the right pace for them. If medical care is key to how well patients use the technologies available, and therefore how well they do, advances in technology are key to inching forward in treatment possibilities. Teamwork, in turn, helps make the best possible use of those new technologies—such as the new pills that were discovered in the 1950s that could, in tandem with insulin, better manage blood sugars.
While insulin mobilized all the best in medicine, it did not provide a good basis for business. Lilly attempted to wrest the American patent for insulin away from its discoverers—the University of Toronto—but failed.8 But even had it succeeded, the commercial opportunities for an injection were limited, even though insulin was quickly used for all sorts of things other than just diabetes. The drug's appetite-increasing properties were put to use in rest cures aimed at “building people up” through a program of sleep and eating. In high doses insulin will induce a coma—the ultimate rest cure—and insulin-induced coma treatment was used to treat drug abuse and schizophrenia.9 Despite all these uses, a pill would be far better for business than any injection because a much greater number of people have raised blood sugars (prediabetes) than have frank diabetes, and this market would be easier to develop with a pill than with an injection.
The research that led to a pill that seemed promising stemmed from the first of modern medicine's magic bullets, the sulfanilamide antibiotics that were discovered in Germany just prior to the outbreak of World War II.10 Building on these discoveries, French investigators soon developed a range of sulfa drugs, among which was one that lowered blood sugars. Despite the devastation of war, it was in fact a German company, Boehringer-Ingelheim, that in 1956 came up with the first oral blood-sugar-lowering drug (a hypoglycemic drug), carbutamide, and soon thereafter another—tolbutamide.
The Michigan-based Upjohn company bought the US rights for tolbutamide and began marketing it as Orinase in 1957. It effectively lowered blood sugar, but it turned out to be close to useless for the management of juvenile-onset diabetes. It would not replace insulin, then, but it might have a place in the treatment of patients who developed diabetes in their middle to older years and who had been able to manage their condition by diet alone before eventually graduating to insulin.
The availability of tolbutamide led to distinctions between type 1, or insulin-dependent diabetes, and type 2, or noninsulin-dependent diabetes. Where before there had been little emphasis on seeking out information on blood-sugar elevations, as few people wanted to know about diabetes until they had to, there was now a new premium on detection. For people who had elevated levels of blood sugars but who were not overtly diabetic, dieting offered a means to help regulate their blood sugars, but this was hard work. Tolbutamide was an easier option. It even seemed possible that treating high blood sugars early might prevent people from developing diabetes. In addition, supplementing insulin with tolbutamide might minimize the requirement for insulin and offer better control of blood sugars, potentially reducing problems in the longer run.
In 1961 the National Institutes of Health (NIH) began a long-term study involving over a thousand patients in which Upjohn's tolbutamide was compared to insulin, placebo, and phentermine, an amphetamine derivative developed by Smith Kline & French that it was thought might help because it suppressed appetite and led to weight loss. In 1967, a problem came to light: more patients were dying on tolbutamide than on phentermine, insulin, or placebo. The result ran completely against expectation, as the trial protocol excluded anyone thought to be at risk of dying, and controlling blood sugars should have reduced the risk of complications.11
Had some of the treatment centers failed to adhere to the protocol, or was there some other explanation for the findings? The investigators could find no explanation aside from some possible side effect of tolbutamide, and in 1969 the study was terminated so as not to put any patients on tolbutamide at further risk. The investigators consulted with Upjohn and the FDA, and an FDA meeting to discuss the study was organized for May 1970. The results of the study were not expected to be made public until an American Diabetes Association meeting the following month.
However, the study results appeared first in the business section of the Washington Post, just prior to the FDA meeting. The study, it seemed, had implications for the financial health of Upjohn that were of concern to many in the commercial sector. The first that many physicians knew of the issue was when anxious patients faced them in the clinic that day with the news. Close to a million people were on tolbutamide in the United States—a lot of anxious patients. This was not the way anyone was used to things happening in medicine.
Patients in many cases found that their doctors seemed personally offended, as if their judgments in putting the patient on this treatment were being questioned. But the doctors were stymied. Their patients didn't know what was going on, but neither did the doctors.12 These doctors flooded the FDA with letters suggesting the bureaucrats had no idea how much distress this was causing patients. The FDA scrambled to respond, but they too were faced with a novel situation. It had never been part of the FDA's brief to tell doctors how to practice medicine. They left the NIH academics to fight it out with academics recruited by Upjohn or otherwise taking the company's side.
Patients also sought out the FDA—asking the agency whether it was still safe to take tolbutamide. Some had worked out from the figures given in the media that over a decade tolbutamide may have killed more Americans than had then been killed in the Vietnam War. How could such a drug still be on the market? The FDA directed patients back to their doctor, who remained, the agency said, the person best placed to decide the right course of action for them.
The FDA was at pains to insist it was not involved in the practice of medicine and did not wish to subvert clinical judgment. As Senator Kefauver put it in the debate surrounding the 1962 Kefauver-Harris amendments to the FDA bill, introduced after the thalidomide crisis and aimed at controlling the pharmaceutical industry's inappropriate marketing of drugs, “It should be made very clear to Senators and to the country; this is not a Federal control bill. This is a Federal information bill…. I am not talking about regulating medicine between the physician and the patient.”13
Many clinicians simply refused to believe the findings on tolbutamide. They were not used to having data trump their common sense. Perhaps there were more dead bodies in a trial of the drug, but these doctors had given tolbutamide to hundreds of patients, few if any of whom had dropped dead. Besides, how could trials like this take into account all the people whose lives must have been saved by having their blood sugars better controlled? It didn't make sense that a drug doing something so obviously right could be causing problems, no matter what a fancy controlled trial cooked up by some academics might have shown.
Upjohn moved quickly to mobilize a coalition of experts to cast doubt on the NIH research. These company-recruited academics pointed to a number of tolbutamide trials in which, they claimed, there had been no hint of a problem. While these trials were smaller, there were many trials on one side, with only one, albeit larger, trial pointing to a serious problem on the other. The insults began to fly between the academics, with one side accusing the other of being hysterical publicity seekers, attempting only to advance their careers, and the other pointing to the conflicts of interest stemming from participation in Upjohn-convened panels.14 Tolbutamide remained on the market without warnings until the 1980s.
In the meantime, the availability of tolbutamide had put a new premium on detecting elevations of blood sugar. Because of this, when any of us have the most basic of blood screens, the results will include blood sugar levels. Studies of those results concluded that, as of 2008, 30 percent of Western populations have either diabetes or prediabetes. In the wake of such findings, majority medical opinion supported treatment for such patients as early as possible with the hypoglycemic successors to tolbutamide.15
A further National Institutes of Health trial, however, reported in 2008 that tight control of blood sugars with tolbutamide's hypoglycemic successors led in fact to a higher death rate than was found in patients whose blood sugar control was allowed to vary more.16 By 2008 a series of blood-sugar-lowering drugs from Rezulin to Avandia either had to be withdrawn from the market or were required to carry warnings after evidence emerged that they too were linked to excess mortality. Nevertheless, the hypoglycemic market remains one of the blockbuster markets, worth over $25 billion per annum in 2010 and growing at 10 percent per annum. This has been a market in which the best teamwork and care in medicine have been harnessed to company sales, but if the data are to be believed, this teamwork is in fact now delivering at least some patients to an earlier death than they might otherwise have had.
The story of tolbutamide prefigures a series of crises that developed around Prozac in the 1990s and Vioxx in 2004. But in 1970, the issues were new for the regulators. Before the amendments to the US Food, Drug, and Cosmetic Act triggered by thalidomide in 1962, such conflicts would have been straight contests between doctors and industry. It was in fact the number of doctors who spoke out, at risk to their careers, that originally brought the problems with thalidomide to light. The public, however, attributed the discovery to the actions of the FDA and as a result for the first time saw in the FDA a third party to whom they might turn for help.