Pharmageddon
There can be few better symbols of Pharmageddon than prescription-only drugs becoming among the most consumed drugs in pregnancy in the face of strengthening warnings that they cause birth defects. The answers to how this could happen lie in great part in how the pharmaceutical companies have managed to capitalize on the very protections put in place by Senator Kefauver in his 1962 bill and in the reforms that defeated him. Prescription-only status has made doctors the targets of a marketing exercise that is far more sophisticated than placing even billions of pages of advertisements in medical journals and bribing doctors to use drugs. As outlined in chapter 1, the patent status of drugs has given companies an incentive to chase blockbuster profits—doing so regardless of patient welfare. Controlled trials have given the companies a means to persuade doctors that snake oil works so well that withholding it in pregnancy would be unethical, and also a means to make problems consequent on treatment vanish. But all of these hinge on the fact that these drugs are available by prescription only.
WHAT THE DOCTOR ORDERED
When Alfred Worcester or Richard Cabot wrote a prescription for a remedy at the dawn of the twentieth century, they were following a centuries-old tradition of asking an apothecary to take certain ingredients and mix them according to a formula (Rx = Recipe). If there was more than one ingredient in the mix, each should have a particular purpose. If the remedy worked, patients were able to take the prescription back to the pharmacy on numerous occasions asking for refills for themselves without further endorsement from the doctor. Alternatively, having once obtained something by prescription that worked, they could revisit the pharmacist and ask for the same medicines again, for family members. A prescription from a doctor was only one means by which people could access the drugs they believed they needed.
Because in Cabot’s day all medicines, including opiates, bromides, barbiturates, chloral hydrate (used for sedation), antiseptics, and remedies for the gut, urinary system, heart, and respiratory system, were available without recourse to a doctor, the threshold for visiting a doctor was far higher than it is today. Until the middle years of the twentieth century, there was no one being treated for latent diabetes, latent hypertension, or raised lipids. Aside from a few wealthy people engaged in psychoanalysis, no one had contact with the mental health system other than those relatively few who had psychoses and were committed to asylums.
When the US Congress passed the 1906 Food and Drugs Act, it contained no prescription requirement, only a requirement that medicine manufacturers state the contents of the product on the label. The pharmaceutical industry lobbied hard against the act, but once it was in place many enterprising manufacturers found ways of working the new situation to their advantage, for instance, by labeling their product “as approved by the Chemical Bureau.”2
There were no implications here for traditional medical practice. But another regulatory step taken soon thereafter had profound implications. The nineteenth century saw a growing concern about opiate and cocaine abuse, as well as alcoholism. These problems had been of little concern to medicine. Drug addiction, like alcoholism, was considered a social problem, except where the affected people became patients by virtue of cirrhosis or psychosis. After a variety of social approaches to treating the problems of addiction foundered, in 1914 the US Congress passed the Harrison Narcotics Act, which introduced prescription-only status for opiates and cocaine.3 The problem of addiction would be managed, or so it was thought, by making these drugs legally available only through a medical practitioner.
After the contaminated sulfanilamide tragedy of 1937, the 1938 Food, Drug, and Cosmetic Act encouraged a move toward making new drugs available by prescription only. The calculation was that the sulfa drugs were better categorized with insulin and the steroid and thyroid hormones, which were typically if not exclusively administered by doctors. After World War II, in 1951, the prescription-only status for new medicines in the United States was copper-fastened in place with the Durham-Humphrey amendments to the 1938 act, despite vigorous, sustained, and widespread opposition to the move. Critics complained that a system put in place for addicts was inappropriate for free citizens. But by the early 1950s, one of the side effects of having medicines that really worked was becoming clear—drugs that could really benefit could really harm also. In 1952, Leo Meyler’s Side Effects of Drugs appeared, the first-ever medical compendium of drug-induced injuries.4 This new potential for harm took dramatic shape in 1961 with limbless babies born to mothers who had taken thalidomide during pregnancy as a supposedly safer hypnotic than the older barbiturates.5
When it came to his hearings in 1959, Senator Kefauver was exercised by the prescription-only status of the new drugs, a unique characteristic found in no other market. As he put it, “He who orders does not buy; and he who buys does not order.” As a consequence, when it came to drugs available by prescription only, ordinary consumers could not protect themselves against the monopoly element inherent in trademarks or patents. Patients were critically dependent on their doctors to be uninfluenced by trademarks, patents, or marketing ploys. Doctors had a choice whether to give their patients the latest on-patent and branded drug or perhaps an older, more effective, and less expensive drug, but patients had little choice other than to do as prescribed by their doctor.6
Thalidomide had been available over the counter in many European countries, but exactly the same problems arose in the United States, where the premarketing samples were available by prescription only. Indeed, the problems may have come to light as quickly as they did because doctors in Germany were not inhibited in recognizing the potential for harm of an over-the-counter drug, as they might have been in the case of a drug essential to their livelihoods. But in the United States in 1962, in the face of the thalidomide disaster, retaining the prescription-only status of drugs seemed to make sense: doctors retained some patina of skepticism about drug claims due to medicine’s long-standing opposition to quackery, and doctors appeared to be the people who would be able to quarry information from drug companies about possible adverse side effects of their products.
Before 1962 prescription-only status was still something of a novelty—after 1962 it became the center of the distribution system for new drugs when companies were required not only to make their drugs available only through doctors but also to prove that their drugs worked for some medical condition in order to get FDA approval. This combination of controls must have looked pretty foolproof in 1962, but it has not turned out to be an effective way to constrain the pharmaceutical industry within a medical framework. Quite the reverse. When a pharmaceutical company gets a drug on the market for lowering cholesterol, for osteoporosis, or for erectile dysfunction, this now marks the point at which the company begins to sell the condition, the point at which they can gear up to reengineer the medical marketplace to suit their product, as Abbott did with bipolar disorder to make it Depakote-friendly. It seems extraordinary now that no one in 1962 seems to have realized that if pharmaceutical companies were restricted to marketing drugs for diseases, they might start to market diseases.
Had pharmaceutical companies not been required to demonstrate a drug’s efficacy in treating a particular disorder, we might all have ended up with a lot fewer diseases recorded in our medical records. The first antidepressants would have been marketed as tonics or stimulants. To get St. John’s wort, an herb with SSRI properties, we just have to feel stressed and buy it over the counter where it is sold as a tonic, but to get Prozac now, we have to be officially diagnosed as depressed. In a similar fashion, the statins might have been marketed on the promise of restoring inner youthfulness, or getting our arteries in shape, rather than for a supposed cholesterol disorder, or the bisphosphonates might have been aimed at restoring youthful bones rather than at osteoporosis. As insurance companies reimburse in response to diagnoses, fewer diagnoses would likely have reduced our need for doctors in addition to reducing the number of diseases.
The third medical requirement of the 1962 amendments was that companies demonstrate their products worked in well-controlled clinical trials. This was smuggled into the final bill through the efforts of Louis Lasagna, a professor of pharmacology and a believer in controlled trials, who was attempting at the time to encourage some use of controlled trials, rather than trying to make them mandatory.7 Lasagna himself had undertaken the only controlled trial of thalidomide ever done, through which it sailed—an effective hypnotic free of significant side effects.
The copper-fastening of prescription-only arrangements that came out of the Kefauver hearings would alone have put doctors in the sights of pharmaceutical company marketing departments in a way they had never been before. But constraining companies to market their drugs for diseases and to demonstrate their efficacy through what was then a new medical invention, the controlled trial, made it necessary for companies not just to have doctors in their sights but to understand doctors better than doctors understood themselves.
In the case of some hugely profitable trademarked drugs, such as Marlboro cigarettes, medicine has played an honorable part in bringing lethal problems to light. But what would have happened had tobacco been available by prescription only? It is clearly helpful for ulcerative colitis. In all probability it could be shown to be just as good an antidepressant as Prozac and the SSRIs—so the market might have been substantial. How quick then would doctors have been to do the independent studies that pinpointed the problems linked to smoking or to insist on the seriousness of the risks while the tobacco industry was systematically creating doubt about those risks?
Doctors do not view themselves as consumers, yet they are subject not only to the extraordinary pressures that modern marketing can bring to bear on any consumer but also, by virtue of prescription-only arrangements, to these forces in the most concentrated form that exists anywhere on the globe. Typically, they blithely go their way without seeing the need to understand marketing. They bunker down behind a Maginot Line of what they believe are untainted controlled trials and evidence-based medicine, unaware that the tank divisions and air force of their opponents give daily thanks for that Maginot Line.
THE RISE OF THE BLOCKBUSTER
The possibilities for a new generation of branded medicines—and extraordinary sales—that opened up on the back of a regime that allowed drugs to be patented and that made these drugs available on a prescription-only basis were first revealed in the course of a battle in the 1980s between the pharmaceutical giants SmithKline and Glaxo over the ulcer drugs Tagamet (cimetidine) and Zantac (ranitidine).
James Black was one of the most successful medicinal chemists ever; he was also one of the first to win a Nobel Prize while working in the pharmaceutical industry. Black had initially worked for Imperial Chemical Industries, where he had developed the concept of a beta-blocker. These drugs, which blocked the beta-adrenergic receptors on which stress hormones like epinephrine exert their effects throughout the body, turned out to be particularly useful for treating hypertension, the most rapidly growing medical market in the 1970s.
Black then moved to SmithKline, where he turned his attention to the antihistamines, helping to distinguish between two different histamine receptors, H-1 and H-2. This opened the way to develop H-2 blockers that would target histamine receptors in the gut, reducing gastric acid production, then thought to be responsible for ulcers. Tagamet was the result, a drug that embodied a genuinely novel approach to the treatment of duodenal ulcers, then one of the biggest problems in internal medicine.8 Within a few years of its introduction, surgery for ulcers had become a rarity—had Tagamet been available earlier, it would have saved my mother much misery. This epitomized the best hopes of both science and industry—new and innovative products making it into healthcare and making a big difference to patients.
In the course of developing Tagamet, Black presented details of his experiments at scientific meetings, stimulating interest among chemists at Glaxo, who also determined to develop an H-2 blocker. Glaxo’s efforts led to Zantac, a drug almost identical to Tagamet. Since Tagamet had been the breakthrough compound and had come on the market in 1977, six years before Zantac, and with the prestige of Black’s endorsement, few doubted that Tagamet’s sales would vastly outstrip those of Zantac.
Glaxo, far from undercutting the price of Tagamet, as might have been expected in a normal market, decided to make Zantac pricier. And it put huge resources into marketing, which focused on minor differences in the side-effect profiles of the two drugs. Much to the surprise of observers, Zantac’s revenues soon outstripped Tagamet’s, and it became the first blockbuster—a drug that makes at least a billion dollars per year.9
Glaxo and SmithKline merged at the turn of the millennium to become the biggest pharmaceutical company in the world. But before they did, Glaxo’s response to an exciting development in the science of ulcers was indicative of important shifts that were taking place in the world of medicine and corporate interest. In Australia, Barry Marshall, then a medical resident in Perth, spotted an unusual bacterium, Helicobacter pylori, in tissues removed from ulcers. This led him to a series of experiments in which he cultured Helicobacter, drank it, produced an ulcer, and later cured his own ulcer with antibiotics.10
Marshall made overtures to Glaxo but found they had no interest in a cure for ulcers. The beauty of the H-2 blockers was that once patients began taking them, many remained on them indefinitely. Actually eliminating ulcers, the treatment of which had just become the cash cow of the pharmaceutical industry, was not what Glaxo had in mind. The decade between the contrasting scientific experiments of James Black and Barry Marshall had propelled medicine into a new world, one in which it could not be assumed that science and business were on the same side, as they had appeared to be over the previous three decades.
Zantac was a brand like no other. It came with attention to color coding, with free pens and trinkets for doctors, and a lot of support for doctors to attend educational meetings nationally and internationally. It set a template for aggressive drug promotion. Its very success led, in reaction, to movements like No Free Lunch, a group set up by Bob Goodman to persuade doctors to remain independent of pharmaceutical companies by refusing the free pens, lunches, and the like that companies handed out so liberally. Glaxo’s aggressive marketing at the end of the 1980s also made many doctors more receptive to the idea that evidence-based medicine, which emerged in the 1990s, could be used as a way to contain the power of marketing.
But No Free Lunch and similar efforts to eliminate conflicts of interest fail to ask just what it is that would make a brand appealing to doctors. A brand is something whose value lies in the perception of the beholder—and in this case doctors repeatedly tell us that the evidence about a drug’s benefits and risks trumps the color coding of the capsule or the lunches, no matter how good they might be. And insofar as creating a brand involves building a set of exclusively positive associations and eliminating any negative associations, this is not going to be done by getting the color right.
The problem is that a brand is meant to be an uncomplicated good. It is a partial truth that seduces by directing our attention away from any messier realities. It doesn’t fart; it doesn’t have body odor. Against a background of clinical complexity it offers a point of reassurance. But it is, by this definition, incompatible with a medicine, which is—or was—understood to be a poison whose delivery involves a judicious balancing of risks and benefits.
The combination of brands like this and prescription-only privileges leads to a tragedy in the classic sense of that word—as with Hamlet, “whose virtues else, be they as pure as grace, as infinite as man may undergo, shall in the general censure take corruption from the particular fault.” Here’s how. Brands married to product patents have created the conditions that have made blockbusters possible, and the fortunes of pharmaceutical companies increasingly now depend on the success of these blockbusters and their branding. They have to be hyped to the max and their hazards concealed. These dynamics of brand creation are, through prescription-only status, welded to a profound bias in medicine—doctors tend to attribute any benefits in a patient’s state to what they have done and couple this with a tendency to overlook any harm they might have done. Doctors have to be enthusiastic about treatment—their very enthusiasm can make the difference between success and failure. Being readily able also to spot the harms they do would likely in many cases lead to clinical paralysis.
The fortunes of pharmaceutical companies hinge on this weld holding fast. The tragedy is that there is little risk of it coming undone: both companies and clinicians are biased to attribute any harms to the disease being treated—it is depression that gives rise to suicidality in patients on antidepressants, not the drugs; it is the poor state of a person’s arteries that leads to coronary artery bypass surgery and is responsible for any confusion after the surgery rather than anything that happened on the operating table; it is schizophrenia that gives rise to a disfiguring neurological condition, tardive dyskinesia, rather than treatment with antipsychotics. For thirty years the outcomes for lung cancer have remained almost unchanged. Millions of people have died during this period, after having radical surgery, intense radiotherapy, or intense chemotherapy. If these treatments extended the lives of some yet overall life expectancy remained the same, there must also be an equal number whose lives were shortened by treatment, but you will hunt high and low to find any whose deaths are attributed to the treatment rather than the disease.
When it came to the harms following ingestion of over-the-counter or illegal drugs, the medical profession from the end of the nineteenth century had no difficulty seeing the problems these drugs caused and expressing its opinions through bodies such as the AMA. But once drugs are made available by prescription only, through the clinician, there is no independent voice of any standing to urge caution. Against this clinical background, the dynamics of branding produce something close to a pure toxin for medical care.