Taking the Medicine: A Short History of Medicine’s Beautiful Idea, and our Difficulty Swallowing It


by Druin Burch


  Outside the committee room he showed the data to a group of the cardiologists. What he offered them, however, were not the real results, which he knew they would correctly judge to be unreliable and inconclusive. Instead he gave them a report where the number of deaths had been reversed. Rather than showing a trivial advantage to patients who were sent home, now it showed that the advantage came with being kept in a CCU. Instead of recognising the flipped data as being equally inconclusive, the cardiologists leapt on them as proof of their argument. ‘They were vociferous in their abuse: “Archie,” they said, “we always thought you were unethical. You must stop the trial at once . . .”’ They wanted it halted because they were convinced that it was unfair to deny CCU care to any of the patients. ‘I let them have their say for some time and then apologised and gave them the true results, challenging them to say, as vehemently, that coronary care units should be stopped immediately. There was dead silence.’

  Later, whenever he lectured about that particular study, Cochrane found that he was always asked the same question. Despite it all, doctors wanted to know, wouldn’t he himself go into a coronary care unit in the event of having a heart attack? The eventual results of the trial reflected those at the interim meeting: a statistically non-significant advantage for those treated at home. The most reasonable interpretation was to say that the two approaches were similar, the next most likely that the CCUs harmed people but the trial had not been big enough to prove it. On average, the new CCUs offered no advantages whatsoever at a very significant cost.

  It gave Cochrane some pleasure to point out to his audiences that he had already made plans for his own heart attack, discussing it with his own doctor and asking to be treated at home. When he finally suffered one, during Christmas of 1981, that was exactly what happened.

  The point of all of these experiences is not to show that doctors were still making mistakes. It is to demonstrate that they were still making the same mistakes. The surgeon who cut out the gland from Cochrane’s armpit was more effective than his Egyptian predecessors of three to four thousand years before. He could provide significantly more effective care for the majority of his patients. His mentality, however, was unchanged. He was more useful than Imhotep, Hippocrates or Galen, but that was due to developments in technology provided for him by others. His method of thinking was no more advanced than his ancient predecessors. Cochrane’s experience of doctors, before and after the Second World War, was of men (and increasingly of women) so possessed with faith in their own powers of perception that their potential for making mistakes and overestimating themselves was exactly that of their forebears in Sumer and Thermopylae. What Cochrane called ‘the God complex’ was still in place. Medicine had advanced: medics, for the most part, had not.

  Later in Cochrane’s life, his love of his work flared up. Between the dark hours of ten in the evening and one in the morning, in his bachelor farmhouse, he sat at his desk and wrote a short book that changed the medical world. Effectiveness & Efficiency did what Austin Bradford Hill had not thought possible. The success of the streptomycin trial had shoved doctors’ noses into evidence-based medicine against their will. Hill had described how impossible he thought it was to persuade them to wake up to the shortcomings of their intuitions and clinical judgements. His solution had been to infiltrate the Medical Research Council, to push medics into something against their wishes and fundamentally without their full knowledge and free consent. By contrast, Effectiveness & Efficiency, in under a hundred pages of passionate argument, demanded that those who read it hold themselves up to higher standards, that mental honesty and personal integrity be allowed to prompt them into growing up and abandoning their childish beliefs in their powers of figuring out the world without method or numbers. A love of truth and a desire to help, said Cochrane’s book, were required for those who wanted to understand the world and improve it. Just as he had shamed the German prison guards into properly looking after their inmates, demanding they live up to their country’s better examples, now Cochrane did something similar for doctors in general.

  In September 1983, his health fading, frightened that he might lose his mind to dementia, Cochrane wrote his own obituary. He mentioned porphyria, the genetic disease that affected many in his family and, certainly, in later life, himself. He also noted the medical profession’s refusal to award him the financial bonus that the NHS provided to eminent or admired doctors. As he was independently wealthy, it was not the money that mattered to him. It was probably not even the recognition, only that its lack confirmed his colleagues were failing to recognise the importance of constructive scepticism, and the value of statistically thoughtful trials. ‘He lived and died’, said Cochrane of himself, ‘a severe porphyric, who smoked too much, without the consolation of a wife, a religious belief or a merit award, but he didn’t do so badly.’ It seemed a fair judgement on an exceptionally well-lived and useful life.

  18 Thalidomide’s Ongoing Catastrophe

  FOR AS LONG as there have been governments and drugs there have been attempts at regulation. The desire to make sure they were enforceable limited their range. Most medical preparations were so muddled and useless that regulation of them was impossible. Beer was an ancient exception. Doctors prescribed it, patients felt its effects, brewers weighed and measured it, bureaucrats monitored its prices and its constituents. The Code of Hammurabi, named after the sixth king of Babylon, set out restrictions on beer’s production and sale. Four thousand years ago, brewers who cheated their customers were thrown into the river.

  Other rules came into being from the medical profession’s efforts to enforce their chosen wisdoms, and protect themselves from competition. Leading doctors decreed the rightness of their methods, and the wrongness of those who argued with them. Drug purity began as an article of faith, but, like other faiths, progressively attracted an accretion of rules and approved ceremonies. The British regulatory system began in 1518, with the foundation of the Royal College of Physicians of London. It was part of the profession’s attempt to retain power, prestige and central control by taking advantage of sixteenth-century bureaucracy. Setting itself up to regulate medications, as well as those using them, the college obtained the right to decide what was fit for purpose and who was allowed to practise.

  Centuries later, the medical profession’s modern drive for regulation was still a mix of altruism and self-interest. In 1909 and again in 1912, the British Medical Association published reports on ‘Secret Remedies’, prodding Parliament into setting up a committee to investigate quack medicines. The prospect of people dosing themselves with these remedies was horrifying to the doctors, even though their medically supported treatments were likely to be as bad as those available without a prescription. In addition, legislation in 1917 led to some restrictions on the way a drug could be promoted. For the first time, a company was not allowed to claim health benefits that were not generally accepted as being real – although only for products relating to sexual diseases and for cancer. Other conditions took longer.

  The restrictions, however, were pretty minor, since the proof of effectiveness required was no more than a proof of opinion. The Therapeutic Substances Act in 1925 tried for something limited but objective: to guarantee that descriptions of a medicine’s ingredients were accurate. By the end of the 1950s, despite this handful of measures, British systems for drug regulation were barely more advanced than those of four centuries before. The pharmaceutical industry was changed out of all resemblance to the apothecary shops of days gone by, but the legal controls were largely unaltered.

  Historically, the regulation of drugs failed to provide any benefits for patients, although it protected the status and the earnings of doctors. Both groups were under the illusion that drugs were generally helpful. What regulation there was made it more likely that people were getting the goods they were paying for. That was all you could expect of it.

  Accurately controlling the chemical compositions of treatments was impossible until about two hundred years ago. Only in exceptional circumstances were regulators willing to stand over people as they made something, then certify it as sound. In other situations, the lack of chemical techniques for analysing a product made it impossible to work out what was actually in it. So long as that scarcely mattered – one remedy being most likely as useless or poisonous as another – this was not an issue. But with effective drugs came effective rules. Paul Ehrlich’s introduction of Salvarsan in 1910 resulted in the British Board of Trade checking on the composition of products claiming to contain it. Advances in chemistry meant that constituents were measurable, and here was something whose concentration genuinely mattered.

  Consumers could not tell the purity of a drug by tasting it, nor could they accurately understand the effects of a tablet by swallowing one and seeing what happened. Without an equable distribution of knowledge, the manufacture and sale of drugs could clearly not be left to an entirely free market. The number of milligrams of an active ingredient in a pill, and the likely effects of that pill, were similar in this respect to the number of calories in a mouthful of food, or the origins of a pack of coffee. Medicines had become what economists call ‘credence goods’, items that an average person could not fully assess for themselves. The public had to trust those who were making and supplying them, and regulation was needed to stop that trust being abused.

  Towards the end of the nineteenth century the US Department of Agriculture (USDA) had become increasingly concerned about the way products were adulterated before sale. The emerging power of chemistry gave producers an increasing ability to alter food. A series of reports from the USDA helped drive public worries about the issue. What were people adding to food in order to change its colour, weight, smell or appearance? Were these additives safe? Harvey Washington Wiley, chief chemist at the USDA from 1883, campaigned successfully to publicise these problems. His efforts were rewarded in 1906, when Theodore Roosevelt signed into law the Food and Drugs Act. Often referred to as the Wiley Act, in the chemist’s honour, it gave the USDA the power and responsibility to examine food and drugs. The officials were empowered to look for evidence of cheating in the way those goods were made and sold.

  Curiously, the Act did not restrict the health claims people could make for their products – only their ability to lie about ingredients. Those who wrote the Act drafted it with the intention that it should do more, but they were stopped in 1911 when the Supreme Court heard a case regarding a quack medicine – ‘Dr John’s Mild Combination Treatment for Cancer’ – and ruled that the Act did not prevent any medical claims a manufacturer wanted to make. An amendment the following year was meant to fix the problem, adding ‘false and fraudulent [claims of] curative or therapeutic effect’ to the list of illegal ways of presenting a product. The courts, however, demanded that proof rest on a demonstration that someone was being intentionally false about his or her beliefs. Sincerity, in other words, was enough to justify a health claim. Accuracy of judgement was irrelevant. And insincerity was exceptionally difficult to prove in a court of law.

  From 1930, the USDA’s Bureau of Chemistry, now headed by Wiley’s successor, became the Food and Drug Administration (FDA). In that name it was asked to respond to the after-effects of the efforts by the Massengill Company, of Bristol, Tennessee, in 1937, to produce palatable sulphonamides. Rather than a powder, Massengill wanted to sell the drug as a liquid. Their chief chemist, Harold Watkins, dissolved it in diethylene glycol, a substance very closely related to antifreeze and with many of the same properties, then added fruit syrup to make it taste pleasant. The company sold the drug without testing it on animals and without paying attention to existing knowledge about diethylene glycol’s toxicity. It was on sale for a little over two months and it killed 107 people.

  When the FDA arrived at Massengill’s headquarters, it found the company had already realised its mistake. It had voluntarily withdrawn the drug. The notices it sent out, though, only asked for the drug to be returned – they made no mention of how dangerous it was. Only at the FDA’s insistence was a second round of warnings released, this time mentioning that the drug could kill.

  While the FDA co-ordinated efforts to recover the drug, Massengill’s owner denied any blame. ‘We have been supplying a legitimate professional demand, and not once could have foreseen the unlooked-for results.’ In its way, it was a brilliant phrase. It was, indeed, impossible to foresee results that you made no efforts to look for. Massengill’s chemist, Harold Watkins, was more willing to take responsibility. After reading the reports of what his drug had done, he killed himself.

  Once the bodies were all buried, it turned out that Massengill’s operations were almost entirely within the rules. Failing to check on diethylene glycol and failing to test their product was entirely legal. Putting a lethal ingredient into a medication broke no laws; incompetence was not a crime. Massengill bore no legal responsibility for the deaths that resulted from its actions. The company, like many others after, was happy to regard this as being the final conclusion to the matter. Responsibility was taken to be a matter of law, not morality.

  There was one small rule that it had broken. The drug was marketed as an elixir of sulphonamide. According to law, an elixir was a liquid containing alcohol, and Massengill used diethylene glycol instead. So it was guilty of mislabelling. For this, under the 1906 Food and Drugs Act, it was fined.

  Public outrage led to the passage of the 1938 Food, Drug and Cosmetic Act. It required that drug companies demonstrate proof of safety – not necessarily before using the drug on people, but at least before openly marketing it. To a small degree, it also made it legally more difficult for companies to claim unjustified health benefits. The Act was limited, however, by contemporary knowledge of what constituted evidence. Proof of safety, like proof of effectiveness, was not something that could be reliably established without randomised controlled trials – and these did not yet exist.

  Twenty years later, from 1959 onwards, Senator Estes Kefauver of Tennessee began pushing for greater regulation of medicines in America. What bothered him most were the prices that drug companies were charging. The shoddy standard of proof required for safety and effectiveness was also an issue, albeit a less important and less populist one. On both fronts, however, Kefauver felt that consumers needed more protection. His efforts to provide it initially went nowhere.

  At the same time, in a converted seventeenth-century copper foundry in the West German village of Stolberg, a pharmaceutical company named Chemie Grünenthal was attempting to find new drugs. Supposedly they were after antibiotics, but they invested in none of Domagk’s elaborate testing systems. The company found a compound of little obvious attraction. It was not original; a Swiss firm had already tried it out and thought it worthless. The Swiss had given it to animals and seen no effect. The Germans did the same, and saw opportunity galore.

  Chemie Grünenthal found that the compound did nothing whatsoever to rats, mice, guinea pigs, rabbits, cats or dogs. All the animals survived. The stuff seemed almost ludicrously non-toxic. To a devoted therapeutic nihilist, this might seem a promising discovery. The placebo effect has always been powerful. Medicine in the 1950s was brimming with compounds whose harms were complacently underestimated, and whose benefits were magnified beyond their deserts. A genuinely harmless placebo, coated with the allure of modern chemistry, was potentially as good for people as it was for profits. The success of sulphonamides and penicillin, however, meant that therapeutic nihilism was out of fashion.

  Two historians of Chemie Grünenthal have pointed out that the company was operating from a peculiar ethical perspective. Along with the excitement created by the discovery of antibiotics, there was also the recent experience of the Second World War. The head of research and development at Grünenthal, Heinrich Mückter, had spent the war as Medical Officer to the Superior Command of German forces occupying Krakow in Poland. His additional title was Director for the Institute of Spotted Fever and Virus Research. ‘The German Army’, observed the historians, ‘was not renowned for missionary medical work in Poland.’ The institute’s title sounded to them like a euphemism for a unit conducting human experiments and researching ways of killing.

  The actions of Grünenthal are difficult to make sense of. One of their chemists thought that their compound was structurally similar to barbiturates, the fabulously successful but dangerous sleeping tablets. It was true that a safer sedative could promise riches as well as human benefits, but then the drug did nothing to sedate any of the animals it had been given to. Grünenthal had a drug whose primary virtue was its inactivity. Whether from some strange faith in the emerging powers of chemistry, a medieval belief in the healing abilities of any compound, or a sweeping willingness to delude people in order to take their money, Grünenthal began pushing the drug to doctors. What happened next was a clear demonstration of the weaknesses of contemporary controls on establishing the safety and effectiveness of medicines.

  Before a drug was eligible for sale in West Germany, its actions needed to be shown sufficiently clearly to convince people about what it did. Normally that meant starting off with the effects of it in animal experiments, but since the drug had not shown any, that was impossible. So instead Grünenthal gave it to doctors to try out on their patients. The anecdotal reports that came back were regarded as being sufficient to establish the drug’s powers. Grünenthal suggested to doctors that the drug might be useful in controlling epilepsy (a suggestion that seems to have been based, at best, on fanciful optimism). What the doctors told them in return was that the drug helped patients to sleep.
