Taking the Medicine: A Short History of Medicine’s Beautiful Idea, and our Difficulty Swallowing It
As a doctor, Galton never got started. In 1844 his father died. ‘Being much upset and craving for a healthier life, I abandoned all thought of becoming a physician.’ The world was poorer for it. Partly through his work, however, statistics penetrated a little further into medical thought.
13 Antibiotics and Nazi Nobels
PROOF OF THE possible is exciting, both for inventors and their commercial backers. Now that drugs existed which killed off syphilis and trypanosomes, there was the palpable prospect of making others that destroyed germs of equal or greater menace to human life.
Cassella Dye Works were absorbed with Bayer and other companies into the I. G. Farben conglomerate in 1925. The ‘syndicate of dyestuff corporations’ – Interessen-Gemeinschaft Farbenindustrie – was a creation masterminded by Carl Duisberg. Impressed by the way American oil companies formed successful cartels, he led Germany’s emerging pharmaceutical corporations into doing the same. It helped cut competition, and keep profits high, but it did not put an end to attempts to benefit the consumer. In 1929 I. G. Farben opened an expensive and vastly well-equipped research laboratory. The man in charge was the physician Gerhard Domagk, a student of Ehrlich’s.
At the age of nineteen, while still a medical student, Domagk had served in Germany’s army on the Western Front. By Christmas of 1914 he was wounded, and spent the rest of the war helping with problems of hygiene. Cholera, typhus and dysentery, as well as medical helplessness in treating them, made deep impressions on him, along with the way in which even aseptic surgery was not enough to prevent horrific infections and gangrene.
Mankind had been worse off before, without even a knowledge of germ theory and the value of cleanliness. Yet an awareness of medical futility, along with great advances in basic sciences, prompted doctors to wonder about how badly they were doing and how much they might improve. It was a healthy impulse, not least for the scepticism and acceptance of ignorance that came with it. With science and technology rapidly advancing in many fields, complacency about medical knowledge looked increasingly old-fashioned. Of the ten million soldiers who died in the First World War, roughly half lost their lives due to infections. Even a minor wound, a scratch, was often enough. That seemed a problem that doctors should be able to do something about.
The commercial benefit of any potential bacteria-killing drug – what today we would call an antibiotic – was as clear as the medical need. I. G. Farben, knowing this, backed its workers well: ‘The management of the . . . dye factories always found ways and means of supporting us – who were engaged in scientific research – indeed, they assisted us far more than did the state’. Domagk thought that there was something remarkable about this, that neither ‘sickness funds’ nor insurance companies, despite their capital and their own financial interests in keeping members healthy, seemed to feel the responsibilities or see the opportunities that drug companies did. A lot of the credit for Farben’s enlightened thinking, felt Domagk, was owed to Duisberg.
Since the introduction of Salvarsan in 1910 – Ehrlich’s compound number 606 – chemists and doctors had been searching more seriously for compounds able to kill the common bacteria causing human disease. Domagk’s innovation was to set up a screening system, one that was both thoughtfully methodical and on an unprecedented scale. A visiting Englishman told of ‘enormous laboratories in which they did nothing but take compound after compound and test its ability to deal with infections in animals’. Using these labs, Domagk pursued Ehrlich’s inspiration about the selectivity of dyes, and the selective toxicity that might go with them.
By 1890 doctors were well aware that immunity was an important concept in human health. Smallpox, mumps and measles were diseases that you got only once. After that you were either dead or permanently immune – this was what had allowed Edward Jenner to successfully popularise vaccination against smallpox from 1796. The new serum therapy relied on the experimental observation that some degree of immunity could be transferred along with serum, the fluid part of the blood. Emil Behring showed in 1891 that serum taken from an animal already immune to diphtheria could help treat another in the midst of suffering from it. The first human use of serum therapy came that same year, on Christmas Day, on a child in Berlin.
For bacterial infections, attempts at serum therapy were based on injecting animals – usually horses – with the bacterium you wished to attack. Serum from the horse was then injected into a person suffering from that bacterium. Unlike the unmistakable impact of Salvarsan, the effects of these treatments were not always clear. The reaction of the initial animals to bacteria differed, as did the responses of different people to serum from those animals. Added to this, not all bacterial infections were fatal. Many people recovered, regardless of whether they were given serum therapy. And some who were given the serum died as a result, their own bodies reacting violently to it. Many others suffered milder side effects – ‘serum sickness’ was a constellation of fevers, rashes, joint pains and other problems, sometimes worse than the disease itself. Success and failure, in other words, were difficult for doctors to tell apart.
Our linguistic habit in medicine is to talk about risks versus benefits. It is a hangover from thousands of years of complacency. If you took someone with pneumonia, and gave him or her serum from a prepared horse, it was clear that there were risks. The patient might get serum sickness, and might die. The benefits were equally uncertain. That is, the balance was not between risks and benefits but between harms and benefits. Neither was certain, and any real treatment had a chance – a risk – of doing good just as it had a risk of doing harm. Speaking of ‘risks and benefits’ too easily makes it seem as though the good things are guaranteed and only the bad ones are difficult to predict.
Methodical efforts to balance harms against benefits profited greatly from the development of serum therapy, clearly dangerous yet clearly useful. There were increasing efforts to design experiments that investigated this uncertainty reliably. These were thought through in a way that the world had never seen before:
The good results of insulin on patients with diabetes or of liver treatment in pernicious anaemia are so constant that the trial of these remedies in a very few cases was enough to establish their value. With the antiserum treatment of lobar pneumonia the conditions are very different. The action of the serum is only that of a partial factor for good, and its influence may be overwhelmed by an infection that has been allowed several days to establish its dominance in the patient, or by other complicating factors that weaken the patient’s resistance. In order to measure precisely what this partial benefit may be it would be necessary to take two groups of cases of identical severity and initial history and compare the sickness and the fatality in each, the one being treated with serum and the other serving as a control. But this is impracticable . . .
The authors, members of the Medical Research Council Therapeutic Trials Committee, were writing in 1934 about their trial of serum therapy. In the British Medical Journal they argued that the creation of two such deliberately well-matched groups was impossible. They felt the number of people whose physical conditions were identical was simply too small. Reading their report, it is clear that the real reason was also apparent to them, even if they did not say it explicitly. No matter how much you tried to find cases that were identical, you never could. Even if you took identical twins and infected them at the same moment and with the same bug – an unthinkable experiment – you could not actually guarantee that your two subjects were the same. One twin might be historically weaker than the other, or currently more tired. Even those who were genetically identical still possessed some differences, the result of their environments. It was not that there were too few identical patients, it was that there were actually none whatsoever. It was impossible ever to expect that one patient’s situation should be exactly that of another.
To get around this, from 1933 a number of British hospitals tried assigning consecutive patients to different approaches. If the first pneumonia patient on the ward got serum therapy, the next would not. In the course of things, they hoped, everything would balance out. The system of alternate allocation did away with any need for doctors to try to ‘match’ people. It meant that doctors did not need to try to assess every factor they knew of that affected someone’s health. Crucially, it meant that it did not even matter if there were important influences that the doctors were completely unaware of. Stick one person into one treatment group, the next into another, and whatever differences there were between them would be ironed out, whether you understood those differences or not. So long, that is, as you took enough people.
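That last condition, taking enough people, is the whole trick, and it can be illustrated with a small simulation. The sketch below is not drawn from the MRC trial; it simply invents a hidden ‘frailty’ that the doctors know nothing about, allocates imaginary patients alternately to serum or control, and checks how evenly that hidden factor ends up spread between the two groups as the numbers grow.

```python
# A minimal sketch of why alternate allocation balances unknown factors.
# Nothing here comes from the 1934 trial; the "frailty" score and the
# patient numbers are invented purely for illustration.
import random

def simulate(n_patients, seed=0):
    random.seed(seed)
    serum, control = [], []
    for i in range(n_patients):
        frailty = random.random()      # hidden factor, unknown to the doctors
        if i % 2 == 0:                 # 1st, 3rd, 5th ... patient gets serum
            serum.append(frailty)
        else:                          # 2nd, 4th, 6th ... patient is a control
            control.append(frailty)
    mean = lambda xs: sum(xs) / len(xs)
    return mean(serum), mean(control)

for n in (10, 100, 10_000):
    s, c = simulate(n)
    print(f"{n:>6} patients: mean hidden frailty  serum={s:.3f}  control={c:.3f}")

# With a handful of patients the two groups can differ noticeably by chance;
# with enough of them the hidden factor is ironed out across both arms,
# whether or not anyone ever knew it existed.
```

The same reasoning holds for any unmeasured influence, which is exactly why the method did not require the doctors to understand what those influences were.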
This allowance for ignorance and inability represented a great achievement. Doctors had always been human, always capable of making mistakes or of not knowing everything there was to know and not seeing everything there was to see. Here, rather than presuming that they could behave perfectly, they built a system that did not require them to. It did not come easily, and some doctors were horrified at the attempt. John Cowan, a Scottish doctor, wrote to the MRC to protest at the trial’s methods:
. . . serum seems to me to be proved to be beneficial . . . It should be available in consequence in ALL hospitals . . . The days of controls are no longer possible: it is not fair to them.
The intuition of doctors, he was arguing, was too reliable to need any external support. Physicians were perfectly capable of telling whether a treatment was working. Withholding a new drug from a group of people, in order to compare what happened to them with what happened to those who were given it, was cruel and unfair.
Enough of the trial doctors felt similarly to compromise its results. Alternate allocation was not robust enough, not in the face of doctors’ suspicions that they could tell who would benefit from serum and who would not. Some of those suspicions may not even have been conscious, but that did not matter. The doctors did not manage to stick with the scheme. Alternate allocation meant that doctors knew which treatment a patient would get if entered into the trial. They were able to withhold their sickest and healthiest patients, trying to match them up with the treatment they thought likely to suit them best. They agreed with the principles of doing a trial, but could not get over their feeling that, for some patients, they already knew which treatment was likely to be best.
In the end, different things happened at different hospitals. ‘The variation in results at the different centres cannot be explained,’ said the report, diplomatically alluding to the fact that the doctors were cheating. Over 1933 and 1934, doctors in Aberdeen, Edinburgh and London managed to study 530 patients with pneumonia. Of these, 241 were given serum therapy. Even though the study represented the combined experience of three hospitals and a whole group of doctors, the authors who wrote about it were still worried that these numbers were too few to tame the play of chance. It was a heartening conclusion, and the complete opposite of John Cowan’s belief that his own personal experience, based on watching a vastly smaller number of cases, enabled him to divine exactly the risks of a new therapy.
Serum therapy was cautiously adopted by the British. The trial, published in 1934, suggested the treatment was beneficial for certain patients. Neither the trial’s methodology nor its results were marvellous, but the search for antibiotics was proving fruitless, and people were losing interest and hope in it. Many in the medical profession felt that Ehrlich’s magic bullets were simply not possible. Using a horse as a living factory to make serum gave the occasional good result, and the odd bad one. Lots of doctors decided it was the best there was.
Streptococcus pyogenes was a particularly deadly bacterium at the time. It accounted for many deaths from wounds during the First World War, and in the influenza pandemic that followed. At I. G. Farben, Domagk isolated a particular strain of the streptococcus from a dead patient. He grew samples of the bug until he found one that behaved with astonishing consistency. When he injected it into mice, they reliably died four days later.
This sort of repeatability was exactly what the real world of clinical medicine lacked. With a 100 per cent death rate amongst the mice, Domagk knew that any survivors would owe their lives to whatever experimental treatment they received. It gave him an effective way of testing a large number of drugs in a short space of time.
Early experiments confirmed the wisdom of using animals in order to avoid killing humans with experimental drugs. A range of compounds were known to have some antibacterial properties, based on their actions in a culture dish or a test tube. Domagk tried some that were based on gold, a popular therapy. The gold compounds helped the mice survive the streptococcus, but killed them in other ways: the gold destroyed healthy kidneys. As a drug, its aim was not precise enough for it to be a magic bullet. Compounds based on dyes were safer, and in culture dishes they actually worked well, killing bacteria, but in animals they were ineffective.
Using a technique developed to make dyes more colourfast, I. G. Farben’s team presented Domagk with a new stain. In December 1932, Domagk tried it out on streptococcus cultures. It had no effect. Domagk took this apparently useless new dye and tried it out on mice all the same. On 20 December, taking twenty-six mice, Domagk injected all of them with a fatal dose of streptococcus. An hour and a half later he gave twelve a dose of the dye. Here was the difference between him and most doctors. John Cowan felt that serum therapy worked so well for pneumonia that ‘control’ patients were unnecessary, even unethical. That was in a disease where most people got better anyway, and the treatment could kill. Domagk had mice that were virtually guaranteed to die, yet he kept fourteen as controls, just to make sure. This methodological care was the result of a gradual improvement in standards of medical epistemology. Clinical medicine had always claimed to be based on scientific thinking; now it was showing signs of actually becoming scientific itself. It was learning to try and test its most valued hopes. Four days later, on Christmas Eve 1932, all of the control mice were dead. All of the ones given the dye were alive.
Prontosil rubrum – the second part of the dye’s name referring to its red colour – was kept largely secret for the next three years. Exactly why was never made clear; corporate worries about securing patent protection may have been responsible. On 15 February 1935, Domagk finally published his results.
There was remarkably little excitement, even though no drug had ever before worked against this form of overwhelming sepsis, such a common cause of death worldwide. Perhaps because nothing had worked before, doctors found it difficult to imagine that any drug could. A widespread disbelief in the possibility of effective antibiotics provoked the prejudice that Domagk’s new drug was probably not up to much.
In London, a doctor named Leonard Colebrook was in charge of research at Queen Charlotte’s Hospital. It was a maternity hospital, and Colebrook’s particular interest was in puerperal fever. The early suggestions, of Oliver Wendell Holmes and others, that puerperal fever was spread from woman to woman by the hands of those who looked after them, had by this time been accepted. Streptococcus was the cause, infecting the wounds left in a woman’s genitals after she gave birth. In 1920, a friend of Colebrook’s lost his wife to the disease. Moved, Colebrook centred his career on it from then on.
Between 1934 and 1935, Queen Charlotte’s Hospital admitted 210 women infected with puerperal fever. Forty-two of them died. That was despite the best efforts of Colebrook and his staff, all of whom understood germ theory and the importance of hygiene in preventing its spread. (For comparison, in 2000 eighty-five British women died during or in the few weeks after giving birth. That was the national total number of maternal deaths, from all causes, in or out of hospital.)
Reading about Domagk’s new therapy, and finding it more interesting than his colleagues did, Colebrook asked I. G. Farben to supply him with some Prontosil. From what Colebrook read, the drug seemed impressive, but he was uncertain if its benefits would apply to the women he cared for. Carefully, he began to try to discover if they did.
First of all he tried repeating Domagk’s experiments on mice. Colebrook was encouraged, even though the drug did not seem as effective as it had been in Germany. His next step was giving the drug to women already seriously infected with the streptococcus, women so sick that there was little likelihood that even a poisonous drug could make their chances worse. When they appeared to benefit, Colebrook started giving it to others who were less unwell. Out of a series of thirty-eight women given Prontosil by Colebrook, three died. The case fatality rate for the disease over the previous year had been 20 per cent, forty-two out of 210. With the drug, and aware that he was still tending to select out the sickest of the women, Colebrook was achieving a fatality rate of about 8 per cent.
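As a rough check of those figures (each rate is simply deaths divided by cases in its series):

\[
\text{previous year, without Prontosil: } \frac{42}{210} = 20\%
\qquad
\text{Colebrook's Prontosil series: } \frac{3}{38} \approx 7.9\%
\]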
Despite his excitement, and his desperation to find an effective treatment for puerperal fever, Colebrook was still not sure. ‘It behoves us to be very cautious in drawing conclusions’, he wrote, ‘as to the curative effect of any remedy upon puerperal infections.’ The disease was hard to diagnose and hard to predict. Domagk’s mice had definitely been infected with streptococcus and they all, reliably, died. Neither the diagnosis nor the outcome was quite so clear among women at Queen Charlotte’s.
Adding to the confusion was a remarkable suggestion by scientists at France’s Pasteur Institute. Prontosil’s effectiveness, they said, was not because it was a dye. Despite the fact that the ability of aniline dyes to stain bacteria had been the trigger for Prontosil’s development, they thought there was no relationship between the drug’s colour and its actions. The portion that made the chemical red, they argued, was actually irrelevant. What made it work was the other part of it, the remnant left when the dye part of the drug was removed. Chemically this consisted of a sulphone group connected to an amine. It was called sulphanilamide.