The Inflamed Mind

by Edward Bullmore


  To develop the new anti-TB drugs, companies like Roche followed a scientifically logical path. They screened many different molecules in the lab to see which of them was most effective at killing the germ known to be the root cause of the disease, Mycobacterium tuberculosis. They reasoned correctly that if you could find a chemical that killed these bacteria in a test tube, or in an experimentally infected mouse, there was a good chance it might also kill them in the human body and cure the disease. It so happened, at the end of World War II, that a large stockpile of a chemical called hydrazine, which had been produced by the Germans as rocket fuel for planes and flying bombs, was liberated by the Allies for pharmaceutical research. Roche synthesised hundreds of new molecules based on hydrazine and screened them for efficacy in mice infected with the TB germ. They found one molecule, called iproniazid, that stopped the bacteria from reproducing and prolonged the lifespan of infected mice. The next step was to test whether it actually worked in patients, and to do this Roche started a clinical trial.

  At the start of the 20th century, TB, known as the white plague, was the second-commonest cause of death in New York and in 1913 the city opened a vast hospital exclusively for patients with the disease on a secluded plot on Staten Island. Sea View Hospital was designed partly as a sanatorium for patients to rest, to enjoy fresh air, sunlight and uplifting views of the ocean; and partly as a prison to keep them isolated from the rest of the population while their disease remorselessly progressed, despite the scenery, to the point of death. There was no effective treatment. Patients spent their days lying listlessly on their beds, wasting away, depressed and exhausted, waiting for things to get even worse. The clinical trial of iproniazid at Sea View Hospital in 1952 burst upon this dismal scene with awesome power. Patients were immediately energised by the drug, they became much more active and sociable, much hungrier, and their lung disease was stopped in its tracks (Fig. 7). For the first time, patients began to leave Sea View Hospital while still alive, and return to a normal life in the city. The wards were almost completely deserted by the early 1960s and most of what’s left of the hospital today is a grim ruin, designated as a site of historical interest.

  The golden age

  There’s never been much doubt about the impact of iproniazid, and the other wonder drugs of its generation, on the treatment of TB. This was truly a miracle cure, as the newspapers said, built on solid science, as physicians knew. But it was not so obvious what to make of the euphoria unexpectedly caused by iproniazid. Good Cartesian doctors argued that it must be a placebo effect. Many of the early clinical trials were not properly blinded or controlled, meaning that patients knew they were going to be given a drug that they believed could cure them. And if you thought that you were going to be reprieved from a sentence of death by the white plague, well, you’d cheer up, wouldn’t you? But some doctors thought that there could be more to the dancing in the wards than just a placebo effect; they thought it might be a serendipitous clue to hitherto unsuspected actions of this anti-TB drug on the human brain.

  Like every other up-and-coming psychiatrist in New York in the 1950s, Nathan Kline would have been well aware that he was conducting his professional life in a climate of Freudianism. Psychoanalysis was close to its high tide mark as the dominant school of thought in American psychiatry at the time. But Kline was interested in the very different approach to treatment of depression suggested by the Sea View TB trials. So he talked about iproniazid as a “psychic energiser”, as if it might work by tickling up the depressed libido, just like psychoanalysis, but in a tablet not on a couch. He was the leader of a small group of psychiatrists who conducted the first clinical trials to test the drug in patients who were depressed without also having TB.32 In 1957, they reported that they had given the drug to 24 patients, about 18 of whom had shown improvements in their mood and social engagement as a result of treatment for five weeks. There was no experimental control for a possible placebo effect and most of the patients had a diagnosis of schizophrenia, not depression. Nowadays results like these would be regarded as embarrassingly flimsy evidence for anti-depressant efficacy. But less than a year later, 400,000 depressed patients had been treated with iproniazid, despite the fact that it was officially licensed at the time only for TB; and Kline had secured the personal support of the President of Roche to license and market it for depression. Seven years later, iproniazid had been joined on the market by about 10 other new anti-depressant drugs and more than 4 million patients had been treated. Kline picked up the lucrative Lasker Prize in 1964, and was tipped for the Nobel, because “more than any other psychiatrist [he] has been responsible for one of the greatest revolutions ever to occur in the care and treatment of the mentally ill”.33 One of the colleagues who’d helped Kline do the clinical trials didn’t see things quite the same way and took him to court for half of the kudos and half of the $10,000 cash. Kline paid up. The call from Stockholm never came.

  Figure 7: Scenes of joy at the dawn of anti-depressants.

  Life magazine ran a photo story in 1952, capturing the happy, smiling faces of patients who had been written off as hopeless TB cases, consigned to death at the hands of the white plague, until they were enrolled in a clinical trial of iproniazid. A wave of euphoria swept through the wards and patients were “dancing in the halls tho’ there were holes in their lungs” as they celebrated their new lease on life. They were lucky enough to be treated with one of the first effective anti-tuberculosis drugs that, surprisingly, also turned out to be the world’s first anti-depressant.

  Amid the booming sales and the vertiginous career moves of the protagonists, it is important to remember that the question of how, exactly, iproniazid worked as an anti-depressant was not settled. “Psychic energiser” might usefully mean a lot of different things to a lot of different people but scientifically it was a travesty. How could a drug, a physical thing, impart some quasi-libidinal energy to the psyche? There had to be a better explanation of how iproniazid worked in terms of the physical or chemical mechanisms of the brain, at least as far as those mechanisms were currently understood. And at the time - the early 1960s - there was a lot of excitement about the synaptic mechanisms by which nerve cells communicated with each other. The buzzwords of the day were the names of so-called neurotransmitters such as dopamine and adrenaline, together known as catecholamines. Enterprising psychiatrists, seeking a non-psychic mechanism of action for iproniazid and other anti-depressants, forged from this new neuroscience some highly influential theories about how antidepressant drugs worked and about what causes depression in the first place. But to understand where those theories came from and how they led to Prozac, we first have to step back a bit.

  We take it for granted now that in the human brain there are about 100 billion nerve cells that must work together as a system, the central nervous system. Clearly, communication between individual cells must be very important in enabling them to function as a system. But how are nerve cells connected to each other? The first person to start answering this question correctly was a Spanish contemporary of Freud’s, called Santiago Ramón y Cajal, who is now widely regarded as the founding father of modern neuroscience. He was an extremely skilled microscopist, who was able to use new staining techniques to highlight individual nerve cells, to isolate them visually from the mass of surrounding nervous tissue, so that when he looked down the microscope he could see the intricacies of each cell in minute detail. Very few people had ever seen the nervous system in this light before; one of them was Camillo Golgi, the professor of anatomy at Pavia, who had earlier invented the microscopic staining techniques that Ramón y Cajal was using.

  As well as being able to see such almost unprecedented sights, which was no mean feat in itself, Ramón y Cajal was also able to draw what he saw with great precision and artistry. And he was a workaholic. He single-handedly produced an enormous number of microscope slides and drawings of nerve cells, each executed to the highest possible standard. He published magisterial papers and text-books on the vertebrate brain that encompassed humans and many other animals, at all stages of development, in health and disease (Fig. 8). And his authoritative view was that nerve cells often made very close contact with each other, but they remained individually distinct, meaning there must be a space or a gap between even the closest pair of cells. This was a bold claim to make, for such a scrupulous observer of nature, because Ramón y Cajal could not actually see a gap. He reasoned that it must be too narrow to be visible at the highest magnification that 19th-century microscopes could provide.

  Figure 8: The seer and the synapse. As a boy, Santiago Ramón y Cajal most wanted to be an artist but was persuaded by his father to become a physician, which he dutifully did although he didn’t like it. As a young man, he used his artistic talent to produce stunningly beautiful and accurate pen-and-ink drawings of the nerve cells he saw clearly for the first time, using a brass microscope on an old kitchen table. He could see that individual cells made very close contact with each other, to form a network of nerve cells. Where two cells contacted each other, not even he could see a gap between them. But he was convinced that a gap existed and in the 1950s, about 20 years after he died, he was vindicated. It is now taken as a matter of fact that there is a synaptic gap, bridged by neurotransmitters like serotonin, between nerve cells that are closely contiguous but not continuous (Fig. 10).

  Not everyone agreed with him: Golgi for one. When Golgi looked down the microscope, like Ramón y Cajal, he saw that there were countless densely stained nuclei in nervous tissue, and thin strands of cytoplasm forming an intricate tracery between the nuclear bodies. And like Ramón y Cajal, Golgi couldn’t see a gap or a space that marked a boundary between one cell and another. But to Golgi that meant that there wasn’t a gap. If it was invisible, it couldn’t be there. He described what he saw as a single continuous web, or syncytium, of nervous tissue. And he had some sharp questions for Ramón y Cajal. Why should we believe that there is a gap between nerve cells when we can’t see it? And if there is a gap between cells, even an invisibly narrow one, how do the cells communicate with each other? The Nobel Prize committee couldn’t decide who was right: in 1906, Golgi and Ramón y Cajal were jointly awarded the prize for their equally brilliant work and mutually contradictory theories. It was only about 40 years later, after both men had died, when electron microscopes were used for the first time to look at nerve cells with much greater magnification than the old light microscopes, that the synaptic gap between nerve cells, which Ramón y Cajal had always known was there, swam sharply into focus.

  That dealt with the first of Golgi’s objections - we didn’t have to believe in an invisible gap any more - but it made his second question all the more pressing. Now that we know a gap exists, how can nerve cells communicate across it? The synaptic gap is typically less than a thousandth of a millimetre from one nerve cell to another. That’s very narrow but it’s not nothing. It’s still a gap, filled by a watery solution of salts and molecules that resists the passage of an electric current. So the electrical impulses that carry information from one end of a single nerve cell to the other can’t simply carry on through the synaptic gap, as if it wasn’t there, to electrically activate the next-door cell. Somehow the electrical signal has to be converted into a different kind of signal that can bridge the gap, or pass the baton, to communicate between two nerve cells.

  We now know that synapses bridge the gap by chemical signalling. The upstream nerve cell produces chemical messengers, called neurotransmitters, and releases them into the synaptic gap when it is electrically stimulated. These chemicals quickly diffuse across the gap and bind to receptors on the surface of the downstream cell, triggering its electrical activation. That’s how the electrical signal jumps from one nerve cell to another. In the 1950s, it was becoming clear that the brain used many different neurotransmitters for this purpose. There wasn’t a one-word answer to Golgi’s second question, about how nerve cells communicated with each other. Some synaptic gaps were bridged by adrenaline molecules, while other nerve cells used noradrenaline, or dopamine, or serotonin, as a neurotransmitter.

  When scientists started thinking about what iproniazid might be doing in the brain that could explain its euphoriant and anti-depressant effects, they realised that the drug could boost signalling across synaptic gaps between nerve cells that used adrenaline or noradrenaline as chemical messengers. Iproniazid inhibited an enzyme that broke down adrenaline after it was released into the synaptic gap, effectively turning off the chemical signal soon after it had been turned on. By inhibiting its normal breakdown, iproniazid prolonged and intensified the effect of adrenaline in the synapse. Could this be its mechanism of action as an anti-depressant drug? The answer seemed to be yes for iproniazid and yes, more generally, for all the other new drugs that had followed iproniazid into the rapidly growing market for anti-depressants. Remarkably, they all turned out, one way or another, to boost the effects of synaptic transmission mediated by adrenaline or noradrenaline, collectively known as catecholamines.

  It all seemed to fit together. In 1965, Joseph Schildkraut, later to become a professor of psychiatry at Harvard, published an influential paper that took the next step.34 His title said it all: “The catecholamine hypothesis of affective disorders”. Given that anti-depressant drugs boosted the effects of adrenaline and noradrenaline, he argued that the reason patients were depressed in the first place was that they didn’t have enough catecholamines in their brains. This might not seem like a great leap, but it is.

  Schildkraut was proposing that adrenaline and noradrenaline not only explained the mechanism of action of antidepressants - how the drugs worked - but were also the fundamental cause of depression. Drugs like iproniazid were imagined not just to increase the availability of key neurotransmitters but to restore their normal levels in the brain, to rescue depressed patients from a hitherto unrecognised condition of brain adrenaline or noradrenaline deficiency. His article was admirably nuanced. He did not over-sell the idea. He offered it as a heuristic, not a matter of fact, and he was well aware that there was very little evidence that depressed patients did indeed have reduced levels of catecholamine signalling before they were treated with anti-depressant drugs. But it must have seemed to him and his contemporaries that it was only a matter of time before that last piece of the puzzle slotted into place. What was by then known as the psycho-pharmacological revolution had moved so far, so fast, from there being no effective drugs in 1955 to dozens in 1965, that another turn of the wheel must surely be enough to drive psychiatry forward to complete enlightenment.

  Scientists working for Eli Lilly, an American pharma company, thought they knew what to do next.35 They assumed that Schildkraut’s theory was correct, so far as it went, but not complete. They knew that adrenaline and noradrenaline were not the only neurotransmitters in the brain: there was also serotonin. They started from the idea that serotonin was a plausible new target for anti-depressant drug development. Then they discovered molecules that could boost serotonin transmission, by blocking its reuptake from the synaptic gap, and called them selective serotonin reuptake inhibitors (or SSRIs for short). By the mid-1970s they were ready to put their lead molecule - their best SSRI - into a clinical trial for depression. But the senior management of the company wasn’t convinced it would work and financed only a small-scale study. It failed. The patients treated with the SSRI were no less depressed at the end of the trial than the patients treated with a placebo, an anodyne sugar pill. The scientists, who had been working on it for a decade by then, pushed on. They believed passionately that their drug must work for depression because, unlike the accidental discoveries of iproniazid and its ilk, the development of SSRIs was based on a mechanistic rationale from the outset. They thought there was a reason to believe, a reason to look past the unwelcome news of a single failed trial, and to try again. In its subsequent trials, their SSRI was significantly more effective than the placebo. By 1987, it was licensed as a new anti-depressant and marketed under the brand name of Prozac.

  It meteorically attained a rock-star status unlike that of any other psychiatric drug before or since. In 1990, Prozac was on the cover of Newsweek. By 1995, it was generating two billion dollars of sales worldwide and Prozac Nation36 was the title of a best-selling memoir of life with depression. By 2000, it had been prescribed to about 40 million patients and Fortune magazine had listed it as one of the products of the century. But, in the perfect light of hindsight, the launch of Prozac was the blazing sunset, not the dawn, of the golden age of anti-depressants. In the 30 years following the seminal and surprising observation that an antibiotic caused outbreaks of dancing in a TB sanatorium, industrial and academic researchers collectively produced many new drugs, and many new theories about how they worked. In the 30 years following Prozac, the field has not flourished but fizzled out. There have been no major new advances in drug treatment, or psychological treatment come to that, for depression or any other mental health disorder, since about 1990. I’ll say that again.

  When I started specialist training as a 29-year-old psychiatrist, at St George’s Hospital and then the Bethlem Royal and Maudsley Hospital, in London in 1989, it was recommended we study a couple of standard textbooks that covered all the major theories and therapeutics that were then considered important for psychiatry. To this day, in 2018, I could still safely and acceptably treat most patients with mental health disorders based solely on what was written in those textbooks. That would not be true for my contemporaries who specialised in different areas of medicine at the same time as I went into psychiatry. If I was an oncologist, or cancer physician, who treated cancer patients in 2018 within the limits of what was known about cancer biology and anti-cancer treatment in 1989, I would be struck off for malpractice. Likewise, if I was a rheumatologist treating patients today without knowledge of anti-TNF (tumour necrosis factor) antibodies, or if I was a neurologist treating patients in ignorance of recent advances in immunological treatment of multiple sclerosis.37 In most other areas of medicine, the last 35 years have witnessed sufficient scientifically driven change in theory to reveal that what was known in 1989 may not have been completely wrong but it is certainly not good enough for clinical practice in 2018. Only in psychiatry has time apparently stood still. What we had for depression when I started - SSRIs and psychotherapy - is pretty much still all that we’ve got therapeutically. They are both modestly effective on average and more markedly beneficial for some patients. They’re OK. But no major new treatments for depression - or any other mental health disorder - have been introduced since the sun of Prozac sank beneath the horizon of progress.

 
