We have supposedly progressed far from that, with our fMRIs and PET scans glowing gold and green. But still today, when it comes to the persistent use of these chemicals cooked up by men who hope to help (and reap the rewards of that help), we still ask: how? It’s the same question I ask of my psychopharmacologist as he scrawls my next prescription, the handwriting, as always, illegible and full of signs and symbols that mean nothing to me. Even all these years later, I take each prescription and make of it an origami plane or plant or, my favourite, a tiny white swan with enfolded wings and a miniature beak, perched gracefully on my palm and delivered to the pharmacist, who, seemingly with a snap of his fingers, a wave of his wand, turns my bird into a bottle of tablets.
4
SSRIs
The Birth of Fluoxetine
The First SSRI
In the 1950s and ’60s, scientists were beginning at last to unpack the black box of the brain. The post-mortem studies of those rabbits that had been given reserpine continued to be an important benchmark, showing as they did that reserpine lowered serotonin while the tricyclics raised it. In the mid-1960s, Joseph Schildkraut, a psychiatrist and researcher at Harvard University in Massachusetts, cemented the theory that evolved into the monoamine hypothesis of depression. Monoamines, remember, are neurotransmitters such as dopamine, noradrenaline, adrenaline and serotonin. Schildkraut, building on a growing consensus among scientists, theorised that depression was the result of a deficit of some or all of these neurotransmitters. Noradrenaline, he thought, was related to alertness and energy, as well as to anxiety, attention and interest in life; serotonin to anxiety, obsessions and compulsions; and dopamine to attention, motivation, pleasure and reward. Schildkraut and other proponents of the monoamine hypothesis recommended that psychopharmacologists choose an antidepressant on the basis of the patient’s most prominent symptoms: anxious or irritable patients should be treated with noradrenaline reuptake inhibitors, while patients displaying loss of energy or lack of enjoyment in life would do best on drugs that increase dopamine.
This theory predominated for roughly a decade and was then further refined by the Swedish researcher Arvid Carlsson, an eventual Nobel laureate. In 1972 Carlsson, funded by the pharmaceutical firm Astra, patented zimelidine, the world’s first selective serotonin reuptake inhibitor (SSRI), a class of drugs that increase the amount of serotonin in the synaptic cleft – the space across which nerve impulses are transmitted – by hindering its reabsorption, or reuptake. Carlsson’s Zelmid, born in Sweden and disseminated throughout Europe, suggested that the critical monoamine in depression was serotonin. Within months of reaching the market, however, Zelmid began making some people ill with strange flu-like symptoms and, more worrying still, Guillain-Barré syndrome, a neurological condition that can be fatal.
Astra quickly pulled its drug from the pharmacies, but not so fast that the pharmaceutical giant Eli Lilly failed to glimpse all the sunniness and smiles. The episode prompted Lilly’s researchers to take another look at their own serotonin compound, which had been stalled for several years with nothing more than a numerical label, LY-110140 – a drug they had created but had not even bothered to name, because they hadn’t yet decided what to do with it. Serotonin, after all, is present outside the brain. In fact it’s omnipresent in the body, playing a role in sleep, digestion and blood pressure, among other things. Given this potentially wide range of applications for LY-110140, Lilly had solicited the opinions of leading scientists as to its possible uses. Perhaps it could be a weight-loss drug, or an anti-hypertensive, both of which seemed more lucrative to Lilly at that time than a drug for depression, which one scientist had suggested. Lilly executives were quick to shoot down that idea, unconvinced that their compound would actually work as an antidepressant, or that there would be a significant market for it. With no decision made, LY-110140 languished in the shadows until zimelidine came along, proving that a serotonin-specific drug could indeed improve and regulate mood – if only the unfortunate side effects could be curtailed.
Eli Lilly is located in Indianapolis, on a gracious campus with gleaming buildings of steel and stone. It was here, in the 1970s, that Ray Fuller, Bryan Molloy and David T. Wong, working from the compound LY-110140, created fluoxetine. After zimelidine’s success in treating depression, they knew precisely what they were seeking: a chemical that would increase the amount of serotonin in the brain. The antidepressants that preceded fluoxetine came to be considered ‘dirty drugs’ because they worked on multiple neurotransmitter systems at once and therefore caused a host of unpleasant somatic side effects. By targeting serotonin alone, the inventors of fluoxetine sought to cure depression while sparing patients the blurred vision, the dry mouth, the excessive sweating, the sluggishness and the weight gain that were part and parcel of prior antidepressant treatment. In 1975 Lilly finally gave its creation a name, fluoxetine, which would eventually also be known by its brand name, Prozac, especially in the United States.
From Nerves to Tears
Drugs, however, are not mere chemical concoctions. They are capsules, tablets, liquids, what have you, released into a culture that will, inevitably, bestow meaning on them. In the 1930s, ’40s, ’50s and beyond, the culture was largely one of anxiety. When people suffered, they attributed it to their ‘nerves’, while psychoanalysis posited anxiety as the root cause of almost all neurotic problems. Depression was seen as a fringe condition, and a deadly serious one to boot. The Diagnostic and Statistical Manual of Mental Disorders of the 1950s lists four kinds of depression, three of which include psychotic features. The depressed were often patients of the back ward, lost to light and hope. This doesn’t mean that there weren’t milder forms of the disorder; it’s just that people were far more prone then than now to understand their wayward moods as a bad case of the jitters.
Then along came Roland Kuhn and Nathan Kline. Kline wasn’t merely a show-off. He also made it his mission to educate the public about depression, visiting family doctors and counselling them to diagnose the disorder when presented with a patient who had psychosomatic complaints. Slowly word spread that the country was suffering not from nerves but from numbness. What had been a fringe illness gradually became commonplace as the culture let go of Freud and his theories. There is no definitive point at which this occurred; it was a slow process, with the new antidepressants and their inventors contributing to the change. When Kline won the much-coveted Lasker Award a second time, for his work on the MAOIs (he is the only person ever to win it twice), he declared that ‘more human suffering has resulted from depression than from any other single disease.’
Not long after that, the Freudian adherent Aaron T. Beck broke with psychoanalytic tradition and created cognitive behavioural therapy (CBT), which taught patients to identify flawed or maladaptive patterns in their thinking and behaviour, and to replace them with patterns more measured and less conducive to despair. It was a mode of treatment especially suited to disorders of mood. Nervous illness waned as patients learned, through CBT, that their depressions were borne on the back of self-critical thinking and that, by reframing negative self-talk, they could lift their sunken spirits. The therapy grew and grew in popularity; it now has millions of adherents.
Some might say that the MAOIs and the tricyclics caused the interest in and the awareness of depression, that Kline and Kuhn manufactured a disorder for the new drugs to treat. The antidepressants of the 1950s and ’60s, however, were never superstar chemicals, partly because, unlike the antipsychotic chlorpromazine, which was pushed for a multitude of off-label uses, they were never directly advertised to consumers. Their range was narrower from the beginning. Furthermore, though they did not cause tardive dyskinesia, they had whole rafts of side effects of their own, some merely extremely unpleasant, others downright dangerous. An advertisement in a medical journal for the tricyclic amitriptyline in 1965 suggests that the drug might replace electroconvulsive therapy, underscoring the seriousness of the condition it was meant to treat. But although the first antidepressants may not have been household names, they nevertheless started a subterranean cultural shift in our understanding of ourselves, priming us for fluoxetine, so that when the drug was finally approved for release in 1987, we were at last really ready to see ourselves as sad.
Specificity?
In the US mass-marketing campaign that accompanied fluoxetine’s eventual release, Lilly touted the supposed specificity of its drug, likening it to a magic bullet, or a Scud missile that lands with programmed precision on millimetres of neural tissue. This, however, is misleading. Although fluoxetine is called an SSRI, in reality the phrase ‘selective serotonin reuptake inhibitor’ does more to conceal than to reveal. The truth is that there is no way to have a truly serotonin-specific drug, because serotonin casts a wide net over the whole of the human brain, is intricately tied up with our other neurotransmitter systems, is found throughout the human corpus – especially in the gut – and, as noted earlier, is implicated in dozens of physiological functions, from sleep and appetite to pain perception and sensory integration. Indeed, serotonin is one of the oldest neurotransmitters on the planet, present on Earth for millions of years and found in life forms as diverse as birds, lizards, wasps, jellyfish, molluscs and earthworms. Given that wide net, cast not just across species but throughout the human body and brain, it is virtually impossible to create a drug that acts on serotonin alone: the chemical touches so many systems and is so intimately bound up with dopamine, noradrenaline, acetylcholine and all the other neurotransmitters that flicker inside our skulls.
Still, this didn’t stop Lilly from celebrating its brand-new compound as a site-specific drug that, given its putative ability to home in on a tiny target, would cause few if any side effects. Within six months of fluoxetine’s release in the United States under the brand name Prozac in January 1988, doctors had written more than a million prescriptions for it in that country alone. Sales reached $350 million in the first year. Two years later the drug appeared on the covers of both Time and Newsweek as the long-coveted cure for depression. It seemed that everyone was either talking about Prozac or taking it and, indeed, feeling fine.
Depression on the Rise
And yet something strange was happening. If fluoxetine really was the cure for depression, why did the number of depressed patients start to rise in concert with the drug’s release? When anti-tubercular drugs were discovered, tuberculosis rates dropped off sharply and the disease eventually all but disappeared. When antibiotics were invented, deaths from infection became less frequent. Vaccinations all but wiped out dreaded illnesses like measles and tetanus. Each of these treatments clearly contributed to a healthier society. The opposite happened with fluoxetine. The drug was offered to society and society just got sicker, and with precisely the illness the drug was created to treat. In 1955, one in 468 Americans was hospitalised for mental illness. By 1987, however, one in every 184 Americans was receiving disability payments for mental illness. Two decades after fluoxetine’s release, almost 4 million mentally ill US citizens were receiving financial support through government programmes. Indeed, the reported incidence of depression in the United States has increased a thousandfold since the introduction of antidepressants. Nor is rising depression solely an American phenomenon: a 2016 study by the Royal College of Psychiatrists found that mental health disorders had become the main reason for receiving benefits in the UK. Between 1995 and 2014 the number of claimants increased by 103 per cent, to 1.1 million, and by 2014 almost half of all benefit claims were for a mental health problem. A cynic might say that the tablet meant to cure depression was in fact causing it.
There are multiple theories to account for the astounding rise in diagnoses of depression and for the odd timing of that rise, coinciding as it did with the release of a supposedly superior antidepressant designed to treat the disorder. The most obvious explanation is that depression has always been as terribly common as it is now, but that in past decades it was also terribly stigmatised, and that it took fluoxetine to lift the stigma and allow floods of people to come forward and claim their cure. This theory, however, cannot explain why, thirty years after fluoxetine’s release, rates of depression continue to rise. Surely the stigma is gone by now; depression has become a disorder that is almost hip to have.
Perhaps it makes more sense to look first at the society into which fluoxetine was released. The drug debuted in the United States as Prozac in the late 1980s, at the end of Ronald Reagan’s presidency, and became a blockbuster even before 1993, when Peter Kramer published his popular book Listening to Prozac, claiming that the drug made us better than well and that cosmetic psychopharmacology had finally arrived. The 1980s were a time of fierce individuality in a country that had always prided itself on autonomy, and now did so even more. President Reagan was something of a Marlboro Man who cut funding to social service agencies and admonished American citizens to get off their sofas and earn a living, acquire a skill, do something, anything, with the ultimate goal of creating a self capable of surviving in a bubble. Money for benefits such as ‘welfare’ was slashed; mothers with young children were told to find day care and a job or, failing a job, job skills training at centres set up for the purpose. Nursing homes, day care centres, after-school programmes, homeless shelters – all institutions geared towards maintaining the fabric of a cohesive and helpful society – lost their federal funds and dwindled in size.
I remember it well. In my mid-twenties, I was the director of a small community mental health centre serving SPMI patients, those with ‘severe and persistent mental illness’, schizophrenic people felled not only by this dread disease but also by the added burdens of poverty and homelessness, the kind of street people you find muttering in alleys or talking to invisible angels. I watched as our agency’s state and federal support was halved, and then quartered, as therapy sessions that had been unlimited were reduced under Reagan’s rule to just six, as though that were adequate for penniless patients haunted by visions and voices. But meanwhile Wall Street boomed and the stock market more than doubled during Reagan’s two terms. The images of the 1980s were sleek black limousines and sleek silver skyscrapers, with the money pooling at the upper end of the social spectrum while the rest lost what little they had.
‘What does this have to do with Prozac?’ you might wonder. Everything, really, if you take a sociological view of what is usually understood as a deeply individualistic experience: depression. For a moment, step back and scan the horizon. Study after study has shown that rates of depression rise as societies grow more isolative. For the upper class, the Reagan years may have been lucrative, but for those who depended on a web of social services, Reagan’s presidency was difficult, if not destructive. Help went away. There were no more handouts and thus, for some people, no more helping hands. Schizophrenics and others with mental illness lost their access to treatment providers.
I remember my patient Amy Wilson, a 31-year-old woman with a red seam of a scar across her face where a boyfriend had broken her nose with a baseball bat and left her beautiful features slightly askew. She had glitter-green eyes, her lashes coated with mascara thick as tar, her tapered nails painted a carmine red. Despite a stunning facade, Amy struggled with devastating depression and relied on her twice-weekly therapy sessions for succour and perspective. When her government-funded medical insurance was cut and our six sessions ran out, there was nothing I could do. I met her by accident in the supermarket one day, her three toddlers jammed into a trolley filled with cheese puffs and cheese dip, her face as pale as a pillow. Amy is just one of the thousands, maybe millions of people who suffered in the avid ‘do-it-yourself’ society that marked the Reagan years.
It would be overly simplistic, however, if not absurd, to target Reagan as the sole cause of social breakdown and the rise in depression that may have resulted from it. Reagan, after all, inherited the presidency of a culture that had been steadily moving towards the sort of isolative individualism that breeds widespread depression. He accelerated the process, but its provenance lies in the history of that country, as far back, perhaps, as the nineteenth century, when Tocqueville, coming from France to watch Americans at work and play, remarked on the rampant and insistent autonomy that undergirds so much of what we strive for. In Asia and Africa it is not unusual for whole families to share a bedroom and a bed, while in the United States we fetishise Richard Ferber, who admonishes us to let our children cry it out in their own cots in dark rooms, a method that is also practised in the UK. We know that infant animals separated from their mothers secrete the stress hormone cortisol, and that high levels of cortisol, while not causative, are implicated in depression.
In 1897 the Frenchman Émile Durkheim, whom some call the father of sociology, published Suicide, the classic text based on his study of suicide rates among Catholics, Protestants and Jews. His basic question was this: which religious group had the most suicides, which had the fewest, and why? What Durkheim discovered was that Protestants were the most likely to kill themselves and Jews the least likely, a finding that was surprising because in Judaism, as opposed to Christianity, there is no eternal punishment for killing yourself. Under church law, a person who committed suicide not only suffered the eternal fires of hell but also brought shame upon his surviving family and, in past times, even punishment; in the seventeenth and eighteenth centuries, for instance, the church regularly confiscated the entire estates of suicides, leaving their families destitute. The remaining family members had to forfeit their cows, their farming implements and all other items necessary for survival, let alone prosperity. In seventeenth-century England, a miller who had inflicted upon himself a fatal wound was reported to have cried out, ‘I have forfeited my estate to the king, beggared my wife and children!’ Compare this with the Jewish response to suicide, in which the victim still receives a proper burial and a full-scale shiva, the traditional seven-day period of mourning.