Let’s say that, before you were born, your father had acquired the characteristic of drinking heavily: that bad habit of his could have a detrimental effect on your genetic inheritance. According to Lamarckism, you would inherit a higher risk of becoming an alcoholic because your father had got into the habit of drinking heavily. And your alcoholism would have a similar but greater detrimental effect on the moral character and psychiatric risks of your children. A crude but common rule of thumb in 19th-century psychiatry was that alcoholism in the first generation would lead to madness in the second and idiocy in the third. The Lamarckian mechanism was supposed to drive an escalating degenerative process, so that each generation’s psychiatric, criminal and moral misbehaviours were recapitulated and magnified in the next.
You could say that it was their neglect of natural selection as a possible answer to the question - why is there mental illness? - that led Maudsley, Kraepelin and many others to make ethically unacceptable recommendations for social cleansing of psychiatrically degenerate lines or races. It was not only in Germany that eugenic thinking was strong in psychiatry and medicine between about 1880 and 1940, a period of time that is now called Darwin’s eclipse, when natural selection was largely forgotten and ideas of social selection prevailed brutally. We all know how those ideas played out politically and psychiatrically. There is no reason to go that way again.
The end of Darwin’s eclipse roughly coincided with the emergence of the modern evolutionary synthesis, in the 1940s and 50s. This is the big idea, now as close to axiomatic as it gets in biology or medicine, that evolution is entirely explained by natural selection of genes. Thinking afresh about the heritability of depression, in this neo-Darwinian context, we come back to the same question: what is the survival advantage of depression? The answer is still the same: there is none.
People with major depression, on average, live shorter lives, are more likely to have chronic medical disorders, are more likely to be unemployed, and less likely to be highly productive if they do have a job. Crucially, depressed people are likely to have fewer children, and the children of depressed parents are slower to achieve normal growth milestones. Not only is there no social or material advantage to major depression in this life, there is no obvious advantage to the next generation, and ultimately no promise of immortality for the genes involved in making these depressive behaviours unfold over untold generations. You might think, on the face of it, that genes for depression should have been selected out of the population millennia ago; we should by now have reached the sunlit uplands where the shadow of melancholia never falls. But we haven’t. And I doubt we ever will. So there must be something good about being depressed; there must be some advantage to depression that accounts for its natural selection, but what is it?
A savannah survival story
We can make this question a lot easier to answer if we change the wording slightly. Instead of asking what is the survival advantage of depression, let’s ask ourselves what was the survival advantage of depression? Maybe the genes that code for depressive behaviours were naturally selected millions of years ago because being depressed then was somehow advantageous in a way that it isn’t now? We know that many human brain genes are ancient, like the gene for the serotonin receptor that goes back as far as the humble nematode worms, like C elegans, which evolved at least 500 million years ago. So it makes sense that there should be a lag time in evolution. Once a gene has been selected in a worm, or a dog, or an ancestral caveman, let’s say, it will often stick around, or be conserved, in the modern human genome. So we may find ourselves doing genetically programmed things in 2018 that made perfect sense on the ancestral savannah but don’t seem to be working out so well here and now.
OK, we don’t know much about the ancestral savannah, still less about the selection pressures on pre-human apes and mammals. We weren’t there at the time. And we can’t very easily do experiments on evolutionary processes that have been ongoing for hundreds of millions of years. We have to make up stories about what might have happened. Then try to test our best guess scientifically. As it happens, some of the most compelling (and testable) recent evolutionary theories of depression have focused on natural selection of genes controlling the immune system.9, 10
The story is generally told about tribes of early humans struggling to survive on the African plains about 150,000 years ago. It would have been a harsh challenge indeed to find enough food, to survive attacks by predators and rival tribes, to find a mate, to raise a family. There were many threats to survival but number one was infection. Life provided many opportunities for infection, like childbirth, injuries and wounds, and there was very little effective treatment. The infant mortality rate was high, the maternal death rate in pregnancy and childbirth was high, and the men involved in hunting and fighting usually died in their twenties. A lot of this attrition was due to infectious diseases that started trivially with a cut hand, or at the stump of a roughly severed umbilical cord. And there were also contagious infections, plagues, that passed from person to person, and could decimate a tribe. Obviously, anything that the body can do in terms of defence against infection is highly advantageous in this context. You can imagine that gene mutations that made the macrophages slightly angrier, or that made cytokine signalling slightly stronger, could be advantageous if they strengthened the front-line defences of the innate immune system against the bacterial killers that attack babies and young children. Genes that have randomly mutated to deliver enhanced killing power against germs will be naturally selected because people who have inherited them will be more likely to survive childhood and reach the reproductively active stage of life at puberty. In an environment like the ancestral savannah, with a high infant mortality rate due to infection, there will be a strong natural-selection pressure on genes that boost innate inflammatory response.
Those inflammatory genes could help people survive in many ways. They could increase rates of wound healing and reduce the risk of a local infection becoming global. They could also change behaviour. In much the same way as an infected animal, a wounded or sick human, like me after root canal work, shows a characteristic pattern of behaviour. Ill or invalid human beings withdraw from social contact, reduce their physical activity, eat less, and are less hedonic. They are also quietly anxious and have disturbed sleep. This is a very engrained and consistent pattern of behaviour, written into our DNA by genes that must have evolved millions of years before H sapiens. As we have seen, this sickness behaviour is strongly driven by innate inflammatory mechanisms. So the genes that are naturally selected to defeat infection by killing bugs on the front line can also be expected to drive sickness behaviour. But how could sickness behaviour be an advantage for survival on the savannah?
We can imagine that temporary withdrawal from the tribe could protect our ill ancestor, the “patient”, from demanding social obligations and competition at a time when he needs to rest and use all his resources to fight off infection. In this comforting vision, the isolated patient is protected, licensed to do very little but recover. Loss of appetite could also favour survival by preventing him from wasting energy on digestion or searching for food at a time when all the biological energy in his body must be commandeered to activate the macrophage army in the all-consuming fight against infection. This makes sickness behaviour sound like genetically programmed convalescence: a good thing for the patient, designed to hasten his recovery. But we can also imagine that sickness behaviour had another, less comfortable side to it, out there on the savannah. When night fell, and the rest of the tribe was gathered around a fire and food, the isolated patient could easily be forgotten in the marginal shadows where predators lurked. If the tribe was attacked by a rival tribe, or forced to migrate because of drought, the patient would likely be among the first casualties, exposed at the edge of the group. Isolation would increase the external threats to him. So sickness behaviours of anxiety and sleep disturbance might be advantageous to his survival by keeping him alert to danger, even while all he wanted to do was rest and heal his septic wound.
The key sickness behaviour of social withdrawal is thus both protective and threatening for the patient. But for the tribe it is more purely protective. Contagious disease was a special threat to ancestral tribes, which were originally not much more than extended families of a few hundred, highly interrelated people. Disease could spread rapidly and the genetic similarity between tribal members would mean that if a germ proved lethal to one of them it would likely be lethal to them all. A catastrophic plague could wipe out the entire tribal gene pool. By isolating the patient, the innate immune behaviour of social withdrawal reduces the risk that currently uninfected but genetically related members of the tribe will also become infected. You can think about social withdrawal as a form of quarantine. The patient’s inflamed behaviour puts him at anxiety-provoking risk of being picked off at the margins, for the sake of making the tribe as a whole more resilient to contagious infection. You can imagine that sickness behaviour was naturally selected as much for the survival of the tribal DNA as for the patient’s individual DNA. You could say that natural selection has picked genes that will drive an infected individual to put himself at risk for the common good. The 15th-century leper colony visited by Paracelsus on the edge of Nuremberg is a more modern example of the tribe’s highly conserved instinct to protect itself from contagion, by quarantining or excluding potentially infectious invalids.
In any case, as the savannah story would have it, at some point in the deep time of human prehistory, genes were selected to increase the inflammatory response to infection, so that our ancestors, or at least our ancestral tribes, were more likely to survive. Selecting more inflammatory genes makes sense in terms of accelerating and magnifying the body’s rapid rebuttal of an actual infection. But you can imagine that it would be even more advantageous to select genes that could predict an infectious threat, as well as respond to infection aggressively once it had occurred.
If the macrophage army was revved up before the first hostile germs invaded that would give it a much better chance of wiping out the infection before the enemy started multiplying and the infection became more serious. Out on the savannah, we can imagine that infection was strongly predicted by trauma, by injuries or wounds, often sustained in hunting or fighting. Since even a minor combat wound could be complicated by fatal infection, it would make sense to select genes to detect socially competitive or dangerous situations and to alert the immune system to be prepared for an imminent risk of infection. Then the ancestral patient’s body would already be inflamed before he was injured by his tribal enemy, and before his macrophages saw their bacterial enemy for the first time.
This is a story about our evolution that could help us answer the question why. We have inherited genes that will accentuate all aspects of innate inflammation, including depressive behaviour, in response to actual or threatened infections. And the same genes that conferred a survival advantage in response to the fact or threat of infection, on the savannah, have passed down to us many generations on as apparently disadvantageous genes that make us more inflamed in response to social conflict, and more depressed in response to inflammation.
Mrs P might have been prone to experience depressive symptoms in response to the surge in cytokines kicked off by her joint disease because she had inherited genes that kept an ancestor alive after childbirth 100,000 years ago. The burnt-out teachers might have been at risk of becoming inflamed in response to the various social threats of life in the metaphorical jungle of a modern classroom because they inherited genes that had protected their ancestors against post-traumatic infection when they were fighting a rival tribe in the real jungle. One might even wonder if the stigmatisation of depression in 2018 is somehow related to the isolation of ancestral tribe members who were behaving as if they were inflamed. Could the common feeling that “we don’t know what to say” to our depressed friend conceal an ancient inherited instinct to recoil from close contact with people who are behaving as if they are inflamed and potentially infectious?
The savannah story is seductive because it seems plausible, and it is aligned with neo-Darwinian theory, as it sweeps seamlessly from wounded hunter-gatherers to stressed-depressed patients in the NHS. But it is only one of many plausible evolutionary rationales, or “just so stories” as some scientists sceptically call them, that you could make up to explain the survival value of depression. We need to test the savannah story, somehow, to be sure that it’s more than a story.
We can’t experimentally rerun human evolution in a germ-free environment, from the birth of C elegans 500 million years ago, to show that without ancestral exposure to infection the genes that cause inflamed depression in H sapiens in the 21st century would not have been selected. However, that is not to say the story can’t be tested scientifically at all. If the savannah survival story is true then at least some of the genes that increase risk for depression should be genes that control the immune system, and that is a prediction we can test experimentally in the real world.
We know that depression is heritable - it runs in families - so your risk of being depressed is increased approximately three-fold if both your parents are depressed and increased approximately two-fold if one or more of your siblings is depressed. However, depression is not as strongly heritable as some other psychiatric disorders, like schizophrenia or bipolar disorder. And that may be part of the reason why the individual genes that underpin the heritability of depression have proven more difficult to identify than the genes for schizophrenia or Alzheimer’s disease. It is also likely that depression, like many other common and heritable disorders, is not determined by one or two genes with strongly adverse effects on brain and mind phenotypes, but by many genes, each contributing a small quantum of risk for depression. To find many genes of weak effect in a moderately heritable disorder means we have to test all 20,000 genes in the whole genome, not just a few genes; and that in turn means we need to collect data on very large numbers of patients. It is a numbers game and psychiatric genetics has only very recently amassed big enough numbers on depression.
Some of the first major studies that searched the whole genome for genes that increased the risk of depression drew a blank. They found nothing, no significant differences in the frequency of different DNA variations between patients with depression and healthy volunteers. But although these studies seemed big at the time, comprising data on tens of thousands of patients, it turns out that the reason they failed to find anything was that they were not big enough. Very recently, in a study that has only just been published online, a large international consortium of investigators analysed DNA from about 130,000 cases of depression and 330,000 healthy controls. They found 44 genes that were significantly associated with depression.79 At last, for the first time, in 2018, we are closing in on the genetic roots of melancholia.
What are these genes and what do they do? Many of them are genes that are known to be important for the nervous system, which is not surprising to those of us who expect mood states to be generated by the brain. More remarkably, many of them are also known to be important for the immune system. For example, the single gene most significantly associated with depression is one called olfactomedin 4. Until it emerged at the top of the risk list for depression, this gene was best known for its role in controlling the gut’s inflammatory response to dangerous bacteria.80 People who have inherited a mutation in olfactomedin 4 that makes their stomach wall more inflamed by bacterial infection might benefit from some survival advantage in terms of resistance to stomach ulceration; but they are also more likely to become depressed. This is a brand-new result, yet to be scrutinised scientifically in detail, but it is robustly based on a huge amount of data, and it is pretty much as predicted by the savannah survival story, which is maybe not so “just so” after all.
• • •
Scepticism is the first Cartesian principle of science and it keeps us honest. The history of medicine and psychiatry is littered with discredited treatments that got away with it for a while because of insufficient professional scepticism. But what is left to support a sceptical position about the links between inflammation and depression?
It is now clear beyond reasonable doubt that they are linked, and that they can be causally linked. We can draw an explanatory path from bodily inflammation, across the blood brain barrier, to inflamed brain cells and networks, which ultimately cause the mood and behavioural changes of depression. We know that bodily inflammation can come from social stress, which is a well-known risk for depression. We can imagine that this linkage between stress, inflammation and depression could have been advantageous to our ancestors in their fight against infection. And there is some evidence just emerging that genes controlling the inflammatory response to infection, and presumably first selected for that reason back on the savannah, are also risk genes for depression in the modern world.
Of course, you can, if you wish, suspend judgment on the grounds that the data are not yet compelling, there are still a lot of wrinkles to iron out, what it really needs is another experiment, etc. But the shrink in me would say: are you sure your reasonable reservations are not an unconscious defence of your Cartesian blind spot? As the great man so nearly said, the more progressive philosophy is surely immuno ergo sum.
The Inflamed Mind Page 15