by Thomas Dixon
Although the details of phrenology were all wrong, the basic idea that different mental functions were correlated with particular parts of the brain turned out to be a scientifically fruitful one. Studying patients who had suffered brain damage, through disease or injury, allowed scientists to start to make more accurate statements about localization. In the 1860s, the French physician Paul Broca discovered the area – still known as ‘Broca’s area’ – in the left frontal lobe of the brain that is responsible for speech production. The extraordinary case of Phineas Gage provided further insights. Gage was a railroad construction worker in the state of Vermont in the USA. In an accident in September 1848, Gage was injured by a metre-long iron tamping rod, which was driven by an exploding gunpowder charge through his cheek and out of the top of his head. Astonishingly, Gage survived with his faculties apparently intact. But it soon became clear that the damage to his frontal lobes had produced a profound change in his personality. He had lost the ability to empathize with others, and his social behaviour became unpredictable and erratic. Gage’s story is just one very memorable example among the thousands of cases through which an ever more detailed understanding of the functions of different brain areas has emerged.
18. Illustration showing the tamping iron that went through Phineas Gage’s head in 1848, and the route that it took through his skull
The invention in more recent times of brain scanning technologies has allowed this project to be pursued with greater precision, revealing the dynamic interactions of different parts of the brain, and offering insights into the working of intact brains as well as damaged ones. Neuroscientists can even use magnetic fields to stimulate parts of the brain experimentally and study the mental effects on their intrepid subjects. These techniques have all been applied specifically to religious experiences as well as to many other mental capacities. Buddhist monks and Roman Catholic nuns these days seem to be in constant danger of being asked by neuroscientists to insert themselves into an fMRI (functional Magnetic Resonance Imaging) scanner or to don a special rubber cap wired up with electrodes, all in the service of the neuroscientific study of spiritual experiences.
Some of these studies have suggested that there are particular parts of the brain that are especially involved in religious experiences. One candidate has been the temporal lobes, partly on the basis of the apparent susceptibility to religious experiences of sufferers of temporal lobe epilepsy. The Canadian neuroscientist Michael Persinger has taken this idea a step further by creating a device to stimulate that part of the brain in an attempt to induce religious experiences in experimental subjects. His ‘transcranial magnetic stimulator’, or ‘God helmet’ as others have called it, has been applied with disputed results, although many who participated in the experiments reported feelings of a numinous presence or of transcendent oneness. Other studies have identified different brain areas as being especially involved in meditative states. And some recent work suggests there is no single ‘God spot’. A study of Carmelite nuns carried out in 2006 by Mario Beauregard and Vincent Paquette, for example, found that several different brain areas were simultaneously involved in their spiritual experiences.
Dualism and physicalism
What are the implications of this scientific research for religion? One newspaper report of Beauregard and Paquette’s study ran under the headline: ‘Nuns Prove God Is Not Figment of the Mind’. The somewhat tortuous idea behind the headline seemed to be that if the whole brain is involved in religious experiences then that contradicts the theory that there is one special ‘God spot’, perhaps in the temporal lobes, and with it the associated belief that religious experiences are ‘nothing but’ the activation of that one brain area. Why it would be any less religiously or theologically troubling to find that spiritual feelings were produced by the activation of many parts of the brain, rather than just one, is not clear. This is a good example of the theological and philosophical ambiguity of empirical neuroscientific studies.
The success of neuroscience in showing that there are correlations between certain states of the brain and certain associated mental experiences, including religious ones, has been interpreted by some as a direct refutation of traditional beliefs about mystical experiences and the immortality of the soul. According to this sceptical stance, an experience can be caused by the brain or by an immaterial being (God or the soul) but not both: a neurological explanation of an experience rules out a supernatural or religious one. Science has explained away the supernatural.
That might seem a reasonable and simple enough assumption. However, there are plenty of philosophers, scientists, and theologians who would deny it. To offer neurological or, for that matter, evolutionary explanations of where our religious and moral beliefs come from is an interesting scientific enterprise. It flourishes today as one part of the ambitious programme of research known as ‘cognitive science’. But since absolutely all our beliefs – religious, scientific, or otherwise – are, on this hypothesis, the products of the same evolved neurological apparatus, drawing attention to that fact does not get us any further forward in the philosophical endeavour of distinguishing between the true ones and the false ones.
Another response to the perceived challenge of neuroscience to religious belief has been to adopt a form of ‘dualism’ – in other words, to assert that there exist two distinct kinds of substance, or properties, the mental and the physical, which interact with each other, especially in human beings. The dualist would interpret the close correlations discovered by neuroscientists as evidence not that the mind is nothing but brain activity, but rather that the mind interacts with the brain, or uses the brain as its instrument. René Descartes’s 17th-century version of this philosophy is the one that has received most scholarly attention, but there are plenty of modern successors to his view, both among philosophers and more widely. Key problems in making sense of dualism include the question of how the physical and the non-physical can causally interact with each other, and explaining why dualism is to be preferred to the apparently simpler alternative of physicalism, according to which mental properties are properties of the brain.
Even if all mental experience is, in some sense, physical, it is still not straightforward to articulate what that sense is. Why is it that particular bits of matter (exclusively, as far as we know, complex networks of nerve cells within the brains of living animals) exhibit the properties of consciousness and others (such as rocks, vegetables, or even computers) do not? Philosophers and theologians interested in this question have, in recent years, discussed concepts such as ‘emergence’, ‘supervenience’, and ‘nonreductive physicalism’, all of which try to articulate how mental realities can be both dependent on and yet autonomous from the physical. To say that the mind is ‘emergent’ or ‘supervenient’ is to suggest it is autonomous, not in the sense of being able to exist independently of the brain, but in the sense that it exhibits properties and regularities that are not susceptible to systematic reduction to the neurological level.
Bodily resurrection and subjective immortality
For most believers, I imagine, it would be a step too far beyond the teachings of their tradition to accept an entirely physicalist reinterpretation of ‘mind’ and ‘soul’. There are, however, resources in those traditions that might support such an approach. The Hebrew Bible offers a much more embodied idea of human personhood than that developed later under the influence of Greek philosophies, which tended to emphasize the duality of body and mind. In the Book of Genesis, in punishing Adam and Eve for their disobedience in the Garden of Eden, God says to Adam, ‘By the sweat of your brow you will eat your food until you return to the ground, since from it you were taken; for dust you are and to dust you will return.’ This same language is echoed by the preacher in the Book of Ecclesiastes: ‘For the fate of the sons of men and the fate of the beasts is the same; as one dies, so dies the other. They all have the same breath, and man has no advantage over the beasts; for all is vanity. All go to one place; all are from the dust, and all turn to dust again.’ St Paul’s writings in the New Testament also emphasize bodily resurrection more than the immortality of the soul, and the Apostles’, Nicene, and Athanasian Creeds affirm belief, respectively, in ‘the resurrection of the body and the life everlasting’, ‘the resurrection of the dead and the life everlasting’, and the view that at Christ’s second coming ‘all men shall rise again with their bodies and shall give account of their own works’.
To return to a more traditional belief in bodily resurrection rather than spiritual immortality is, in one way, an elegant religious solution to the problem of how to respond to advances in neuroscience. However, the effect is really to leap out of the dualistic frying pan into an apocalyptic fire. If modern science suggests that belief in an immortal soul is problematic, it might equally, to say the least, question the evidential basis for the notion that at some point in the future God will bring history to an end in a final eschatological act in which the universe will be destroyed and recreated and the dead will be brought back in bodily form to be judged by their maker. For those who prefer this one huge miracle to the problems raised by trying to find a place for an immaterial soul in the history of human evolution and in the activities of the brain, belief in physicalism and a bodily resurrection might nonetheless continue to seem the most acceptable option.
And for those who cannot believe in either a spiritual rebirth or a physical resurrection, there is perhaps some comfort in the idea of subjective immortality – the humanist notion that the selfish desire for heavenly rewards in a future life should be replaced by a more humble hope that one might live on after death through one’s friends, one’s children, or one’s work. This ancient idea was popular among secularists in the 19th century, and was expressed in the closing lines of George Eliot’s novel Middlemarch (1871–2). The narrator, speaking of the book’s heroine Dorothea, says:
the effect of her being on those around her was incalculably diffusive: for the growing good of the world is partly dependent on unhistoric acts; and that things are not so ill with you and me as they might have been, is half owing to the number who lived faithfully a hidden life, and rest in unvisited tombs.
But not everyone likes the idea. When asked if he hoped to achieve immortality through the impact of his films, Woody Allen replied: ‘I don’t want to achieve immortality through my work. I want to achieve it through not dying.’
Selfishness and altruism
As we have already seen, beliefs about the soul and the afterlife have always been closely connected with concerns about morality and social life in the here and now. That connection has sometimes been made very crudely and explicitly. A popular book of Divine and Moral Songs for Children composed in the 18th century by the Congregationalist clergyman Isaac Watts contained the following poem about the link between holy living and heavenly rewards, which would have been recited by many generations of British children:
There is beyond the sky
A heaven of joy and love;
And holy children, when they die,
Go to that world above.

There is a dreadful hell,
And everlasting pains:
There sinners must with devils dwell
In darkness, fire, and chains.

Can such a wretch as I
Escape this cursed end?
And may I hope, whene’er I die,
I shall to heaven ascend?

Then will I read and pray,
While I have life and breath,
Lest I should be cut off today,
And sent t’ eternal death.
When freethinking and anti-Christian works such as Thomas Paine’s Age of Reason (1794) started to become more widely available, one of the leading concerns of the faithful was that if people ceased to believe in heaven and hell, then they would feel free to indulge their most sensual passions and selfish appetites. Without religion, it was feared, human society would descend into animalistic anarchy. As one judge said when sentencing a London bookseller to imprisonment for selling Paine’s works, if these books were widely read and believed then the law would be deprived of ‘one of its principal sanctions – the dread of future punishments’.
Many today still echo the sentiments of this 18th-century judge and argue that religious beliefs are necessary to provide moral guidance and standards of virtuous conduct in an otherwise corrupt, materialistic, and degenerate world. Religions certainly do provide a framework within which people can learn the difference between right and wrong. An individual might consult the scriptures to discover that God has told his people to be truthful, faithful, and respectful towards their parents; and not to steal, nor commit adultery, nor worship false gods. Believers can also hope to receive moral guidance from the voice of God within, in the form of their conscience. If they follow the divine path faithfully, they will be deemed to be among the righteous rather than the wicked at the day of judgement. The unbeliever, in contrast, is supposed to be a sensuous, self-indulgent, selfish creature whose motto is ‘Let us eat and drink; for tomorrow we die.’
The alleged connection between unbelief and selfishness has been strengthened by a particular interpretation of evolution as a process driven by self-assertion and competition. Standard modern explanations of evolution have emphasized the fact that a trait or behaviour cannot evolve unless it is for the good of the individual organism. This would seem to rule out the possibility of altruism (except as a sort of enlightened self-interest). If evolution cannot produce genuine altruism, then perhaps the only explanation for the self-sacrifice displayed by saintly individuals is that they are inspired or empowered by God. Even the former director of the Human Genome Project, Francis Collins, in his book The Language of God (2006), suggests that the existence of the ‘moral law’ of love and altruism within every human heart cannot be explained by science alone.
This might be another occasion, however, where it would be wise for religious apologists to heed Henry Drummond’s warning about not placing God in supposed gaps in existing knowledge. For many, this particular alleged gap was filled some time ago. Darwin himself suggested that cooperative behaviour could arise through natural selection operating at the level of tribes or groups. A community made up of cooperative and self-sacrificing individuals would be expected to flourish at the expense of one made up of uncooperative and selfish ones. In his 1976 book The Selfish Gene, Richard Dawkins popularized an alternative evolutionary explanation of altruism – the theory of ‘kin selection’, which asserts that altruism could arise only when individuals were acting in the interests of family members. Since, according to this version of neo-Darwinism, natural selection operates at the level of the gene, we could only come to behave in an altruistic way when it was in the interest of our ‘selfish genes’. And that was only the case when we were helping to spread more copies of those genes by aiding close relatives (who share many of the same genes). A gene that inclined us to help non-relatives, on the other hand, would have no such evolutionary advantage since it would succeed only in spreading copies of unrelated, competing genes.
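The gene’s-eye logic behind kin selection can in fact be stated precisely. The formula is not given above, but the standard formalization – the rule formulated by W. D. Hamilton, on whose work the theory of kin selection rests – says that a gene predisposing its bearer to an altruistic act will tend to spread whenever

rb > c

where r is the coefficient of genetic relatedness between altruist and beneficiary, b is the reproductive benefit to the beneficiary, and c is the reproductive cost to the altruist. Helping a full sibling (r = 1/2) therefore pays, in evolutionary terms, whenever the benefit conferred is more than twice the cost incurred; helping a non-relative (r ≈ 0) never does – which is precisely the asymmetry described above.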
Of course, Dawkins did not intend to attribute any kind of actual intention – selfish or otherwise – to the genes themselves. His imaginative and metaphorical application of the term ‘selfish’ to strings of DNA molecules was, rather, designed to communicate a complex scientific theory to a wide readership. In that aim, Dawkins succeeded brilliantly. One unfortunate side-effect was the degree of confusion he thus introduced into debates about altruism. In a rhetorical flourish in The Selfish Gene, Dawkins wrote, ‘Let us try to teach generosity and altruism, because we are born selfish.’ However, the whole point of the theory of kin selection is that individuals can behave entirely altruistically (in the normal non-molecular sense of the word), but the reason they do so is that it helps to spread their genes. The real point of the book is that ‘selfish’ genes can make altruistic people. But Dawkins’s reference to the need for us to ‘rebel against the tyranny of the selfish replicators’ and to teach our children altruism rather obscured this point. In his more recent book, The God Delusion (2006), Dawkins has adopted a more coherent position, arguing that the tendency of humans to behave in a universally cooperative and altruistic way is indeed quite natural, and should be seen as a ‘blessed misfiring’ of a mechanism which evolved initially to benefit only close relatives.
The flurry of discussion precipitated by The Selfish Gene partially obscured the fact that there has been a long, alternative Darwinian tradition of writers appealing to nature as teacher of sympathy, altruism, and mutual aid, rather than of struggle and self-assertion. Although Darwin’s own work is more often remembered for the vivid picture it painted of struggle and conflict in nature, The Descent of Man (1871) also emphasized the more collaborative aspects of animal life, documenting self-sacrificing and cooperative behaviour among insects, birds, and apes, culminating in that pinnacle of evolved morality – the human conscience. Since Darwin’s day, many more examples have been added, including detailed studies of the complex systems of altruism and cooperation that operate among social insects, as well as the posting of altruistic sentinels by some species of bird and mammal, who risk their own lives to warn the rest of the group of imminent danger.