Many, probably most, neuroscientists do not expect to find a single “belief center” in the brain, any more than they expect to find a single area responsible for love or intelligence. The cumulative evidence to date suggests that religious beliefs likely engage a distributed network of brain areas functioning as a committee. For example, a brain-imaging study led by the neuroscientist and author Sam Harris examined patterns of brain activation in response to religious and nonreligious statements, such as “Jesus was literally born of a virgin” and “Childbirth can be a painful experience.” The study revealed that the two types of statements produced different patterns of activity across a broad network of areas throughout the brain, and the patterns were similar regardless of whether the subjects were believers or nonbelievers.29
It is much too early to make any conclusive statements regarding the neural basis of supernatural and religious beliefs, but it seems clear that, as with most complex personality traits, there will not be a single “God center,” “God gene,” or “God neurotransmitter.” Furthermore, if there is a genetic basis to our supernatural beliefs, it is possible we are asking the wrong question altogether. It may be best not to ask whether supernatural beliefs were programmed into the human brain, but to assume that they are the brain’s default state and that recent evolutionary pressures opened the door for nonsupernatural, that is, natural, explanations for the questions that in the past eluded our grasp. As mentioned above, the studies of Jesse Bering and others suggest that children naturally assume the existence of a manifestation that outlasts the physical body. It is indeed hard to see how children, as well as early humans, could be anything but innate dualists in the face of vast ignorance about natural laws. Additionally, the observation that surgical removal of part of the brain increases spirituality suggests that supernatural beliefs may be the default state, and that we evolved mechanisms capable of suppressing these beliefs.
Where did life come from? Replying that a god, whether Zeus, Vishnu, or the Invisible Pink Unicorn, created life is more intuitively appealing than stating that life is the product of complex biochemical reactions sculpted by natural selection over billions of years. It seems downright logical that something as complex as life would require planning on someone’s part. The brain bug lies in the fact that, for some reason, when told that a god created life, we do not reflexively ask, “But wait a minute, who created God?” The brain seems to naturally accept that an agent is an acceptable explanation; nothing more needs to be said. This fallacy is almost associative in nature: the words create and make and their proxies carry with them implicit associations about agency and intention.
If dualism is our default state, perhaps we should not think of how supernatural beliefs evolved, but how we came to seek and accept natural and science-based answers to the mysteries of “life, the universe, and everything.” If other animals can be said to think at all, presumably their worldview more closely resembles our own supernatural beliefs; that is, to them most things are indeed indistinguishable from magic. What probably distinguishes the human brain from that of other animals is not our tendency to believe in the supernatural, but rather our ability not to believe in the supernatural. Perhaps the automatic system in the brain is the innate dualist, and through acquired knowledge and education the reflective system can learn to embrace materialist explanations for phenomena that intuitively seem to require supernatural explanations.
Regardless of the neural bases of supernatural and religious beliefs, we return to the fact that they hold immense sway over our lives. In my view, that is too much sway to be merely piggybacking on other faculties. I suspect that religious beliefs do benefit from a privileged and hardwired status, which translates into increased negotiating power with the more rationally inclined parts of the brain. Like most complex traits, this special neural status would not have emerged in a single step, but might have evolved through a multistep process:
First, millions of years ago, in the earliest days of the expansion of the hominin cortex, a proclivity to label questions as either tractable or intractable may have provided a means to prioritize the use of new computational resources. At this early stage the ability to compartmentalize thoughts into natural and supernatural categories would have proven adaptive to individuals: those who could distinguish between answerable and unanswerable questions were more likely to have applied their problem-solving abilities toward endeavors that increased reproductive success.
Second, as proposed by group selection theory, once genes that favored supernatural beliefs were in the pool, they may have been further shaped and selected for because ancestral religions provided a platform for a quantum leap in cooperation and altruism.
Third, within the past 10,000 years the genetically encoded traits from stages one and two were finally co-opted to usher in the transition from primitive belief systems to modern religions, which were well suited to organize and control the increasingly large populations that emerged after the advent of agriculture. The multifaceted nature of modern religions is an outcome of the complexity of the cognitive abilities they co-opted, including the primordial distinction between natural and supernatural phenomena, as well as cognitive abilities in place for reasons entirely unrelated to religion, as suggested by the by-product theory.
In 2009 a national debate erupted in Brazil over the case of a nine-year-old girl from a small town in the northeast who became pregnant with twins after being raped by her stepfather. Under medical advice—due to the potential risk to the life of a nine-year-old carrying twins to term—the mother decided her daughter should have an abortion (an illegal procedure in Brazil except in cases of rape or when the mother’s life is in danger; both conditions were met in this case). Upon learning of the case, the Archbishop of the city of Recife did everything in his power to prevent the procedure from being performed. Failing to do so, he invoked canon law (the rules and codes governing the Catholic Church) to administer the most severe punishment within his limited jurisdiction: he excommunicated the mother and the members of the medical team that performed the abortion. The stepfather, however, remained in good standing with the Catholic Church. In an interview, the Archbishop masterfully illustrated why blind faith can be a brain bug: “If a human law contradicts a law of God, in this case the law that allowed the procedure, this law has no value.”30 Many people subscribe to the notion that religion is the source of moral guidance, but when relieved of the burden of religious teachings, it would seem that the only rational conclusion in this case is that the more severe moral transgression was the stepfather’s, not the medical team’s.31
The paleontologist Stephen Jay Gould believed that science and religion represented two “nonoverlapping magisteria,” one having nothing to say about the other.32 Perhaps supernatural and natural categories of belief initially evolved precisely to convince Gould (and the rest of us) that this is the case. The built-in acceptance of two nonoverlapping magisteria exempted our ancestors from trying to understand a wide range of natural phenomena beyond their cognitive grasp and allowed them to focus their neocortical powers toward the more tractable problems required for survival. And given the large body of historical and contemporary data establishing faith’s veto power over reason and basic instincts alike, it seems probable that supernatural beliefs are not merely a by-product of other mental abilities. Rather, they may be programmed into our neural operating system, where their privileged status makes it difficult for us to recognize them as a brain bug.
9
Debugging
The eternal mystery of the world is its comprehensibility.
—Albert Einstein
In 1905 a recently minted Swiss citizen who worked in a patent office published four papers in the Annalen der Physik. The first solved a mystery related to the properties of light by suggesting that the energy of photons was packaged in discrete quantities, or quanta. The second paper proved on theoretical grounds that small specks of matter in water would exhibit observable random movement as a result of the motion of water molecules—this work confirmed that matter was made of atoms. The fourth paper established an equivalency between mass and energy, and is immortalized as E = mc². But it was the third paper that ventured into a realm so surreal and counterintuitive that it is difficult to comprehend how a computational device designed by evolution could have conjured it. The human brain evolved under pressure to provide its owners with reproductive advantages in a world in which macroscopic stuff mattered: things like stones, bananas, water, snakes, and other humans. The human brain was certainly not designed to grasp that time and space are not as they seem. And yet Einstein’s third paper of 1905 asserted that space and time were not absolute: not only would a clock slow down when it traveled at speeds approaching that of light, but it would also physically shrink.1
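For readers curious about the math, these two effects can be sketched with the standard textbook relations of special relativity, where v is the speed of the moving clock or object, c is the speed of light, Δτ is the interval between a clock’s ticks in its own frame, and L₀ is an object’s length at rest (the symbols are the conventional ones, introduced here only for illustration):

\[
\gamma = \frac{1}{\sqrt{1 - v^{2}/c^{2}}}, \qquad
\Delta t = \gamma \, \Delta\tau, \qquad
L = \frac{L_{0}}{\gamma}
\]

Because γ equals 1 at rest and grows without bound as v approaches c, an outside observer sees the moving clock’s ticks stretched to Δt (the slowdown) and the moving object’s length contracted to L (the shrinkage) along its direction of motion.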
If you’ve wondered whether the brain that laid the foundations for modern physics was somehow fundamentally different from the other brains on the planet, you would not be the only one. Einstein’s brain was rescued from cremation and preserved for future study; of course, it was made of the same computational units—neurons and synapses—as all other brains, and by most accounts did not stand out anatomically in any obvious way.2
Throughout history some brains have propelled science and technology into new realms, while others continue to embrace astrology, superstition, virgin births, psychic surgery, creationism, numerology, homeopathy, tarot readings, and many other fallacious beliefs that should have been extinguished long ago. The fact that the same computational device, the human brain, is the source of both wondrous genius and creativity on one hand, and foolishness and irrationality on the other, is not as paradoxical as it may seem. Like a great soccer player who is a mediocre gymnast, a genius is often constrained to a rather narrow domain. Although Einstein was a wise man by any measure, his insights into philosophy, biology, medicine, or the arts did not warrant great distinction, and even within his home turf of physics he was wrong on a number of important accounts. A determinist at heart, he believed that the behavior and position of a subatomic particle could be determined with certitude, but close to a century of quantum theory and experiments have taught us otherwise. Another great man of science, Isaac Newton, is known for his groundbreaking contributions to classical physics and mathematics, but by some accounts these endeavors were his hobby, and much of his intellectual energy was directed toward religion and alchemy.
We are all experts at applying logic and reason in one area of our lives while judiciously avoiding it in others. I know a number of scientists who are unequivocal Darwinists in the lab but full-hearted creationists on Sundays. We enforce rules with considerable leeway, so that what applies to others often does not seem to apply to ourselves. With little justification we treat some people with respect and kindness, but others with contempt or hatred. The perceived best solution to a problem not only varies from one individual to another, but can change from day to day for any one individual. Because our decisions are the result of a dynamic balance between different systems within the brain—each of which is noisy and subject to different emotional and cognitive biases—we simultaneously inhabit multiple locations along the irrational-rational continuum.
BRAIN BUG CONVERGENCE
A paradox of human culture is that many of the technological and biomedical breakthroughs that revolutionized how and how long we live have been vehemently opposed at their inception. This is true not only of those who may not understand the science behind each breakthrough, but also of scientists—a fact alluded to by the physicist Max Planck: “a new scientific truth does not triumph by convincing its opponents and making them see the light, but rather because its opponents eventually die, and a new generation grows up that is familiar with it.”3 Most of us are alive today only because we have benefited from the innumerable advances in public health and medicine over the last century, from vaccines and antibiotics to modern surgical techniques and cancer therapies. Yet most transformative biomedical advances have been met with significant resistance, from vaccines to organ transplants and in vitro fertilization, and today the same holds true for stem-cell research. A healthy suspicion of new technologies is warranted, but, as was illustrated by the sluggish response to Ignaz Semmelweis’s findings on the causes of puerperal fever, our hesitancy to embrace change far exceeds rational cautiousness.
Consider the popular belief throughout the first decade of the twenty-first century that autism was caused by vaccines. This particular hypothesis was triggered by a scientific paper, published in 1998, from which 10 of the 13 authors later retracted their contribution—and it was later determined that the data had been faked. Dozens of scientific papers the world over have since carefully looked for any connection between autism and vaccines, and concluded that there is none.4 Yet, because of this alleged link, vaccination rates in some countries went down, raising the risk of children dying from once-vanquished diseases. We do not know why certain notions are fairly immune to facts. But in the case of the autism-vaccine link it seems likely that a number of brain bugs are to blame. The concepts of both autism and vaccines are familiar ones—particularly to those with family members with the disorder—and are thus likely well represented within our neural circuits, which facilitates the formation of a robust association between the “autism” and “vaccine” nodes. We have seen that one characteristic of human memory is that there is no convenient way to delete information. Once established at the neural level, the association between “autism” and “vaccines” has unconscious staying power. But even if this link could be deleted, what would take its place as the cause of autism? That autism is a polygenic developmental disorder that may be dependent on environmental factors?5 Some fallacies persevere precisely because of their simplicity—they provide an easily identifiable target that resonates with the way the brain stores information. It’s easier to remember and understand a headline that suggests a link between autism and vaccines than one that suggests a link between autism and some not-yet-identified set of genes and environmental factors.
The autism-vaccine movement also likely endured because of a presumably innate fear of having foreign substances in our bodies. In addition to a physical barrier (the skin), we have evolved innumerable behaviors to decrease the chances that foreign substances will breach the body’s surface. Whether it is a needle or the “dead” viruses in a vaccine, we are leery of things entering our bodies. Indeed, antivaccination groups have been protesting the use of vaccines for over 200 years.6 Like the monkeys that are innately prepared to accept evidence that snakes are dangerous, we seem overly eager to embrace any evidence that unnatural things in our body are bad.
Our obstinate adherence to fallacious and irrational beliefs is but one domain in which our brain bugs converge with serious consequences. Another is in the political arena. Winston Churchill famously said that “democracy is the worst form of government except all the others that have been tried.”7 Democracy is predicated on the notion that we the people are capable of making reasonable judgments as to the competence, intelligence, and honesty of candidates, as well as whether their views resonate with our own. But as easy as checking a box is, picking who is truly the best leader is a tricky venture.
The building blocks of the brain ensure that it is highly adept at recognizing patterns, but poorly prepared for performing numerical calculations. Where would voting fall among the brain’s strengths and weaknesses? In many countries citizens must reach the age of eighteen to be granted both the right to vote and the right to drive a car. Which of these acts carries more responsibility? It would appear to be driving, since it is the one that requires a formal licensing process. Whether driving or voting carries more responsibility is not an objective question; one is an apple, the other a papaya. Yet perhaps we readily see the logic of requiring drivers, but not voters, to pass a test because it is easy to visualize the dangers of an incompetent driver. The election of an incompetent leader can, of course, lead to far more tragic consequences than automobile fatalities—ranging from wars to catastrophic governmental policies. For example, President Thabo Mbeki of South Africa maintained that AIDS is not caused by HIV and that it could be treated with natural folk cures; his views shaped an AIDS policy that is estimated to be responsible for hundreds of thousands of deaths.8 We elect inept leaders all the time; the question is whether we learn from our mistakes.
At the outset the democratic process is hindered by the apathy that results from the difficulty of grasping the consequences of one’s single vote cast in a sea of millions. But, additionally, consider what was referred to as delay blindness in Chapter 4: the fact that animals and humans have difficulty connecting the dots when there are long delays between our actions and their consequences. If every time we voted we magically found out the next day whether we had made the right or wrong choice, we would probably all be better voters. But if it takes years to discover that our elected representatives are utterly incompetent, the causal relationship between the fact that we elected them and the current state of the nation is blurry at best. The passage of time erases accountability and voids the standard feedback between trial and error that is so critical to learning. Our thirst for immediate gratification also stacks the cards against rational behavior in the voting booth. In the domain of politics our shortsightedness is expressed as an appetite for “immediate” rewards. This bias partly explains the eternal campaign promise of tax cuts, but short-term benefits in the form of tax cuts can come at the expense of long-term investments in education, research, technology, and infrastructure, the very things that lead to a healthy economy and a powerful nation. Our short-term mindset also feeds the expectation that the government should solve complex problems in a short timeframe, which in turn drives politicians to enact short-term solutions that can be disastrous in the long run.