Faith Versus Fact: Why Science and Religion Are Incompatible

by Jerry A. Coyne

  There are also moral instincts that appear nearly universal but can be revealed only by posing “moral hypotheticals”: situations that never really occur but evoke instant and apparently instinctive moral judgments. Perhaps the most famous is Judith Jarvis Thomson’s “trolley problem,” in which people are asked whether they’d throw a switch to divert a runaway trolley onto a sidetrack, killing one person walking on that track but saving five on the main track. Most people see throwing the switch as a moral act. In contrast, they strongly disapprove of an apparently similar act: throwing a nearby fat person onto the track to stop the trolley (you’re assumed to be too light to stop the trolley yourself). Yet both situations achieve the same end: sacrificing one life to save five. Surveys show that there are many situations like this in which people’s judgments concur regardless of their nationality, gender, ethnic group, or religion. All of this adds up to the idea that feelings of morality are widespread and that some moral judgments do seem innate, if by “innate” you mean “felt and exercised automatically.” But that says nothing about the cause of the innateness, much less that it’s God. For there are two naturalistic explanations as well: evolution and learning.

  Now, by “universal” I mean “nearly universal,” for there are clearly individuals who lack empathy, who cheat, or who feel no sense of shame. And, of course, large parts of some societies have engaged in wholesale immorality, such as Nazi Germany and the parts of the antebellum United States that practiced slavery. Westerners consider genital mutilation and honor killings immoral, while many in the Middle East have no problem with these but see as immoral Western traditions like educating women and allowing them to dress as they want. Further, much of the behavior that we see today as unquestionably immoral—the disenfranchisement of gays, women, and members of minority ethnic groups, the use of child labor, the torture of both humans and animals for amusement—was once an accepted part of Western society. In The Better Angels of Our Nature, Steven Pinker makes a strong case that since the Middle Ages most societies have become much less brutal, due largely to changes in what’s considered moral. So if morality is innate, it’s certainly malleable. And that itself refutes the argument that human morality comes from God, unless the moral sentiments of the deity are equally malleable.

  The rapid change in many aspects of morality, even in the last century, also suggests that much of its “innateness” comes not from evolution but from learning. That’s because evolutionary change simply doesn’t occur fast enough to explain societal changes like our realization that women are not an inferior moiety of humanity, or that we shouldn’t torture prisoners. The explanation for these changes must reside in reason and learning: our realization that there is no rational basis for giving ourselves moral privilege over those who belong to other groups.

  But some of our moral behaviors, if not sentiments, almost certainly evolved. Evidence for that comes from finding parallels between the behavior of our own species and that of our relatives. The primatologist Frans de Waal and his colleagues have described many such parallels between primates and humans. Chimps, for instance, have tried to rescue other chimps drowning in the moats around their enclosures, and capuchin monkeys seem to show notions of “fairness,” throwing a fit when they’re given a cucumber but observe a monkey in the next cage getting a more desirable grape. Indeed, even rats have been shown to have a rudimentary empathy, freeing other confined and distressed rats by unlocking their cages—without receiving a reward. Curiously, rats do not show this helping behavior toward unfamiliar strains; rats fostered by members of a different strain will even fail to help members of their own, now-unfamiliar, strain. This kind of in-group versus out-group discrimination is seen in humans and other primates.

  Still, these parallels do not show that we share the same genes for moral feelings with our relatives, genes that would presumably have been passed down from our common ancestor. In experimental tests, for instance, capuchin monkeys show more “prosocial” (i.e., helping) behaviors than do chimps, though we’re much more closely related to chimps than to capuchins. Orangutans are more closely related to humans than are capuchins, yet humans and capuchins, but not orangutans, show negative responses to unequal treatment. (In fact, even dogs and crows are more averse to inequities than are orangutans!) When you overlay animal “morality” on the known family tree of mammals, you find that behaviors that look “moral” have evolved independently in several lineages. This makes sense, for even closely related species can have very different social systems (bonobos and chimpanzees are one example), and different social systems select for different behaviors. Orangutans, for instance, are far more solitary than chimpanzees, and so would experience little natural selection for behaviors promoting group harmony.

  This evolutionary convergence of “premoral” behaviors in unrelated lineages makes it more likely that some moral behavior evolved independently in our ancestors, particularly because we spent most of our evolutionary history living in the kind of small bands that would select for that behavior. That same independent evolution also militates against the God hypothesis—unless you think that God also instilled morality in rats, monkeys, dogs, and crows.

  There is another way to determine how much of human morality, if any, is in our genes. If we have a hardwired and inborn morality, infants brought up without moral training will develop it automatically. Such experiments are, of course, unethical, but we can approximate them by observing the behavior of small infants who have had almost no moral instruction. And those infants show a modicum of empathy, but only toward familiar people, especially parents. They also show some rudiments of justice and fairness, but again directed mostly toward those in their in-group. To everyone else infants are selfish. The work of the child psychologist Paul Bloom and others has shown that infants are spiteful and do not even tolerate equality with strangers. They will, for instance, choose to receive one cookie while a nearby infant gets none, rather than the alternative in which both infants get two cookies. In other words, infants sacrifice their own well-being just to flaunt their superiority in acquiring goods. Bloom concludes that infants have limited innate empathy but little compassion and no altruism, traits that must be inculcated by parents and peers:

  There is no support for the view that a transcendent moral kindness is part of our nature. Now, I don’t doubt that many adults, in the here and now, are capable of agape. . . . When you bring together these observations about adults with the findings from babies and young children, the conclusion is clear: We have an enhanced morality but it is the product of culture, not biology. Indeed, there might be little difference in the moral life of a human baby and a chimpanzee; we are creatures of Charles Darwin, not C. S. Lewis.

  It’s no fluke that among the cultural universals listed by Pinker is socialization by elders.

  So what about altruism? This is a tricky subject, because “altruism” has both a common and a strict biological definition. The common definition is simply “helping someone without immediately expecting a reward.” That would include, for instance, giving money to charity, helping an old person cross the street, taking care of your children, or, in the most extreme cases, sacrificing your life for someone else: the save-a-drowning-child, volunteer-fireman, and falling-on-the-grenade scenarios. The question is whether any or all of these defy explanation by culture, biology, or both.

  The answer is no. While we’re not exactly sure about the mix of culture and biology that determines our “moral” feelings and actions, we at least have plausible nonreligious explanations for all forms of altruism, from the least onerous to the most sacrificial.

  For one thing, although some acts seem purely altruistic, they actually redound to the status of the person who does them. We benefit from having a reputation for generosity and by being public (and honest) about it. There’s a reason why most wealthy donors insist that the art galleries, museums, and other institutions they endow prominently bear their names.

  Further, some altruism is part of a tit-for-tat strategy in which you expect to be repaid someday. From mutual grooming in primates to helping a friend move, your act may seem altruistic at the time, but those relationships wouldn’t last long if you were always a taker but never a giver. In fact, one can show from evolutionary game theory that such “reciprocal altruism” can easily evolve, especially in small, stable groups in which individuals can recognize and remember one another. It’s no coincidence that that is precisely the scenario under which the vast bulk of human evolution took place.
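  The game-theoretic claim can be illustrated with a toy simulation (a sketch of my own, not anything from the book). In a repeated prisoner’s dilemma with the standard payoffs, a reciprocating “tit for tat” player earns far more against another reciprocator than two habitual defectors earn against each other, and loses only the opening round to an exploiter. That is the sense in which reciprocal altruism pays off when the same individuals meet again and again and can remember how they were treated.

```python
# Illustrative sketch only: an iterated prisoner's dilemma showing why
# reciprocal altruism ("tit for tat") can pay off when the same individuals
# interact repeatedly, as in a small, stable group.

# Standard payoffs: (my score, partner's score) for (my move, partner's move).
PAYOFF = {
    ("C", "C"): (3, 3),  # mutual cooperation
    ("C", "D"): (0, 5),  # I'm exploited
    ("D", "C"): (5, 0),  # I exploit
    ("D", "D"): (1, 1),  # mutual defection
}

def tit_for_tat(history):
    """Cooperate first, then copy the partner's previous move."""
    return "C" if not history else history[-1][1]

def always_defect(history):
    """Never cooperate."""
    return "D"

def play(strategy_a, strategy_b, rounds=200):
    """Total payoffs for two strategies that meet repeatedly."""
    history_a, history_b = [], []  # each entry: (own move, partner's move)
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strategy_a(history_a)
        move_b = strategy_b(history_b)
        pay_a, pay_b = PAYOFF[(move_a, move_b)]
        score_a += pay_a
        score_b += pay_b
        history_a.append((move_a, move_b))
        history_b.append((move_b, move_a))
    return score_a, score_b

if __name__ == "__main__":
    print("TFT vs TFT:      ", play(tit_for_tat, tit_for_tat))      # (600, 600)
    print("Defect vs Defect:", play(always_defect, always_defect))  # (200, 200)
    print("TFT vs Defect:   ", play(tit_for_tat, always_defect))    # TFT loses only round one
```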

  We have, then, both cultural and evolutionary explanations for these less extreme forms of sacrifice. Altruism might have evolved as a reciprocal phenomenon that was adaptive for individuals in small groups (explaining why babies show preference for people that they recognize), and it might also be a way of burnishing one’s reputation, explaining why we usually don’t hide our generosity under a bushel. In fact, many aspects of cooperation and altruism are precisely those we’d expect if their rudiments had evolved. Altruism toward others is reciprocated most often when many people know about it, but often isn’t when you can get away with free riding. Humans have sensitive antennae for detecting violations of reciprocity, they choose to cooperate with more generous individuals, and they cooperate more when it enhances their reputation. These are signs not of a pure, God-given altruism, but of a form of cooperation that would evolve in small bands of human ancestors. Three other “cultural universals” that support this idea are concern for one’s self-image (and the hope that it’s positive), feeling shame for having transgressed, and recognizing other individuals from their faces.

  And around that evolutionary nucleus could accrete, via culture alone, altruism that truly gets no reciprocation: giving to charities or the homeless anonymously, administering CPR to a fallen stranger, or simply waving someone ahead in traffic. As Peter Singer explains in his book The Expanding Circle, improvements in communication and transportation among populations have hijacked our “be nice to acquaintances” module, for our circle of acquaintances has widened, and we’ve learned that people are pretty much the same everywhere. (We’ve also learned that reciprocity goes beyond our band, as we now exchange goods and services with people across the globe.) And you can hardly demand that others treat you morally unless you do the same for them, for there’s no way you can convince those others that, with all else equal, you deserve to be treated better than they are. From evolutionary roots, then, can grow a tree of altruism fertilized by culture.

  Explaining altruism toward relatives is not a problem: since the 1960s evolutionists have understood how that works. Helping relatives is often not “true” altruism, for you stand to gain something from your sacrifice: the propagation of your genes. Behaviors that promote helping relatives can be favored by natural selection simply because they preserve copies of the genes that are carried in those relatives. If your expected genetic benefit—discounted by the degree of relatedness to those you’re saving—exceeds the genetic cost of your “altruism,” the behavior will evolve. In other words, evolution would promote your sacrificing your life with certainty if you could save more than two of your children, each of whom shares half your genes. And if your chance of dying (or losing future reproduction) were less than certain, then you might risk your life to save even a single child.
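  The condition described in this paragraph is conventionally written as Hamilton’s rule; here is a minimal worked version of the “more than two children” arithmetic, in the standard notation rather than the book’s:

```latex
% Hamilton's rule: a helping behavior is favored by selection when the benefit
% to relatives, weighted by relatedness, exceeds the cost to the actor.
%   r = relatedness to each recipient, B = benefit to recipients, C = cost to actor.
\[
  rB > C .
\]
% Worked example from the text: dying with certainty (C = 1, in units of your own
% future reproduction) to save n children, each related to you by r = 1/2:
\[
  \frac{n}{2} > 1 \quad\Rightarrow\quad n > 2 ,
\]
% so certain death pays off genetically only if more than two offspring are saved;
% a less-than-certain risk of dying lowers the cost and hence the threshold.
```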

  Such “kin selection” explains why parents care more about their own children than other people’s. It explains why we care more about our children and our siblings than about our aunts, uncles, and cousins (we share fewer genes with more distant relations). And it’s a good explanation for why Thomas Vander Woude would try to save his child. He didn’t know that he would die, but simply had the impulse to save his child—something that’s certainly built into us by natural selection. (Damon Linker argues that because Vander Woude’s child had Down syndrome, and probably wouldn’t reproduce, such altruism has no genetic payoff, but that’s a profound misunderstanding of how evolution works. The evolutionary cue for helping is seeing your child in danger, not a genetic calculus of whether that child would reproduce—something our ancestors would never have known.)

  The hardest cases for evolution are the ones I call true biological altruism. These are instances in which you make huge sacrifices, up to certain death (in evolutionary terms, you lose all future offspring), to help those who are unrelated. Such sacrifices reduce the expected number of offspring you’ll have. These acts include the soldier-on-the-grenade scenario, the jobs of volunteer firefighters, who don’t even get paid to risk their lives for others, and the altruism of people like Lenny Skutnik, who dived into icy waters in 1982 to save a drowning woman after a plane crashed in the Potomac River. Such scenarios cannot be explained by evolution alone, for behaviors that could reduce your reproductive output, and are not passed on through relatives you saved, would be pruned from the population. How can we explain these purely sacrificial acts?

  There are several ways. One is simply the “expanding circle”: a feeling of innate empathy we develop at the plight of strangers. In many cases, such as those of the firefighters and Skutnik, the altruist doesn’t know for sure he’ll be killed, for otherwise there’s no reason to attempt a rescue. In cases like the grenade scenario, everyone is going to be killed anyway, and the sacrificial act may simply have piggybacked on our evolved tendency to help those with whom we’re intimately familiar. It’s no accident that soldiers who put their lives on the line for their fellows often call them “brothers.”

  The nonadaptive hijacking of sentiments that have evolved for other reasons is not rare. Animals that have their own litters will often adopt members of another species. I’ve just seen a video of a mother farm cat suckling a brood of ducklings along with her own litter. It’s gone viral because it’s so adorable, but there’s a biological lesson here: when maternal hormones kick in, you might foster an animal that’s not even of your own species, much less your own brood. This happens because the “adoption” option simply isn’t common in nature, and natural selection has operated to promote the suckling of infants that happen to be nearby—which are almost invariably your own.

  Such hijacking also occurs in wild species. Cuckoo birds are “nest parasites” that lay their eggs in the nests of other species. The young cuckoos then proceed to kill the host’s own offspring, and the unrelated foster parents continue to feed the young cuckoos until they fledge. This grisly tactic is clearly adaptive for cuckoos: by never having to feed their own young, they get permanent babysitters and can have dozens of offspring, all raised by others. But it’s very maladaptive for the host birds, who gain no benefit from raising a member of another species, and indeed, lose all their own offspring. The host’s maternal instincts have simply been hijacked by cuckoos, and haven’t counterevolved to recognize their strange offspring. If this happened in humans—and it does, in the case of people who adopt unrelated children—it would be seen as a case of extreme biological altruism. Yet nobody has argued that the “altruism” of cuckoo hosts, or the phenomenon of cats suckling ducklings, is inexplicable by science and therefore constitutes evidence for God.

  In the end, there are ample secular explanations for altruism. While some of our moral sentiments surely derive from evolution in our ancestors, they are refined and expanded through culture: learning and communication. The genetic evidence comes from comparative work on other species, as well as studies of human infants. The cultural evidence, on the other hand, comes from seeing how many moral sentiments are learned, how variable they are across societies (even though readily instilled into infants adopted cross-culturally), and how much they have changed in just the past few centuries. In many ways human morality resembles human language: we’re born with the propensity to acquire both, but the specific moral views we adopt, like the specific language we learn to speak, depend on the culture in which we’re raised.

  So while morality may seem “innate,” at least in adults, that is easily explained as a result of genetic endowment modified by cultural indoctrination. Given that this “Moral Law,” as Francis Collins puts it, does not defy science and psychology, there is no need to invoke some divine tinkering with human behavior.

  But the God hypothesis for morality and altruism has its own problems. It fails, for example, to specify exactly which moral judgments were instilled in people by God and which, if any, might rest on secular reason. It doesn’t explain why slavery, torture, and disdain for women and strangers were considered proper behaviors not too long ago, but are now seen as immoral. For if morality is truly God-given, it should remain constant over time and space. In contrast, if morality reflects a malleable social veneer on an evolutionary base, it should change as society changes. And it has.

  The Argument for God from True Beliefs and Rationality

  A more sophisticated argument for natural theology involves our ability to hold beliefs that happen to be true, ranging from “I’d better stay away from that lion” to “Tomorrow morning the Sun will ‘rise.’” Some theologians argue that this ability can be understood only as a gift of God, and this argument has gained some traction in natural theology. To an evolutionary biologist, however, the “argument from true beliefs” seems so clearly wrong that one wonders why it’s so popular. Possibly one reason is that theologians, who make this argument most frequently, either don’t understand or don’t accept the ability of both culture and evolution to give us a propensity to detect the truth. But the argument has also been promulgated by a highly respected philosopher of religion, Alvin Plantinga, and seems impressive because it’s framed in arcane language, formal logic, and probability theory. But one need not know math or much evolution to see its problems.

 
