The Gap


by Thomas Suddendorf


  Even young children are swift to inform others about the norms they have learned. When two-year-old Timo learned the rule that feet are not to be put on the table, he quickly started berating me in moments of lazy reclining. Guests were also duly reprimanded—he would not rest until all feet were back on the floor. Children adopt norms, such as how a game should be played, even in the absence of any explicit instruction by adults. And they are keen to teach them to others. This tendency is part of the general desire we have encountered so frequently: to link our minds. It enhances the spread and standardization of norms, making us support those who follow the rules and impose costs on those who violate them. Virtue, honor, and decency are central to most people’s lives, and many invest heavily in the pursuit of nobility (or at least in the public perception thereof). In our groups, morals matter.

  Cooperation with strangers from another tribe is riskier, as the same group pressures need not apply. People may murder or steal from outsiders, even when these acts are forbidden within their own group. There are hundreds of studies showing how humans treat members of their own group differently from those of another group. Even when group membership is arbitrarily assigned ad hoc (e.g., according to T-shirt color), people instantly become more prosocial to their in-group and more antisocial to the out-group. Although these days most of us are members of many different groups (your village, your sports team, your political party, or your assigned group in a social psychology experiment), throughout much of prehistory we would have been primarily stuck with our immediate tribes. So rituals, ethnic signaling, and other indicators that groups share basic values and agree on a code of conduct were important in encouraging trust in interactions with other groups.

  A key factor facilitating the standardization of moral rules within and across groups in human history has been religion. In most societies, fundamental cooperative rules are absolute and unquestionable by virtue of being presented as divine commands. God, religions promise, will reward adherents and punish transgressors. In a sense this is the ultimate form of indirect reciprocity. Religion reduces the need for policing because believers are to some extent policing themselves through their conscience—to avoid divine, rather than secular, punishment. Of course, people can derive and follow a moral code without, or in spite of, these threats and promises. Nevertheless, the religious approach has proven immensely successful in keeping people in line (although exceptions spring to mind). Followers of the same religion can assume that they share a basic code of conduct. If you have the same God, there is no hiding, and you will be judged by the same rules.

  While helping and hurting are the most fundamental moral domains, norms frequently extend to questions of authority, loyalty, obedience, and purity, both bodily and spiritual. There is some debate about what qualifies as moral. A common distinction is made between moral and conventional norms. Morals are typically seen as prescriptive and universally enforceable because violations lead to harm, whereas conventions do not. For instance, there may be a norm about what clothes one wears to a particular occasion, but violating that norm does not harm anyone. Stealing the clothes from someone else, on the other hand, violates the owner’s rights and is therefore morally wrong. Even preschool children make this distinction quite readily. However, in some cultures the most apparently arbitrary conventions can be moral by virtue of a spiritual logic that links the act to harm. For example, in one study the anthropologist Richard Shweder and colleagues asked Hindu children in Bhubaneswar to rank a list of breaches of conventions in terms of their seriousness. According to the children, the most serious was: “The day after his father’s death, the eldest son had a haircut and ate chicken.” These acts were considered worse than incest between a brother and a sister or a husband beating his wife. Norm violations, such as eating the wrong food, may be thought to cause immense harm in the afterlife. Thus spiritual ideas can create powerful pressure for people to conscientiously conform to social norms, and religions have accordingly proved to be great catalysts in the rise of civilizations, enabling ever-larger numbers of people to conform and cooperate.

  Many guiding moral principles and norms advocate loyalty, trust, and caring—essentials for large-scale cooperation. One of the most famous principles is the Golden Rule: “Do to others as you would have them do to you” (or “Do not do to others what you do not want done to yourself”). This rule encapsulates the crucial relationship between empathy and reciprocity that is fundamental to human morality and cooperation. Versions of this rule can be found in the early writings of the civilizations of Babylon, China, Greece, India, Judea, and Persia. By spreading the same moral code across many tribes, people could increasingly work together in the building of civilizations. Moral communities expanded. Yet the flip side of in-group cooperation, as we have seen, can be antisocial behavior to the out-group. In fact, conflicts between followers of different religions have provoked some of the most abominable wars and persecutions in history.

  With the Enlightenment, European societies started to adopt a more civil, more rational, and more compassionate stance than was evident in the Middle Ages. Torture and cruel capital punishment, for instance, became increasingly objectionable, and these moral norms spread. Changing views about cruelty did not end intergroup conflict and warfare. However, the circle of sympathy generally expanded to become more inclusive. For some people this is still restricted to immediate relatives; for others it extends to a select group of members of a gang, religion, nation, or “race.” Darwin anticipated that civilization would eventually lead us to extend our sympathy to all humanity:

  As man advances in civilisation, and small tribes are united into larger communities, the simplest reason would tell each individual that he ought to extend his social instincts and sympathies to all the members of the same nation, though personally unknown to him. This point being once reached, there is only an artificial barrier to prevent his sympathies extending to the men of all nations and races. If, indeed, such men are separated from him by great differences in appearance or habits, experience unfortunately shows us how long it is, before we look at them as our fellow-creatures.

  Following the Holocaust, humans of all nations eventually sat down to agree on this. The United Nations adopted the Universal Declaration of Human Rights. The first article reads: “All human beings are born free and equal in dignity and rights. They are endowed with reason and conscience and should act towards one another in a spirit of brotherhood.” The declaration is an appeal to extend our morality to all humans, to stop slavery and abuse, to give everyone equal rights—in other words, to treat all humans like relatives. Over the course of history, human cooperation has grown on a progressively larger scale. We are finally putting group pressure on all humans to follow the same basic moral rules to prevent harm and encourage helping. In spite of recurring conflicts, cooperation and respect among humans of all cultures are now, for the first time in our history, a real possibility. The declaration is of course silent on animal rights, but we shall get to that later.

  DE WAAL’S THIRD LEVEL OF human morality is our capacity for self-reflective judgment and reasoning. We regulate our own behavior based on our moral assessments. We can reflect on why we do what we do and want what we want—and we can decide to change tack. We think about what “ought” to be the case. We can inform others about our views and judge them. We can try to establish an internally consistent framework, and reflect on others’ systems (even those proposed 2,500 years ago). From weekly religious sermons to Immanuel Kant’s categorical imperative, we ponder right and wrong, and how to derive the principles that distinguish between them. Moral reasoning is not just a pastime of priests and philosophers. We frequently argue with family, friends, and colleagues about our own dilemmas, and we debate the choices of others.

  Early research by Jean Piaget and later by Lawrence Kohlberg examined how children begin to defend their moral choices. Children were presented with a moral dilemma and then asked about the reasons for their judgment. Kohlberg found that young children focus on avoiding punishment, whereas older children, with more social experience, increasingly demonstrate an understanding that rules have to be followed for the greater good. Eventually some of the children justify their choices by reference to internally consistent theories about moral principles.

  Given that we differ in our moral reasoning, we may expect to find many stark differences in our moral judgments. However, recent studies suggest that some assessments about fairness, harm, and cooperation are almost universal. For instance, imagine a situation in which you are the driver of a trolley that is about to run over five people, and you can flick a switch to turn the trolley onto a side track, where it will instead kill only one person. It is generally regarded as morally correct to save the five at the cost of one. Yet most people agree that it is not permissible to save five people in need of organ transplants by killing one person with healthy versions of all those organs. We instantly know which is right and which is wrong, even if we cannot actually articulate the rule that underlies this judgment. In fact, judgments about moral responsibility are complicated, usually involving distinctions between intended and unintended outcomes and between actions and omissions.

  Research suggests that people’s moral intuitions often precede their explicit moral reasoning. We tend to have instant affective reactions to scenes of moral violations.7 The reliability of these responses has led some researchers to suggest that humans may possess a universal moral grammar that is partly innate (in a way that is analogous to Chomsky’s notion of a universal grammar of language). But it is also possible that this is culturally transmitted. In any case, it is clear that we can override our moral intuitions: although we might be inclined to seek information that confirms our intuitions, we can also go against our gut reaction and revise first impressions. When people decide to become vegetarians for ethical reasons, their emotional reactions may change as a result. We even rationally employ intuition as a tool. For example, in school I sometimes found myself unable to decide which of two essay questions to pursue in an exam. I flipped a coin, only to monitor my gut reaction as to the outcome. If it was one of relief, I’d go with the coin; otherwise I’d override the toss.

  Emotional responses to mental scenarios can be powerful and are essential to our conscience. We can experience shame and humiliation as a result of imagining how others see us, and we sometimes express this through blushing. Likewise, we can experience embarrassment or pride when imagining certain past and potential future events. Our current decisions can therefore be motivated by anticipated or “pre-experienced” emotional responses to hypothetical dilemmas, past misdemeanors, and foreseen events. For instance, anticipated regret stops us from pursuing many things we would thoroughly enjoy in the moment but expect to be embarrassed about after the fact. We generally do not go out to an expensive restaurant when we know we won’t be able to pay the bill.

  Our everyday actions are profoundly guided by our capacity for such self-reflective reasoning and planning. We can simulate both the affective and the practical consequences of our actions for our future selves (as well as for others and for the greater good). Yet recent research suggests that this reasoning is marred by certain biases. For example, we tend to systematically exaggerate our anticipated emotions. We tend to anticipate that we will feel happier achieving a goal than we actually will when we reach it. And when we fail, we tend to feel less unhappy than we anticipated. Dan Gilbert and colleagues suggest that part of the reason for these biases is that we anticipate the gist and ignore the details of future events. We might create a mental scenario simulating the pleasure of a vacation without imagining the nuisances of transportation and bad service. Another reason for such biases may be that exaggerating positive and negative outcomes helps us get motivated to choose future-directed actions in the first place. After all, the future is uncertain, and the present pressing. Consideration of the future and the moral consequences of one’s choices needs to compete with more immediate, and more certain, pleasures. If the fear of future failure and the anticipation of a future reward are somehow magnified, it may make it easier to pursue prudent future-directed actions.

  In general, for self-reflective moral reasoning to compete with more ancient, immediate urges, we needed to acquire a certain level of “executive functioning” (such as the executive power, discussed in Chapter 5, to decide which of several options to pursue). We need self-control: being able to inhibit one impulse in favor of another. For example, in reciprocal altruism we need to resist the temptation to cheat and secure short-term benefits because there is a greater future price to pay in the form of a prison sentence, fractured trust, or a diminished reputation.

  Children initially have great difficulty with such executive control. In a study known as the marshmallow test, the psychologist Walter Mischel examined under what circumstances young children became capable of controlling their impulses in simple situations. Children were given a choice between having a small reward (such as a marshmallow) immediately and waiting for a larger reward later. He found that by age four many children demonstrate some capacity to delay gratification. Whether children delayed depended on various factors, such as the nature of the reward and the length of the delay. Another important factor was whether the reward was present or not; delaying gratification is more difficult in the face of temptation. Merely thinking about the reward reduced the time children could delay. Looking at a picture of the reward enabled children to delay longer than when they had the real reward in front of them. Even imagining that a real reward was just a picture increased children’s capacity to delay. When we become aware of such effects, we can deploy strategies to increase self-control. Differences in children’s self-control predict outcomes decades later, including numerous measures of health, wealth, and success.

  Adults, as we saw in Chapter 5, can delay gratification for hours, years, and even lifetimes. Thus our self-reflective moral reasoning can gain control of our actions, desires, and thoughts.8 We can override biological urges—even the will to live and reproduce—with our moral convictions. We can create moral philosophies, pursue noble causes, and follow high ideals. We can make deliberate choices and may be said to have free will. The price we pay for these powers is that others hold us responsible for our freely willed actions.

  THOUGH THE WORD “PERSON” IN everyday language refers to any human being, in law and philosophy a “person” is usually a self-conscious entity that is able to choose its actions. Persons are recognized as having rights and duties. In this sense, an infant is not a person. If the entity cannot choose—for instance, because it does not have the executive control to inhibit certain actions—it cannot be held morally responsible. If your action is forced by someone else (e.g., you are pushed over a ledge), it is not an act of free will, and you are not personally responsible for the consequences that follow (e.g., for what you damage in your fall), as you would be if you had elected to jump. Similarly, if you cannot engage in self-reflective reasoning about your choices and their consequences, this lack of self-consciousness has critical implications for moral and legal responsibilities. If you were drugged, for example, your action might not be regarded as the product of your free will. However, if people think you had control and should have foreseen the bad consequences of an action, they tend to demand retribution or penance.

  Consider the inventor Thomas Midgley, who introduced the idea of adding lead to gasoline to stop engines from knocking. He later helped develop commercial chlorofluorocarbons (CFCs) for use in refrigeration. For many decades lead and CFCs were used in cars and refrigerators. Both turned out to be among the worst pollutants the world has ever seen. But is Midgley himself guilty of having caused more pollution than anyone else in the twentieth century? I do not know if he could have foreseen the consequences of his inventions. The same action and outcome can lead to quite different assessments of moral responsibility, depending on your judgments about the person’s foresight, control, and intentions.9

  Even preschoolers distinguish intentional from unintentional acts. Yet, as you might recall, mind reading can become rather difficult. For instance, as Donald Rumsfeld observed in 2002, there exist “unknown unknowns, the [things] we don’t know we don’t know.” Most people would grant that you cannot be morally responsible for unknown unknowns. But you may well be held responsible for things you know you do not know—you could have made a greater effort to find out.

  Establishing moral responsibility can be a complicated matter, as can be observed in virtually any court. Naturally there is great incentive for people not to be found guilty, so everyone tries to construe situations to their advantage, sometimes through deception and lies. What is worse, the accused may not only deceive others but also deceive themselves. Deception is common in nature, but humans can go one step further and self-deceive. For instance, we avoid unwelcome information. People generally stop searching for new information quickly when they like what they have found so far, but they search significantly longer when they do not like it. In a medical saliva test, if changing color is said to indicate illness, people finish the test quickly, but when it is said to indicate health, people wait much longer. Robert Trivers and my colleague Bill von Hippel reviewed many studies that show that people search for, attend to, and remember information in apparently self-deceptive ways. Recall, for instance, that people remember their own good behavior better than their bad behavior, but show no such bias when recalling the behavior of others. Thus it’s no surprise that perpetrators and victims tend to remember events in ways that are biased in favor of their situation. The perpetrator thinks he was well-intended, reasonable, and justified, whereas the victim recalls the perpetrator’s action as malicious, irrational, and unwarranted.

 
