Behave: The Biology of Humans at Our Best and Worst

by Robert M. Sapolsky


  The neurobiology of kid empathy makes sense. As introduced in chapter 2, in adults the anterior cingulate cortex activates when they see someone hurt. Ditto for the amygdala and insula, especially in instances of intentional harm—there is anger and disgust. PFC regions including the (emotional) vmPFC are on board. Observing physical pain (e.g., a finger being poked with a needle) produces a concrete, vicarious pattern: there is activation of the periaqueductal gray (PAG), a region central to your own pain perception, in parts of the sensory cortex receiving sensation from your own fingers, and in motor neurons that command your own fingers to move.* You clench your fingers.

  Work by Jean Decety of the University of Chicago shows that when seven-year-olds watch someone in pain, activation is greatest in the more concrete regions—the PAG and the sensory and motor cortices—with PAG activity coupled to the minimal vmPFC activation there is. In older kids the vmPFC is coupled to increasingly activated limbic structures.13 And by adolescence the stronger vmPFC activation is coupled to ToM regions. What’s happening? Empathy is shifting from the concrete world of “Her finger must hurt, I’m suddenly conscious of my own finger” to ToM-ish focusing on the pokee’s emotions and experience.

  Young kids’ empathy doesn’t distinguish between intentional and unintentional harm or between harm to a person and to an object. Those distinctions emerge with age, around the time when the PAG part of empathic responses lessens and there is more engagement of the vmPFC and ToM regions; moreover, intentional harm now activates the amygdala and insula—anger and disgust at the perpetrator.* This is also when kids first distinguish between self- and other-inflicted pain.

  More sophistication—by around age seven, kids are expressing their empathy. By ages ten through twelve, empathy is more generalized and abstracted—empathy for “poor people,” rather than one individual (downside: this is also when kids first negatively stereotype categories of people).

  There are also hints of a sense of justice. Preschoolers tend to be egalitarians (e.g., it’s better that a friend also gets a cookie when she herself does). But before we get carried away with the generosity of youth, there is already in-group bias; if the other child is a stranger, there is less egalitarianism.14

  There is also a growing tendency of kids to respond when someone has been treated unfairly.15 But once again, before getting carried away with things, it comes with a bias. By ages four through six, kids in cultures from around the world respond negatively when they are the ones being shortchanged. It isn’t until ages eight through ten that kids respond negatively to someone else being treated unfairly. Moreover, there is considerable cross-cultural variability as to whether that later stage even emerges. The sense of justice in young kids is a very self-interested one.

  Soon after kids start responding negatively to someone else being treated unjustly, they begin attempting to rectify previous inequalities (“He should get more now because he got less before”).16 By preadolescence, egalitarianism gives way to acceptance of inequality because of merit or effort or for a greater good (“She should play more than him; she’s better/worked harder/is more important to the team”). Some kids even manage self-sacrifice for the greater good (“She should play more than me; she’s better”).* By adolescence, boys tend to accept inequality more than girls do, on utilitarian grounds. And both sexes are acquiescing to inequality as social convention—“Nothing can be done; that’s the way it is.”

  Moral Development

  With ToM, perspective taking, nuanced empathy, and a sense of justice in place, a child can start wrestling with telling right from wrong.

  Piaget emphasized how much kids’ play is about working out rules of appropriate behavior (rules that can differ from those of adults)* and how this involves stages of increasing complexity. This inspired a younger psychologist to investigate the topic more rigorously, with enormously influential consequences.

  In the 1950s Lawrence Kohlberg, then a graduate student at the University of Chicago and later a professor at Harvard, began formulating his monumental stages of moral development.17

  Kids would be presented with moral conundrums. For example: The only dose of the only drug that will save a poor woman from dying is prohibitively expensive. Should she steal it? Why?

  Kohlberg concluded that moral judgment is a cognitive process, built around increasingly complex reasoning as kids mature. He proposed his famed three levels of moral development, each with two stages.

  You’ve been told not to eat the tempting cookie in front of you. Should you eat it? Here are the painfully simplified stages of reasoning that go into the decision:

  Level 1: Should I Eat the Cookie? Preconventional Reasoning

  Stage 1. It depends. How likely am I to get punished? Being punished is unpleasant. Aggression typically peaks around ages two through four, after which kids are reined in by adults’ punishment (“Go sit in the corner”) and peers (i.e., being ostracized).

  Stage 2. It depends. If I refrain, will I get rewarded? Being rewarded is nice.

  Both stages are ego-oriented—obedience and self-interest (what’s in it for me?). Kohlberg found that children are typically at this level up to around ages eight through ten.

  Concern arises when aggression, particularly if callous and remorseless, doesn’t wane around these ages—this predicts an increased risk of adult sociopathy (aka antisocial personality).* Crucially, the behavior of future sociopaths seems impervious to negative feedback. As noted, high pain thresholds in sociopaths help explain their lack of empathy—it’s hard to feel someone else’s pain when you can’t feel your own. It also helps explain the imperviousness to negative feedback—why change your behavior if punishment doesn’t register?

  It is also around this stage that kids first reconcile after conflicts and derive comfort from reconciliation (e.g., decreasing glucocorticoid secretion and anxiety). Those benefits certainly suggest self-interest motivating reconciliation. This is shown in another, realpolitik way—kids reconcile more readily when the relationship matters to them.

  Level 2: Should I Eat the Cookie? Conventional Reasoning

  Stage 3. It depends. Who will be deprived if I do? Do I like them? What would other people do? What will people think of me for eating the cookie? It’s nice to think of others; it’s good to be well regarded.

  Stage 4. It depends. What’s the law? Are laws sacrosanct? What if everyone broke this law? It’s nice to have order. This is the judge who, considering predatory but legal lending practices by a bank, thinks, “I feel sorry for these victims . . . but I’m here to decide whether the bank broke a law . . . and it didn’t.”

  Conventional moral reasoning is relational (about your interactions with others and their consequences); most adolescents and adults are at this level.

  Level 3: Should I Eat the Cookie? Postconventional Reasoning

  Stage 5. It depends. What circumstances placed the cookie there? Who decided that I shouldn’t take it? Would I save a life by taking the cookie? It’s nice when clear rules are applied flexibly. Now the judge would think: “Yes, the bank’s actions were legal, but ultimately laws exist to protect the weak from the mighty, so signed contract or otherwise, that bank must be stopped.”

  Stage 6. It depends. Is my moral stance regarding this more vital than some law, a stance for which I’d pay the ultimate price if need be? It’s nice to know there are things for which I’d repeatedly sing, “We Shall Not Be Moved.”

  This level is egoistic in that rules and their application come from within and reflect conscience, where a transgression exacts the ultimate cost—having to live with yourself afterward. It recognizes that being good and being law-abiding aren’t synonymous. As Woody Guthrie wrote in “Pretty Boy Floyd,” “I love a good man outside the law, just as much as I hate a bad man inside the law.”*

  Stage 6 is also egotistical, implicitly built on self-righteousness that trumps conventional petty bourgeois rule makers and bean counters, The Man, those sheep who just follow, etc. To quote Emerson, as is often done when considering the postconventional stage, “Every heroic act measures itself by its contempt of some external good.” Stage 6 reasoning can inspire. But it can also be insufferable, premised on “being good” and “being law-abiding” as opposites. “To live outside the law, you must be honest,” wrote Bob Dylan.

  Kohlbergians found hardly anyone consistently at stage 5 or stage 6.

  —

  Kohlberg basically invented the scientific study of moral development in children. His stage model is so canonical that people in the business dis someone by suggesting they’re stuck in the primordial soup of a primitive Kohlberg stage. As we’ll see in chapter 12, there is even evidence that conservatives and liberals reason at different Kohlberg stages.

  Naturally, Kohlberg’s work has problems.

  The usual: Don’t take any stage model too seriously—there are exceptions, maturational transitions are not clean cut, and someone’s stage can be context dependent.

  The problem of tunnel vision and wrong emphases: Kohlberg initially studied the usual unrepresentative humans, namely Americans, and as we will see in later chapters, moral judgments differ cross-culturally. Moreover, subjects were male, something challenged in the 1980s by Carol Gilligan of NYU. The two agreed on the general sequence of stages. However, Gilligan and others showed that in making moral judgments, girls and women generally value care over justice, in contrast to boys and men. As a result, females tilt toward conventional thinking and its emphasis on relationships, while males tilt toward postconventional abstractions.18

  The cognitive emphasis: Are moral judgments more the outcome of reasoning or of intuition and emotion? Kohlbergians favor the former. But as will be seen in chapter 13, plenty of organisms with limited cognitive skills, including young kids and nonhuman primates, display rudimentary senses of fairness and justice. Such findings anchor “social intuitionist” views of moral decision making, associated with psychologists Martin Hoffman and Jonathan Haidt, both of NYU.19 Naturally, the question becomes how moral reasoning and moral intuitionism interact. As we’ll see, (a) rather than being solely about emotion, moral intuition is a different style of cognition from conscious reasoning; and (b) conversely, moral reasoning is often flagrantly illogical. Stay tuned.

  The lack of predictability: Does any of this actually predict who does the harder thing when it’s the right thing to do? Are gold medalists at Kohlbergian reasoning the ones willing to pay the price for whistle-blowing, subduing the shooter, sheltering refugees? Heck, forget the heroics; are they even more likely to be honest in dinky psych experiments? In other words, does moral reasoning predict moral action? Rarely; as we will see in chapter 13, moral heroism rarely arises from super-duper frontal cortical willpower. Instead, it happens when the right thing isn’t the harder thing.

  Marshmallows

  The frontal cortex, with its increasing connectivity to the rest of the brain, anchors the neurobiology of kids’ growing sophistication, most importantly in their capacity to regulate emotions and behavior. The most iconic demonstration of this revolves around an unlikely object—the marshmallow.20

  In the 1960s Stanford psychologist Walter Mischel developed the “marshmallow test” to study gratification postponement. A child is presented with a marshmallow. The experimenter says, “I’m going out of the room for a while. You can eat the marshmallow after I leave. But if you wait and don’t eat it until I get back, I’ll give you another marshmallow,” and leaves. And the child, observed through a two-way mirror, begins the lonely challenge of holding out for fifteen minutes until the researcher returns.

  Studying hundreds of three- to six-year-olds, Mischel saw enormous variability—a few ate the marshmallow before the experimenter left the room. About a third lasted the fifteen minutes. The rest were scattered in between, averaging a delay of eleven minutes. Kids’ strategies for resisting the marshmallow’s siren call differed, as can be seen on contemporary versions of the test on YouTube. Some kids cover their eyes, hide the marshmallow, sing to distract themselves. Others grimace, sit on their hands. Others sniff the marshmallow, pinch off an infinitely tiny piece to eat, hold it reverentially, kiss it, pet it.

  Various factors modulated kids’ fortitude (shown in later studies described in Mischel’s book where, for some reason, it was pretzels instead of marshmallows). Trusting the system mattered—if experimenters had previously reneged on promises, kids wouldn’t wait as long. Prompting kids to think about how crunchy and yummy pretzels are (what Mischel calls “hot ideation”) nuked self-discipline; prompts to think about a “cold ideation” (e.g., the shape of pretzels) or an alternative hot ideation (e.g., ice cream) bolstered resistance.

  As expected, older kids hold out longer, using more effective strategies. Younger kids describe strategies like “I kept thinking about how good that second marshmallow would taste.” The problem, of course, is that this strategy is about two synapses away from thinking about the marshmallow in front of you. In contrast, older kids use strategies of distraction—thinking about toys, pets, their birthday. This progresses to reappraisal strategies (“This isn’t about marshmallows. This is about the kind of person I am”). To Mischel, maturation of willpower is more about distraction and reappraisal strategies than about stoicism.

  So kids improve at delayed gratification. Mischel’s next step made his studies iconic—he tracked the kids afterward, seeing if marshmallow wait time predicted anything about their adulthoods.

  Did it ever. Five-year-old champs at marshmallow patience averaged higher SAT scores in high school (compared with those who couldn’t wait), with more social success and resilience and less aggressive* and oppositional behavior. Forty years postmarshmallow, they excelled at frontal function, had more PFC activation during a frontal task, and had lower BMIs.21 A gazillion-dollar brain scanner doesn’t hold more predictive power than one marshmallow. Every anxious middle-class parent obsesses over these findings and has made marshmallows into fetish items.

  CONSEQUENCES

  We’ve now gotten a sense of various domains of behavioral development. Time to frame things with this book’s central question. Our adult has carried out that wonderful or crummy or ambiguous behavior. What childhood events contributed to that occurring?

  A first challenge is to truly incorporate biology into our thinking. A child suffers malnutrition and, as an adult, has poor cognitive skills. That’s easy to frame biologically—malnutrition impairs brain development. Alternatively, a child is raised by cold, inexpressive parents and, as an adult, feels unlovable. It’s harder to link those two biologically, to resist thinking that somehow this is a less biological phenomenon than the malnutrition/cognition link. There may be less known about the biological changes explaining the link between the cold parents and the adult with poor self-esteem than about the malnutrition/cognition one. It may be less convenient to articulate the former biologically than the latter. It may be harder to apply a proximal biological therapy for the former than for the latter (e.g., an imaginary neural growth factor drug that improves self-esteem versus cognition). But biology mediates both links. A cloud may be less tangible than a brick, but it’s constructed with the same rules about how atoms interact.

  How does biology link childhood with the behaviors of adulthood? Chapter 5’s neural plasticity writ large and early. The developing brain epitomizes neural plasticity, and every hiccup of experience has an effect, albeit usually a minuscule one, on that brain.

  We now examine ways in which different types of childhoods produce different sorts of adults.

  LET’S START AT THE VERY BEGINNING: THE IMPORTANCE OF MOTHERS

  Nothing like a section heading stating the obvious. Everybody needs a mother. Even rodents; separate rat pups from Mom a few hours daily and, as adults, they have elevated glucocorticoid levels and poor cognitive skills, are anxious, and, if male, are more aggressive.22 Mothers are crucial. Except that well into the twentieth century, most experts didn’t think so. The West developed child-rearing techniques where, when compared with traditional cultures, children had less physical contact with their mothers, slept alone at earlier ages, and had longer latencies to be picked up when crying. Around 1900 the leading expert Luther Holt of Columbia University warned against the “vicious practice” of picking up a crying child or handling her too often. This was the world of children of the wealthy, raised by nannies and presented to their parents before bedtime to be briefly seen but not heard.

  This period brought one of history’s strangest one-night stands, namely when the Freudians and the behaviorists hooked up to explain why infants become attached to their mothers. To behaviorists, obviously, it’s because mothers reinforce them, providing calories when they’re hungry. For Freudians, also obviously, infants lack the “ego development” to form a relationship with anything/anyone other than Mom’s breasts. When combined with children-should-be-seen-but-not-heard-ism, this suggested that once you’ve addressed a child’s need for nutrition, proper temperature, plus other odds and ends, they’re set to go. Affection, warmth, physical contact? Superfluous.

  Such thinking produced at least one disaster. When a child was hospitalized for a stretch, dogma was that the mother was unnecessary—she just added emotional tumult, and everything essential was supplied by the staff. Typically, mothers could visit their children once a week for a few minutes. And when kids were hospitalized for extended periods, they wasted away with “hospitalism,” dying in droves from nonspecific infections and gastrointestinal maladies unrelated to their original illness.23 This was an era when the germ theory had mutated into the belief that hospitalized children do best when untouched, in antiseptic isolation. Remarkably, hospitalism soared in hospitals with newfangled incubators (adapted from poultry farming); the safest hospitals were poor ones that relied on the primitive act of humans actually touching and interacting with infants.

 
