by Adam Alter
There’s good evidence now that oxytocin’s effects are far from straightforward, but the hormone’s power is hard to question. In one classic study, researchers sprayed either a small dose of oxytocin or an inactive placebo into the noses of male university students in Zurich. Both sprays were odorless, and the only difference between them was the presence of the hormone in the oxytocin spray. After the students inhaled one of the sprays, they played an economic game that measured how strongly they trusted a series of strangers. According to the rules of the game, the students were given a small sum of money, which they could keep or give to a stranger whom they had not met before the experiment began. Any money handed over to the stranger was tripled, and the stranger had the opportunity to reward the original student by sharing some or all of that newly multiplied money. Handing over the money was risky, though, because roguish strangers might keep it all for themselves, so the students had to trust the stranger before parting with their money. The students who inhaled oxytocin were more trusting, transferring 17 percent more money to the strangers than did the students who inhaled the placebo spray. Merely inhaling a small quantity of oxytocin was enough to weaken the students’ natural suspicions, encouraging them to trust strangers they might otherwise have regarded warily.
If a small dose of inhaled oxytocin promotes trust in strangers, you can imagine how oxytocin affects new mothers when it floods their brains in larger, naturally occurring doses. Breast-feeding mothers experience such dramatic reductions in stress that they barely release cortisol—a hormone that normally responds quickly to stress—when they’re exposed to strong physical stressors. They also become calmer, more interactive, and less anxious than their usual selves, more willing to both protect and bond with their newborn babies.
Oxytocin and, by extension, Liquid Trust sound like panaceas designed to overcome the ills of a chronically distrustful modern world, but the hormone doesn’t always promote the same warm responses. Most of the early research on oxytocin considered how people respond to their newborns and lovers, so the hormone seemed to universally inspire love and affection. More recently, though, researchers have turned their attention to more distant social acquaintances, and the results are very different. Although oxytocin promotes positive responses toward in-group members—people of the same race, ethnicity, nationality, or religion—it produces weaker or even negative responses toward people who inhabit social out-groups. In a recent experiment, social psychologists found that Dutch students were quicker to link positive words with Dutch names and negative words with German or Arab names when they inhaled a small dose of oxytocin. In other experiments, the students were given a classic philosophical dilemma: Would they save five anonymous people who were stuck in a cave by detonating a bomb that would kill one person who was stuck in the cave’s entrance? In some cases, the person blocking the exit had a typical Dutch name, like Maarten, and in others he had a typical Arab name (Mohammed) or German name (Markus). The students who inhaled the placebo were equally likely to sacrifice the Maartens, Mohammeds, and Markuses, but those who inhaled oxytocin were less likely to sacrifice Maartens than they were to sacrifice Mohammeds or Markuses. Oxytocin led them to value the life of a fellow Dutchman above the life of an Arab or a German. Instead of promoting indiscriminate affection, oxytocin engendered warmth toward in-group members but not toward out-group members.
Loved ones are the ultimate in-group members, especially capable of fulfilling Maslow’s affiliation motive, but sometimes they aren’t around to deliver a timely dose of oxytocin. The good news is that romantic partners don’t need to be physically present to act as psychological painkillers. As the old film trope goes, soldiers prize nothing more than photos of their loved ones when they go to war, and recent research suggests that people are wise to gaze at these photos during difficult times.
In one experiment, a UCLA neuroscientist tested whether women could withstand pain more effectively when they were looking at photos of their long-term romantic partners. The experimenter applied a series of “thermal stimulations”—painfully warm probes—to the forearms of twenty-eight women who had been in romantic relationships for more than six months. During some of the probes, the women looked at photos of their romantic partners; during others they looked at photos of a male stranger who matched their partner’s ethnicity and attractiveness; during others still they stared at objects, like a chair, or at a small black shape on the computer screen. The probes were always a bit painful, but they were rated as 5 percent less painful when the women looked at photos of their partners. In fact, the photos dulled the pain slightly more effectively than did actually holding their partner’s hand, which suggests that imagined social support sometimes dulls painful experiences just as effectively as real, live social support.
Photos of loved ones are powerful painkillers because they activate two critical regions in the brain. The first region, known as the ventromedial prefrontal cortex (VMPFC), sits just above the eyes at the front of the brain. The VMPFC has attracted plenty of attention among neuroscientists recently, and their understanding of its function continues to grow. In the context of pain reduction, the VMPFC signals safety and the absence of risk—much like the hormone oxytocin—which to some extent overrides bodily experiences of pain. Although the physical experience at the probe site doesn’t differ, the VMPFC dampens the sensation of pain by metaphorically whispering that everything is going to be fine. Meanwhile, pictures of loved ones also activate reward centers in the brain, which distract us from otherwise painful experiences. Activated together, the VMPFC and these reward centers diminish visceral pain by inducing security, conveying the absence of risk, and producing a generalized sense of well-being.
Maslow was right to emphasize the importance of love, affection, and friendship, not just because they’re responsible for psychological well-being but also because they affect us on a deeper, biological level. Oxytocin engenders the sort of trust necessary to form social bonds between mothers and babies, and sometimes pushes wary adversaries to overcome an intractable impasse. Meanwhile, merely seeing or thinking about loved ones activates brain regions that dull the sting of physical pain. Having discussed the importance of love and affection, Maslow turned to the upper echelons of his model: the motivation to feel morally virtuous and to fulfill whatever defines our own personal potential.
The Top of Maslow’s Hierarchy
For children are innocent and love justice, while most of us are wicked and naturally prefer mercy.
— G. K. CHESTERTON
Apart from Albert Einstein and a few esteemed acquaintances and colleagues, Maslow had a hard time identifying people who were self-actualized. He believed that it takes a lifetime to experience the sort of self-acceptance and moral clarity that defines self-actualization, so he might have been surprised by the results published in a paper sixty years later. While Maslow focused on middle age and maturity, the researchers who published the paper recognized that the innocence of childhood is among our purest symbols of moral clarity.
In some of the experiments, one group of participants wrote about a pleasant childhood memory in as much detail as they could muster. Some of the memories focused on playing with friends or learning to ride a bike, and in each case the memories evoked warm images of childhood. The remaining participants also invoked memories of the past but focused instead on pleasant memories from high school. The researchers reasoned that memories of high school are no more or less pleasant than fond memories of childhood, but they don’t evoke the same sense of innocence that diminishes as we enter adolescence. Later in the experiment, the researchers asked the participants whether they wanted to donate a chunk of their pay for completing the experiment to a charity for Japanese earthquake survivors. Many of the participants were generous, but overall they donated far more to the charity when they had earlier recalled childhood memories—40 percent of their pay, rather than just 24 percent when they had recalled memories of high school.
In other studies, people who remembered their childhood were more willing to help the experimenter with a task after the experiment officially ended, and they also became more critical of immoral behavior in others—a sign that their own moral standards were elevated when they recalled the innocence of childhood.
The researchers also wanted to show that these differences were driven by thoughts of innocence and virtue, so they asked all the students to complete a series of word fragments. For example, they had to form the first words that came to mind when they saw the three fragments: P _ R _, M _ R _ _, and V _ R T _ _. Students who were focused on innocence, having been primed by childhood, should have been more likely to think of the words PURE, MORAL, and VIRTUE, rather than alternatives like PORE, MURKY, and VORTEX, and that’s exactly what the researchers found. Those who focused on childhood memories completed 65 percent of the fragments with innocence-related words, whereas those who focused on high school completed only 42 percent of the fragments with innocence-related words. In other studies, the researchers showed that these effects persist among people who think back on childhood as a difficult period in their lives. It wasn’t just the pleasantness of childhood, but rather the innocence, that gave them the sort of moral clarity that Maslow described when he imagined the state of self-actualization.
Childhood inspires moral clarity because it allows us to think back to a time before morality became complicated. As we mature, our moral decisions acquire the baggage of compromise and conflicting principles. Children know that stealing is wrong, so they won’t absolve a sick pauper who steals medicine for his wife—but the decision is far more complicated for an adult. Popular culture might suggest that you look inward to decide what’s truly right, and researchers have found that people are indeed more honest when they’re forced to stare at their own mirror images. In Oscar Wilde’s The Picture of Dorian Gray, Dorian is a handsome man who remains youthful as he commits increasingly immoral acts, while a portrait of him that sits in his attic magically grows more and more hideous, as though reflecting his progressively uglier soul. Like Dorian Gray’s portrait, the image that looks back at us from a mirror makes us introspective and self-aware, and when we behave badly, it seems to judge our moral bankruptcy.
In the mid-1970s, two social psychologists asked students at a large university to spend five minutes completing a brief test designed to measure the complexity of their thoughts. The students had to unscramble a series of anagrams, but there was no way they could finish the entire series within the allotted five-minute period. The researchers told the students that a bell would ring after five minutes, and that they shouldn’t continue working past the bell, since that would be cheating. Some of the students completed the test across from a large mirror while their own recorded voices played on a tape recorder, whereas others couldn’t see themselves while they worked on the anagrams and heard someone else’s voice on the tape recorder. Meanwhile, the experimenter looked through one-way glass and counted how many of the students continued to work past the five-minute bell. The results were staggering: only 7 percent of the students who saw themselves in the mirror cheated, whereas a massive 71 percent cheated when they weren’t forced to look at themselves as they decided whether to behave honestly. When people consider behaving badly, their mirror images become moral policemen.
It’s hard to know whether the students were more honest because they heard their own voices or because they saw themselves in the mirror, but other researchers have shown that people behave more virtuously after simply looking in the mirror. A group of social psychologists ran a series of studies in the late 1990s designed to show that people claim to behave more morally than they actually do. Their approach was simple and elegant. Students were told that they were going to assign themselves and a student partner, whom they hadn’t met, to two different tasks. One of the tasks was appealing because it entailed a possible reward, and the other was less appealing since there was no reward. Having learned of the two tasks, the students were asked whether they or their partner should be assigned to complete the appealing task. Obviously the students preferred to assign themselves to the appealing task and their partners to the unappealing task, but they also recognized that it would be fairer to toss a coin to decide who would undertake each task. The laws of probability state that if the students were using the coins fairly, roughly half of them should have been assigned to the positive task, and the other half should have been assigned to the negative task.
Though all of the students tossed the coin, the researchers found that 85 percent of the students assigned themselves to the positive task, suggesting that the coin was merely a prop that allowed them to defend the fairness of the desired outcome. Since they weren’t supervised, you can imagine how the students interpreted a negative outcome: if they lost the first toss, perhaps they decided that the outcome should rest on a best-of-three scenario.
The researchers tried the task again, this time placing the students in front of a large mirror. Forced to stare at their reflections as they tossed the coin, the students were perfectly fair, assigning their phantom partner to the positive task exactly 50 percent of the time. Incredibly, the students claimed they reached their decision fairly in both situations, but only the students who sat in front of a mirror actually obeyed the outcome of the coin toss.
The people we encounter every day vary along countless dimensions, and many of them enable us to fulfill the motives that Maslow identified in his hierarchy. Some are strangers, some are acquaintances, and some are so deeply entwined with our identity that it’s hard to imagine being the same person without them. Some are in-group members, sharing our race, ethnicity, religious affiliation, and linguistic background, and others occupy groups that differ from our own. When these people are present, either physically or merely because they’re occupying our minds in passing, we think, behave, and feel differently. The deeply stored associations we’ve formed between people and character traits linger. Running alongside these associations, the same encounters inspire a galaxy of biological responses. Some of those responses are helpful, but others place awkward hurdles on the path toward our goals and desires. These responses are miraculous, and they begin with hormones and brain processes that operate well below the surface of conscious awareness.
Each of the social interactions discussed in the past two chapters also exists within a larger, overarching cultural context. Cultures are groups that share a common set of views, values, goals, and practices—from huge global regions to sports teams and knitting circles—and each culture has its own idiosyncrasies. Some cultural contexts cradle us gently, providing social support and a sense of camaraderie, and others help us to understand the world by casting it through a culturally tinted lens. This cultural lens influences what we see so profoundly that it goes on to shape how we think about almost everything imaginable, from objects and people to abstract concepts like mathematics, honor, and art.
6.
CULTURE
Seeing Objects and Places Through a Cultural Lens
In the late 1800s, German psychiatrist Franz Müller-Lyer designed one of the world’s most famous visual illusions. The illusion became popular because it was easy to re-create and very difficult to shake. It began with a simple question: Which of the following two vertical lines is longer?
If you’re like almost everyone whom Müller-Lyer tested, Line B will appear longer than Line A. In fact, the two lines are identical in length, as this doctored version of the illusion shows:
For decades, vision researchers assumed that the illusion told us something fundamental about human vision. When they showed the illusion to people with normal vision, they were convinced that the line with the inward-pointing arrows would always seem longer than the line with the outward-pointing arrows. That assumption wasn’t really tested before the 1960s, because until then almost everyone who had seen the illusion was WEIRD—an acronym that cultural psychologists have coined for people from Western, Educated, Industrialized, Rich, and Democratic societies.
In the early 1960s, three researchers remedied that oversight when they showed the illusion to two thousand people from fifteen different cultural groups. The illusion deceived the first few groups. Adults living in Evanston, Illinois, perceived Line B to be on average 20 percent longer than Line A, while students at nearby Northwestern University and white adults in South Africa similarly believed that Line B was between 13 percent and 15 percent longer than Line A. Then the researchers journeyed farther afield, testing people from several African tribes. Bushmen from southern Africa failed to show the illusion at all, perceiving the lines as almost identical in length. Small samples of Suku tribespeople from northern Angola and Bete tribespeople from the Ivory Coast also failed to show the illusion, or saw Line B as only very slightly longer than Line A. Müller-Lyer’s eponymous illusion had deceived thousands of people from WEIRD societies for decades, but it wasn’t universal.
How was it that African Bushmen and tribespeople were immune to the illusion, when they shared the same visual and neural anatomy as the Westerners who couldn’t shake the sense that Line B was longer than Line A? In the absence of biological differences, the answer was, of course, cultural. In contrast to most Western societies, the Bushmen, Suku, and Bete lived in worlds with very few straight lines. Their houses, often made of thatch, were either rounded or devoid of the hard lines that dominate Western interiors, and they spent most of their time gazing at natural scenes of grassland, trees, and water that similarly lacked geometric angles.