You're Not Listening

by Kate Murphy


  When did you last sing to yourself? To someone else?

  If you were able to live to the age of ninety and retain either the mind or body of a thirty-year-old for the last sixty years of your life, which would you want?

  After this reciprocal listening exercise, the paired strangers reported intense feelings of closeness, much more so than subjects paired for the same amount of time to solve a problem or accomplish a task. Indeed, two pairs of the subjects in the experiment later got married. The research received little notice when it was published more than twenty years ago, but it got an enormous amount of attention when it resurfaced in a 2015 essay in The New York Times headlined “To Fall in Love With Anyone, Do This.” Aron’s questions, subsequently renamed “The 36 Questions That Lead to Love,” have become an internet meme, as people continue to use them to spark new romantic relationships and reignite existing ones.

  Good listeners are good questioners. Inquiry reinforces listening and vice versa because you have to listen to ask an appropriate and relevant question, and then, as a consequence of posing the question, you are invested in listening to the answer. Moreover, asking genuinely curious and openhearted questions makes for more meaningful and revelatory conversations—not to mention prevents misunderstandings. This, in turn, makes narratives more interesting, engaging, and even sympathetic, which is the basis for forming sincere and secure relationships.

  You can’t have meaningful exchanges with people, much less establish relationships, if you aren’t willing to listen to people’s stories, whether it’s where they come from, what their dreams are, what led them to do the work they do, or how they came to fear polka dots. What is love but listening to and wanting to be a part of another person’s evolving story? It’s true of all relationships—romantic and platonic. And listening to a stranger is possibly one of the kindest, most generous things you can do.

  People who make an effort to listen—and respond in ways that support rather than shift the conversation—end up collecting stories the way other people might collect stamps, shells, or coins. The result is they tend to have something interesting to contribute to almost any discussion. The best raconteurs and most interesting conversationalists I have ever met are the most agile questioners and attentive listeners. The exceptional listeners highlighted in this book, named and unnamed, kept me enthralled with their stories. It’s in part because they’ve collected so much material, but also because they seem to have consciously or subconsciously learned the tones, inflections, cadences, pauses, and turns of phrase that rivet your attention.

  Many celebrated writers, including Tom Wolfe, John McPhee, and Richard Price, have said that listening is the generative soul of their work. Pulitzer Prize–winning author Elizabeth Strout told an interviewer, “I have listened all my life. I just listen and listen and listen.” One of her characters, Jim Burgess, in her novel The Burgess Boys, says, “People are always telling you who they are.” Strout said she loved giving him that line because people really do tell you who they are, often in spite of themselves. “If you listen carefully, you can really get an awful lot of information about other people,” she said. “I think most people just aren’t listening that much.”

  The stories we collect in life define us and are the scaffolding of our realities. Families, friends, and coworkers have stories that bind them together. Rivals and enemies have narratives that keep them apart. All around us are people’s legends and anecdotes, myths and stark realities, deprecations and aggrandizements. Listening helps us sort fact from fiction and deepens our understanding of the complex situations and personalities we encounter in life. It’s how we gain entrée, gather intelligence, and make connections, regardless of the social circles in which we find ourselves.

  13

  Hammers, Anvils, and Stirrups

  Turning Sound Waves into Brain Waves

  The passenger pickup zone at Houston’s George Bush Intercontinental Airport was pandemonium. Cops bellowed and blew whistles, diverting traffic around construction. Workers in hard hats and orange vests jackhammered concrete. A belching backhoe sent heaps of ashy rubble crashing into the bed of a rumbling dump truck. Shuttle buses idled and hissed. Cars honked. Drivers rolled down their windows and yelled expletives.

  I saw my father exit the terminal about one hundred yards from where I was stuck in a line of cars. He was dragging a roller bag that roused a flock of pigeons pecking on the pavement. I stood on the running board of my car and called out, “Dad!” My voice was lost in the surrounding din. And yet, my father snapped his head in my direction. He waved and strode determinedly to the car. “You can always hear your puppies,” he said.

  Certainly, there are animals that have better hearing than humans. A dog, for example, could hear its puppy yelp from a much greater distance than my dad could hear me, his grown daughter. Elephants’ hearing is so sensitive they can hear approaching clouds. But humans are particularly adept at discriminating between and categorizing sounds, and—perhaps most important—we imbue what we hear with meaning.

  When my dad exited the airport, he plunged into a roiling sea of sound waves, undulating at various frequencies and amplitudes. But it was the unique sonic properties of my voice that got his attention. My voice triggered a cascade of physical, emotional, and cognitive reactions that made him take notice and respond. It’s easy to take for granted our ability to perceive and process auditory information in this way. We do it all day, every day. Nevertheless, it’s a feat that’s astounding in its specificity and complexity.

  There’s been extensive research over the years on where in the brain we make sense of auditory information. The processes that underlie the recognition and interpretation of sound have been studied in a variety of species (monkeys, mice, rabbits, harpy eagles, sea lions, dogs, etc.), and you can read hundreds of papers about everything from the neural pathways auditory signals take to how your genes respond depending on the input. And yet, there’s still little understanding of just how we listen and connect with one another during a conversation. Processing what someone says, it turns out, is one of the most intricate and involved things we ask our brains to do.

  What we do know is that each side of your brain has an auditory cortex, conveniently located near your ears. If it is injured or removed, you will have no awareness of sound, although you might have some reflexive reaction to it. You’ll flinch at a clap of thunder, but you won’t know why. Critical to the comprehension of speech is Wernicke’s area, located in the brain’s left hemisphere. It’s named for the German neurologist Carl Wernicke, who, in 1874, published his discovery that stroke patients with lesions in that area could still hear and speak but were unable to comprehend what was said to them. It’s unknown exactly how many other areas of the brain are recruited in speech comprehension or how much variability there is between humans, though it’s reasonable to suspect a fantastic listener who is picking up every nuance in a conversation is firing off more neurons in more parts of the brain than a bad listener.

  But it’s not only words our brains are processing when we listen to people. It’s also pitch, loudness, and tone as well as the flow of tone, called prosody. In fact, human beings can reliably interpret the emotional aspect of a message even when the words are completely obscured. Think of the various ways a person can say, “Sure.” There’s the peppy, higher-pitched “Sure!” said when someone is eager to help with a request. There’s the tentative and lower-pitched “Suuuure” that stretches out for a couple of beats when someone is somewhat ambivalent or reluctant to help with the request. And then there’s the clipped, level-pitched “Sure” that precedes a “but” when someone is probably going to argue about how to help with the request, or is not going to help at all.

  Researchers are just now discovering that specialized clusters of neurons in the brain are responsible for detecting those slight changes in pitch and tone. The more practiced a listener you are, the better these neurons get at perceiving the kinds of sonic variations that carry the emotional content, and much of the meaning, of what people say. For example, musicians, whose art depends on detecting differences in pitch and tone, more readily pick up on vocal expressions of emotion than nonmusicians, lending some truth to the notion that musicians tend to be more sensitive souls. Perhaps not surprisingly, musicians unfamiliar with Mandarin Chinese also tend to be better than nonmusicians at discerning the language’s subtle tonal differences, which can change the entire meaning of a word in addition to signaling emotion.

  There’s also evidence that you use different parts of your brain depending on how you interpret what you hear. Uri Hasson, the neuroscientist who showed us how listeners’ and speakers’ brain waves sync when there is understanding, conducted another intriguing fMRI experiment in his Princeton lab that showed the mind-altering effects of prejudicial information. He and his colleagues had subjects listen to an adapted version of the J. D. Salinger short story “Pretty Mouth and Green My Eyes,” which describes a telephone conversation between Arthur and Lee. Arthur tells Lee he suspects his wife is having an affair while an unidentified woman lies in bed next to Lee. Before hearing the story, half the subjects were told the woman in bed with Lee is Arthur’s wife. The other cohort was told Arthur is paranoid and the woman is Lee’s girlfriend.

  That one differing detail was enough to significantly change the subjects’ brain patterns while listening to the story so that Hasson could easily tell who thought the wife was a two-timer and who thought she was faithful. If that was all it took to separate people into neurally distinct groups, just think what’s happening in the brains of people habitually listening to, say, Fox News versus CNN. If you tell both factions the exact same thing, their brains will measurably hear it differently, as the signals are routed through distinct pathways depending on what they had previously heard. “It will reshape your mind,” Hasson told me. “It will affect the way you listen.” It’s an argument for listening to as many sources as possible to keep your brain as agile as possible. Otherwise, your brain becomes like a car that’s not firing on all cylinders or a computer circuit board where electrical impulses run through a limited number of channels, wasting its full capacity.

  Another interesting aspect of how we process auditory information is the right-ear advantage. Our language comprehension is generally better and faster when heard in the right ear versus the left. It has to do with the lateralization of the brain so that what one hears in the right ear is routed first to the left side of the brain, where Wernicke’s area is located. There’s a left-ear advantage when it comes to the recognition of emotional aspects of speech as well as the perception and appreciation of music and sounds in nature. The opposite may be true for left-handed people whose brain wiring may be reversed.

  So, you may be better at picking up on the meaning of speech versus the emotional feelings that underlie speech depending on which ear you use. This finding comes from studies of subjects listening to voices piped into either the left or right side of headphones as well as studies of patients who have had brain damage in the right or left side of the brain. Those with right-side injuries, for example, had the most trouble picking up on emotions.

  There was also an ingenious study by Italian researchers that showed, in noisy nightclubs, people more often offered their right ear when someone walked up and tried to talk to them and were also more likely to give someone a cigarette when the request was made in the right ear versus the left ear. It was a clever way of demonstrating the right-ear advantage in a natural setting, since there are not many environments where it’s possible to make a request into only one ear and have it not seem totally weird.

  This may have implications for which ear you want to incline toward a speaker or which ear you use to talk on the phone. For talking to your boss, tilt your head to the left so your right ear is up. If you’re having trouble figuring out whether your romantic partner is upset, switch your phone to the left ear. Do the reverse if you are left-handed. But you probably subconsciously choose the most advantageous ear already. For example, a left-handed female executive who works in the male-dominated, take-no-prisoners oil industry in Houston told me she always holds the phone to her left ear—which, for lefties like her, is the more logical, less emotional ear. “When I put a phone to my right ear, it seems like I can’t hear,” she said. “That’s not true, of course, but it feels that way.”

  Naomi Henderson, the focus group moderator, told me she’s noticed that when people tilt their heads to the right so their left ear is up, it usually signals that they are tapping into more emotional parts of themselves, which is the kind of information that is most valuable to her clients. So when she sees someone cock their head right, lifting that left ear, it prompts her to zero in and inquire what memories or images the product or issue they were discussing brought to mind. She discovered this through experience rather than a scientific experiment, but it makes sense given the left ear is usually the more emotional ear.

  Which ear do you use to talk on the phone? Which ear do you put forward when you’re straining to hear something? Do you use a different ear in different circumstances or with different people? It’s an interesting experiment and might indicate how you are processing information, or rather, what aspects of the information are taking precedence at that moment. Equally fascinating is to notice which ear others incline toward you and how that may change depending on the topic of the conversation.

  * * *

  We should probably back up at this point to talk about the actual mechanics of hearing, which is the necessary precursor to listening. We’ve talked about how auditory information is processed once inside the brain, but it’s worth taking a moment to appreciate how it gets in there. Let’s pause to consider the miracles that are our ears, the openings on either side of our heads that help us not only hear but also maintain our physical balance. You could say our ears help us get our bearings both physically and emotionally.

  The earliest vertebrates had inner ears, which were the beginnings of the vestibular—or balance—system. People who have had vertigo know all too well the importance of a functioning vestibular system. It senses the body’s acceleration and orientation in space and sends signals to the musculoskeletal system to keep us upright. Our slithering forebears’ primitive vestibular systems not only sensed which way was up but could also vibrate in response to pressure, first underwater and then in the open air. This was the beginning of hearing because what are sound waves but compressions of air? A Bach sonata, a garbage truck backing up, a mosquito’s whine—they’re all just air particles being scrunched together at regular intervals, kind of like an invisible inchworm moving up-down, up-down through space.
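
  To make those “regular intervals” concrete, here is a minimal sketch in Python (mine, not the book’s) of a pure tone as a pressure wave; the frequency, amplitude, and sample rate are illustrative choices, not figures from the text:

```python
import numpy as np

# One second of concert A (440 Hz): air pressure rising and falling
# 440 times per second. The frequency, amplitude, and sample rate here
# are illustrative assumptions, not values from the book.
sample_rate = 44_100                       # samples per second
t = np.arange(sample_rate) / sample_rate   # time stamps spanning one second
frequency = 440.0                          # compressions per second (Hz)
amplitude = 0.1                            # relative size of the pressure swing
pressure = amplitude * np.sin(2 * np.pi * frequency * t)  # the "up-down" motion
```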

  When sound waves reach our ears, the air compressions are funneled down the stiff, fleshy outside part of the ear known as the pinna, increasing the relative acoustic pressure by up to twenty decibels by the time they reach the ear canal. The nerve endings in there are incredibly dense. David Haynes, a professor of otolaryngology, neurosurgery, and hearing and speech sciences at Vanderbilt University in Nashville, Tennessee, told me that there are more nerve tendrils reaching into the ear per square centimeter than just about anywhere else in the body. “It developed that way over time to make us more protective because our ears are super important real estate,” he said. And those sensory nerves can refer sensations throughout the body, including to internal organs and erogenous zones, which is why people persist in sticking Q-tips in their ears despite dire warnings from doctors like Haynes that it can be harmful. It just feels so darn good to root around in there. They don’t call it “eargasm” for nothing.
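
  Decibels are logarithmic, so that twenty-decibel boost is larger than it may sound. A quick back-of-the-envelope conversion (my arithmetic, not the book’s): sound pressure level is defined as 20 · log10(p/p_ref), so a 20-decibel gain works out to roughly ten times the pressure amplitude.

```python
# Sound pressure level is logarithmic: gain_db = 20 * log10(p / p_ref).
# Inverting that shows what the pinna's roughly 20 dB funneling effect
# amounts to in raw pressure terms.
def pressure_ratio(gain_db: float) -> float:
    """Pressure-amplitude multiplier corresponding to a decibel gain."""
    return 10 ** (gain_db / 20)

print(pressure_ratio(20))  # 10.0 -- about ten times the pressure amplitude
```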

  At the other end of the ear canal, about an inch inside your head, the sound waves strike the tympanic membrane—your beautiful, pearlescent little eardrum—which vibrates neighboring bones with wonderfully descriptive names like the hammer, the anvil, and the stirrup. From there, the waves spiral around the fluid-filled cochlea, which looks like a snail shell (cochlea is Greek for snail). The cochlea is lined with tiny hair cells, each tuned to a different frequency. Given how important communication and cooperation have been to our species’ survival, it should come as no surprise that the hair cells tuned to the frequencies of human sounds are the most sensitive.

  Protruding from each hair cell is a bundle of bristles, called stereocilia, with each strand only as wide as the smallest wavelength of visible light. When sound waves nudge these filaments back and forth, it tickles nerve endings to set all sorts of cognitive and emotional processes in motion. So in the midst of all that ruckus at the airport, tiny hair cells tens of microns long registering infinitesimally small changes were how my dad recognized and responded to my voice.

  Most hearing loss comes from damage to those hair cells caused by loud noises. Viewed through an electron microscope, healthy stereocilia look like soldiers, standing at attention in precise formation. But when exposed to sounds as loud as an ambulance siren, they look like they’ve suffered an enemy attack, bent and flopped over.

  Your hair cells might recover if the noise wasn’t too loud and didn’t last too long. A typical conversation occurs at sixty decibels and doesn’t cause damage, but listen to music through earbuds at high volume, which is around one hundred decibels, and you’ll have permanent damage after just fifteen minutes. Lower the volume to a more moderate eighty-eight decibels and you’ll have damage in four hours. A jackhammer or jet engine can cause damage in less than thirty seconds.
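
  Those figures line up with a standard occupational rule of thumb, NIOSH’s 3-decibel exchange rate: eight hours at 85 decibels is the reference, and the safe exposure time halves for every 3 decibels above that. The book doesn’t name the rule, so reading the numbers this way is an inference on my part, but a small sketch reproduces them:

```python
def safe_minutes(level_db: float) -> float:
    """Approximate daily exposure limit in minutes under the 3 dB exchange
    rule (assumed here; the reference point is 8 hours at 85 dB)."""
    if level_db < 85:
        return float("inf")  # below the 85 dB reference: effectively no daily limit
    return 480 / 2 ** ((level_db - 85) / 3)  # 480 minutes = 8 hours at 85 dB

print(safe_minutes(88))   # 240.0 minutes -> the four hours quoted above
print(safe_minutes(100))  # 15.0 minutes  -> the fifteen minutes quoted above
```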

  A distressing number of everyday activities can damage your precious stereocilia, including drying your hair, using a blender, going to a rock concert, vacuuming, watching a movie in a movie theater, eating at a noisy restaurant, riding a motorcycle, and operating a power tool. Over time, the noisy insults can add up to significant hearing loss. This, of course, inhibits your ability to listen and disconnects you from the world. But audiologists say inserting a cheap pair of foam earplugs into your ears in noisy situations can go a long way toward preserving your hearing.

 
