I Can Hear You Whisper

by Lydia Denworth


  To better assess how people with hearing loss function in the real world, audiologists routinely test them “in noise” in the sound booth. The first time Lisa Goldin did that to Alex was the only time he put his head down on the table and refused to cooperate. She was playing something called “multitalker babble,” which sounded like simultaneous translation at the United Nations. Even for me, it was hard to hear the words Alex was supposed to pick out and repeat. Lisa wasn’t trying to be cruel. An elementary school classroom during lunchtime or small group work can sound as cacophonous as the United Nations. Even then, hearing children are learning from one another incidentally. Like so much else in life, practice would make it easier for Alex to do the same—though he’ll never get as much of this kind of conversation as the others—and the test would allow Lisa to see if there was any need to adjust his sound processing programs to help.

  Hearing in noise is such a big problem—and such an intriguing research question—that it has triggered a subspecialty in acoustic science known as the “cocktail party problem.” Researchers are asking, How does one manage to stand in a crowd and not only pick out but also understand the voice of the person with whom you’re making small talk amid all the other chatter of the average gathering? Deaf and hard-of-hearing people have their own everyday variation: the dinner table problem. Except that unlike hearing people at a party, deaf people can’t pick out much of anything. Even a mealtime conversation with just our family of five can be hard for Alex to follow, and a restaurant is usually impossible. His solution at a noisy table is to sit in my lap so I can talk into his ear, or he gives up and plays with my phone—and I let him.

  “If we understand better how the brain does it with normal hearing, we’ll be in a better position to transmit that information via cochlear implants or maybe with hearing aids,” says Andrew Oxenham, the auditory neuroscientist from the University of Minnesota. Intriguingly, understanding the cocktail party problem may help not only people with hearing loss but also automatic speech recognition technology. “We have systems that are getting better and better at recognizing speech, but they tend to fail in more complicated acoustic environments,” says Oxenham. “If someone else is talking in the background or if a door slams, most of these programs have no way of telling what’s speech and what’s a door slamming.”

  The basic question is how we separate what we want to listen to from everything else that’s going on. The answer is that we use a series of cues that scientists think of as a chain. First, we listen for the onset of new sounds. “Things that start at the same time and often stop at the same time tend to come from the same source. The brain has to stream those segments together,” says Oxenham. To follow the segments over time, we use pitch. “My voice will go up and down in pitch, but it will still take a fairly smooth and slow contour, so that you typically don’t get sounds that drastically alter pitch from one moment to the next,” says Oxenham. “The brain uses that information to figure out, well, if something’s not varying much in pitch, it probably all belongs to the same source.” Finally, it helps to know where the sound is coming from. “If one thing is coming from the left and one thing is coming from the right, we can use that information to follow one source and ignore another.”

  The ability to tell where a sound is coming from is known as spatial localization. It’s a skill that requires two ears. Anyone who has played Marco Polo in the pool as a child will remember that people with normal hearing are not all equally good at this, but it’s almost impossible for people with hearing loss. This became obvious as soon as Alex was big enough to walk around the house by himself.

  “Mom, Mom, where are you?” he would call from the hall.

  “I’m here.”

  “Where?”

  “Here.”

  Looking down through the stairwell, I could see him in the hall one floor below and perhaps fifteen feet away, looking everywhere but at me.

  “I’m here” wouldn’t suffice. He couldn’t even tell if I was upstairs or downstairs. I began to give the domestic version of latitude and longitude: “In the bathroom on the second floor.” Or “By the closet in Jake’s room.”

  To find a sound, those with normal hearing compare the information arriving at each ear in two ways: timing and intensity. If I am standing directly in front of Alex, his voice reaches both of my ears simultaneously. But if he runs off to my right to pet the dog, his voice will reach my right ear first, if only by a millionth of a second. The farther he moves to my right, the larger the difference in time. There can also be a difference in the sound pressure level or intensity as sounds reach each ear. If a sound is off to one side, the head casts a shadow and reduces the intensity of the sound pressure level on the side away from the source.

  Time differences work well for low-frequency waves. Because high-frequency waves are smaller and closer together, they can more easily be located with intensity differences. At 1,000 Hz, the sound level is about eight decibels louder in the ear nearer the source, but at 10,000 Hz it could be as much as thirty decibels louder. At high frequencies, we can also use our pinna (the outermost part of the ear) to figure out if a sound is in front of us or behind. Having two ears, then, helps with the computations our brain is constantly performing on the information it is taking in. We can make use of the inherent redundancies to compare and contrast information from both ears.
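
  To make the arithmetic behind these timing cues concrete, here is a rough back-of-the-envelope sketch in Python, using the classic Woodworth spherical-head approximation rather than anything from the book; the head radius and the sample angles are illustrative assumptions.

```python
import math

SPEED_OF_SOUND = 343.0  # meters per second in air at room temperature
HEAD_RADIUS = 0.0875    # meters; a typical adult value, assumed for illustration

def interaural_time_difference(azimuth_degrees: float) -> float:
    """Woodworth's spherical-head estimate of how much later a distant sound
    arrives at the far ear, in seconds (0 degrees = straight ahead)."""
    theta = math.radians(azimuth_degrees)
    return (HEAD_RADIUS / SPEED_OF_SOUND) * (theta + math.sin(theta))

for azimuth in (0, 10, 45, 90):
    microseconds = interaural_time_difference(azimuth) * 1_000_000
    print(f"{azimuth:3d} degrees off center -> about {microseconds:4.0f} microseconds")
```

  For a source straight ahead the difference is zero; for one ninety degrees off to the side it grows to roughly six or seven ten-thousandths of a second, and listeners with normal hearing can detect differences far smaller than that.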

  Hearing well in noise requires not just two ears but also a level of acoustic information that isn’t being transmitted in today’s implant. A waveform carries information both in big-picture outline and in fine-grained detail. Over the past ten years, sound scientists have been intensely interested in the difference, which comes down to timing. To represent the big picture, they imagine lines running along the top and bottom of a particular sound wave, with the peaks and troughs of each swell bumping against them. The resulting outline is known as the envelope of the signal, a broad sketch of its character and outer limits that captures the slowly varying overall amplitude of the sound. What Blake Wilson and Charlie Finley figured out when they created their breakthrough speech processing program, CIS, was how to send the envelope of a sound as instructions to a cochlear implant.

  The rest of the information carried by the waveform is in the fine-grained detail found inside the envelope. This “temporal fine structure” carries richness and depth. If the envelope is the equivalent of a line drawing of, for example, a bridge over a stream, fine structure is Monet’s painting of his Japanese garden at Giverny, full of color and lush beauty. The technical difference between the two is that the sound signal of the envelope changes more slowly over time, at rates of no more than a few hundred times per second, whereas “fine structure is the very rapidly varying sound pressure, the phase of the signal,” says Oxenham. In normal hearing, the fine structure can vary more than a thousand times a second, and the hair cells can follow along.
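
  A common way signal-processing texts make this split explicit is the Hilbert transform: the magnitude of the so-called analytic signal traces the envelope, and its phase carries the temporal fine structure. The Python sketch below, built on a made-up test tone, is only an illustration of that decomposition, not the processing code Wilson and Finley wrote.

```python
import numpy as np
from scipy.signal import hilbert

# A made-up test signal: a 1,000 Hz tone whose loudness swells and fades four
# times a second, standing in for the slow amplitude changes of speech.
fs = 16000                                   # samples per second
t = np.arange(0, 1.0, 1.0 / fs)
loudness = 0.5 * (1 + np.sin(2 * np.pi * 4 * t))    # slow, 4 Hz variation
signal = loudness * np.sin(2 * np.pi * 1000 * t)    # fast, 1,000 Hz carrier

# The analytic signal splits the waveform into the two kinds of information:
analytic = hilbert(signal)
envelope = np.abs(analytic)                  # the slowly varying outline
fine_structure = np.cos(np.angle(analytic))  # the rapidly varying phase detail
```

  Plot the two and the envelope looks like the slow swell of loudness, while the fine structure is the thousand-cycles-per-second wiggle packed inside it.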

  An implant isn’t up to that task. So far, researchers have been stymied by the limits of electrical stimulation—or more precisely by its excesses. When multiple electrodes stimulate the cochlea, in an environment filled with conductive fluid, the current each one sends spreads out beyond the intended targets. Hugh McDermott, one of the Melbourne researchers, uses an apt analogy to capture the problem. He describes the twenty-two electrodes in the Australian cochlear implant as twenty-two cans of spray paint, the neurons you’re trying to stimulate as a blank canvas, and the paint itself as the electrical current running between the two. “Your job in creating a sound is to paint something on that canvas,” says McDermott. “Now the problem is, you can turn on any of those cans of paint anytime you like, but as the paint comes out it spreads out. It has to cross a couple meters’ distance and then it hits the canvas. Instead of getting a nice fine line, you get a big amorphous blob. To make a picture of some kind, you won’t get any detail. It’s like a cartoon rather than a proper painting.” In normal hearing, by contrast, the signals sent by hair cells, while also electrical, are as controlled and precise as the narrowest of paintbrushes.

  So while it seems logical that more electrodes lead to better hearing, the truth is that because of this problem of current spread, some of the electrodes cancel one another out. René Gifford, of Vanderbilt University, is working on a three-way imaging process that allows clinicians to determine—or really improve the odds on guessing—which electrodes overlap most significantly, and then simply turn some off. “Turning off electrodes is the newest, hottest thing,” says Michael Dorman of Arizona State University, who shared Gifford’s results with me. Gifford is a former member of Dorman’s laboratory, so he’s cheering her on. Half of those she tested benefited from this strategy. Other researchers are working on other ideas to solve the current-spread problem. Thus far, the best that implant manufacturers have been able to do is offer settings that allow a user to reduce noise if the situation requires it. “It’s more tolerable to go into noisy environments,” says Dorman. “They may not understand anything any better, but at least they don’t have to leave because they’re being assaulted.”

  • • •

  In a handful of labs like Dorman’s, where Ayers and Cook are hard at work, the cocktail party problem meets the cochlear implant. “You need two ears of some kind to solve the cocktail party problem,” says Dorman, a scientist who, like Poeppel, enjoys talking through multiple dimensions of his work. For a long time, however, no one with cochlear implants had more than one. The reasons were several: a desire to save one ear for later technology; uncertainty about how to program two implants together; and—probably most significant—cost and an unwillingness on the part of insurance companies to pay for a second implant. As I knew from my experience with Alex, for a long time it was also uncommon to use an implant and hearing aid together. Within the past decade, and especially the past five years, that has changed dramatically. If a family opts for a cochlear implant for a profoundly deaf child, it is now considered optimal to give that child two implants simultaneously at twelve months of age—or earlier. In addition, as candidacy requirements widen, there is a rapidly growing group of implant users with considerable residual hearing in the unimplanted ear. Some even use an implant when they have normal hearing in the other ear.

  Dorman’s goal has been to put as many people as possible with either two cochlear implants (“bilaterals”) or an implant and a hearing aid (“bimodals”) through the torture chamber of the eight-loudspeaker array to look for patterns in their responses, both to determine if two really are better than one and, if so, to better understand how and why.

  For John Ayers, Cook doesn’t play the restaurant noise as loud as it would be in real life. With the click of a computer mouse, she can adjust the signal-to-noise ratio—the relative intensity of the thing you are trying to hear (the signal) versus all the distracting din in the background (the noise). She makes the noise ten decibels quieter than the talker, even though the difference would probably be only two decibels in a truly noisy restaurant. She needs first to establish a level at which Ayers will be able to have some success but not too much, so as to allow room for improvement. Noise that’s so loud he can’t make out a word or so quiet he gets everything from the start doesn’t tell the researchers much. Eventually, Cook settles on a level that is six decibels quieter than the signal. Ayers repeats the test with one implant, then the other, then both together—each time trying different noise conditions, with the noise coming from just one loudspeaker or from all of them. From the computers where I sit, it’s hard to see him through the observation window, so he’s a disembodied voice saying things like, “He was letting Joe go,” when it should have been, “He went sledding down the hill.”
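
  The signal-to-noise ratio Cook is adjusting is simply a ratio of powers expressed in decibels. A minimal Python sketch of that arithmetic follows, assuming the speech and the babble are already in hand as arrays of samples; the helper is hypothetical, not the lab’s software.

```python
import numpy as np

def mix_at_snr(speech: np.ndarray, babble: np.ndarray, snr_db: float) -> np.ndarray:
    """Scale the babble so the speech is snr_db decibels more powerful,
    then add the two together. (Hypothetical helper, not the lab's code.)"""
    speech_power = np.mean(speech ** 2)
    babble_power = np.mean(babble ** 2)
    # Decibels compare powers: snr_db = 10 * log10(speech_power / babble_power)
    target_babble_power = speech_power / (10 ** (snr_db / 10))
    scaled_babble = babble * np.sqrt(target_babble_power / babble_power)
    return speech + scaled_babble
```

  At the ten-decibel setting she starts with, the speech carries ten times the power of the babble; at the two decibels of a real restaurant, the two are nearly matched.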

  The sentences Cook asks Ayers to repeat were created in this very lab in an effort to improve testing by providing multiple sentence lists of equivalent difficulty. Known as the AzBio sentences, they number one thousand in all, recorded by Dorman and three other people from the lab. They’re widely used. That meant, back home in New York City, I could still hear Dorman’s deep, sonorous voice speaking to me when I observed a test session in Mario Svirsky’s laboratory. To relieve the tedium of the sound booth, Dorman and colleagues intentionally made some of the sentences amusing.

  “Stay positive and it will all be over.” Ayers got that one.

  “You deserve a break today.” Ayers heard it as: “You decided to fight today.”

  “The pet monkey wore a diaper.” A pause and then Ayers says, incredulously: “Put the monkey in a diaper?”

  Cook scores the sentences based on how many words Ayers gets right out of the total. With only one implant, Ayers scored between 30 and 50 percent correct. With both implants together, he scored as high as 80 percent.

  Dorman and Cook use the same loudspeaker array to test the ability of cochlear implant users to localize sound, which is restored by two implants but only to a degree, as implants can work with intensity cues but not timing. Hearing aids, on the other hand, can handle timing cues, since the residual hearing they amplify is usually in the low frequencies. The average hearing person can find the source of a sound to within seven degrees of error. Bilateral implant patients can do it to about twenty degrees. “In the real world, that’s fine,” says Dorman. It works because the bilateral patients have been given the gift of a head shadow effect. “If you have two implants, you’ll always have one ear where the noise is being attenuated by the head,” says Dorman. He sees patients improve by 30 to 50 percent.

  With both a hearing aid and a cochlear implant, Alex uses two ears, too, so it seemed he ought to have had an easier time localizing sound than he did. During my visit to Arizona, I finally understood why localizing was still so hard for him. Bimodal patients—those with an implant and a hearing aid—do better than people with just one usable ear, who can’t localize at all, but the tricks that the brain uses to analyze sound coming into two different ears require something bimodal patients don’t have: two of the same kind of ears. “Either will do,” says Dorman. “For this job of localizing, you need two ears with either good temporal cues or good intensity cues.” A hearing aid gives you the first, an implant gives you the second, but the listener with one of each is comparing apples to oranges.

  The work with bilateral and bimodal patients is a sign of the times. The basic technology of implants hasn’t actually changed much in twenty years, since the invention of CIS processing. Absent further improvements in the processing program or solutions to the problem of spreading electrical current, the biggest developments today have less to do with how implants work and more with who gets them, how many, and when. Just because the breakthroughs are less dramatic these days, says Dorman, that doesn’t mean they don’t matter. He has faith in the possibilities of science and says, “You have to believe that if we can keep adding up the little gains, we get someplace.” One of the projects he is most excited about is a new method that uses modulation discrimination to determine if someone like Alex would do better with a hearing aid or a second implant. “It allows you to assess the ability of the remaining hearing to resolve the speech signal. So far, it’s more useful than the audiogram.” The project is still in development, so it won’t be in clinical use for several years, but the day they realized how well the strategy worked was a happy one. “You keep playing twenty questions with Mother Nature and you usually lose,” says Dorman. “Every once in a while, you get a little piece of the answer, steal the secret. That’s a good day.”

  25

  BEETHOVEN’S NIGHTMARE

  Alex waved with delight, thrilled to see me in the middle of a school day. Head tilted, lips pressed together, big brown eyes bright, he wore his trademark expression, equal parts silly and shy. His body wiggled with excitement. I waved back, trying to look equally happy. But I was nervous. Alex and the other kindergartners at Berkeley Carroll were going to demonstrate to their parents what they were doing in music. Three kindergarten classes had joined forces, so there were nearly sixty children on the floor and at least as many parents filling the bleachers of the gym, which doubled as a performance space.

  It had been almost exactly three years since Alex’s implant surgery. Now he was one of this group of happy children about to show their parents what they knew about pitch, rhythm, tempo, and so on. Implants, however, are designed to help users make sense of speech. Depending on your perspective, music is either an afterthought or the last frontier. Or was. Some of the same ideas that could improve hearing in noise might also make it possible for implant users to have music in their lives. I was thrilled to know that people were out there working on this, but they couldn’t help Alex get through kindergarten. Music appreciation and an understanding of its basic elements were among the many pieces of knowledge he and the other children were expected to acquire. I feared—even assumed—music was one area where his hearing loss made the playing field too uneven.

  Music is much more difficult than speech for the implant’s processor to accurately translate for the brain. As a result, many implant recipients don’t enjoy listening to music. In her account of receiving her own implant, Wired for Sound, Beverly Biderman noted that for some recipients, music sounded like marbles rolling around in a dryer. After she was implanted, Biderman was determined to enjoy music and worked hard at it. (Training does help, studies show.) For every twenty recordings Biderman took out of the library to try, eighteen or nineteen sounded “awful,” but one or two were beautiful and repaid her effort.

  Speech and music do consist of the same basic elements unfolding over time to convey a message. Words and sentences can be short or long, spaced close together or with big gaps in between—in music we call that rhythm. The sound waves of spoken consonants and vowels have different frequencies and so do musical notes—that’s pitch. Both spoken and musical sounds have what is known as “tonal color,” something of a catchall category to describe what’s left after rhythm and pitch—timbre, the quality that allows us to recognize a voice or to distinguish between a trumpet and a clarinet.

 
