In keeping with the biological focus of the Salk Institute and with the explosion of interest in the brain in the 1980s and 1990s, Bellugi and Klima also dug into the neurobiological foundations of sign language. The lab is credited with the groundbreaking discovery that in native signers—deaf children of deaf parents—the brain organization of spoken and sign languages is remarkably similar. “This confirms at a neurological level that Sign is a language and is treated as such by the brain,” wrote Sacks. Grammar and semantics are processed separately, for instance, just as they are in spoken languages. Not everything in the brain is the same, however. Helen Neville has found that an area in the right hemisphere does play a unique role in sign language, showing activity that isn’t seen in spoken language. It’s still unclear exactly why this area is recruited; one theory is that it enables the use of space for grammar. And there’s a catch. This right-hemisphere activity is seen only in native signers, those who learn from a very early age. Like other aspects of language processing, it has a sensitive period and is not seen in anyone who learns ASL after puberty.
Research into ASL has always had a political cast, because ASL itself has such potency. Barbara Kannapell, who founded Deaf Pride in 1972, wrote: “ASL is the only thing we have that belongs to Deaf people completely.” Not surprisingly, this filter colored some of the scientific work in the early days. I caught up with Carol Padden and Karen Emmorey at a conference in Boston, where both were presenting on their work, to talk about the trajectory of sign language research over the years. “Initially, it was about showing how it was the same [as spoken languages],” says Padden, who won a MacArthur “genius” grant in 2010 for her study of a Bedouin community where a high incidence of deafness led to the development of a new sign language currently in use by a fourth generation. “Now it’s okay to work on how it’s different from spoken language.” It’s also okay to embrace aspects of ASL, like gesture and iconicity—the pictorial representations that do in fact exist in some ASL signs—that in the past hewed too closely to the negative view of sign language, says Emmorey. There is pantomime and gesture in sign language, she says, signing DANCE and swaying at the same time by way of example, but her work has shown that such movements use different brain areas from those employed to produce an ASL verb.
The way ASL is learned may change as well. “There’s a lot of romanticism about learning through ASL,” says Padden. “A lot of people say [because] they’re closing the deaf schools, ASL won’t have a context for people to learn the language.” But she has met enough mainstreamed deaf children who’ve learned to sign as a second language that she’s less worried on that front. “I think it’s reorganizing,” she says. “ASL is going to be learned in different ways. We’re paying less attention to geography and more to identity.”
For Emmorey, who is hearing, an interest in sign language had nothing to do with politics and everything to do with the brain. With a PhD in psycholinguistics from UCLA, she says, “I got hooked in terms of thinking about sign language as a tool to ask questions about language and the mind.” When she started, she was a complete beginner at ASL, studying how sign language was organized in the brain by day and taking ASL lessons at night at a community college, the only place she could find a course. Today, at her neurobiology lab at the University of California, San Diego, all communication is in ASL. “I knew I’d arrived when I gave a lecture on cognitive neuroscience in ASL at Gallaudet … and people got it,” she says with a laugh.
• • •
Reliable statistics on the number of people in the United States who use ASL don’t really exist. The US Census asks only about use of spoken languages other than English. There are more than two million deaf people nationally, of whom between a hundred thousand and five hundred thousand are thought to communicate primarily through ASL. That equals less than a quarter of 1 percent of the national population. People have begun to throw around the statistic that ASL is the third—or fourth—most common language in the country. For that to be true, there would have to be something approaching two million users, which seems unlikely. Anecdotally, interest in ASL does seem to be growing. It is far more common as a second language in college and even high school. After Hurricane Sandy, New York City’s mayor Michael Bloomberg was accompanied at every press conference by an interpreter who became a minor celebrity for her captivating signing. But people have to have more than a passing acquaintance with signing to qualify as users of the language, just as many who have some high school French would be hard-pressed to say more than bonjour and merci in Paris and can’t be considered French speakers.
Still, bilingualism is the hope of the deaf community. Its leaders agree that deaf Americans need to know English, the language of reading and writing in the United States, but they also value sign language as the “backbone” of the Deaf world. “The inherent capability of children to acquire ASL should be recognized and used to enhance their cognitive, academic, social, and emotional development,” states the National Association of the Deaf. “Deaf and hard of hearing children must have the right to receive early and full exposure to ASL as a primary language, along with English.”
The case for bilingualism has been helped by Ellen Bialystok, a psychologist at York University in Toronto, and the most high-profile researcher on the subject today. Her work has brought new appreciation of the potential cognitive benefits of knowing two languages. “What bilingualism does is make the brain different,” Bialystok told an interviewer recently. She is careful not to say the bilingual brain is “categorically better,” but she says that “most of [the] differences turn out to be advantages.”
Her work has helped change old ideas. It was long thought that learning more than one language simply confused children. In 1926, one researcher suggested that using a foreign language in the home might be “one of the chief factors in producing mental retardation.” As recently as a dozen years ago, my friend Sharon, whose native language is Mandarin Chinese, was told by administrators to speak only English to her son when he started school in Houston. It is true that children who are bilingual will be a little slower to acquire both languages and, furthermore, that they will have, on average, smaller vocabularies in both than a speaker of one language would be expected to have. Their grammatical proficiency will also be delayed. However, Bialystok has found that those costs are offset by a gain in executive function, the set of skills we use to multitask, sustain attention, and engage in higher-level thinking—some of the very skills Helen Neville was looking to build up in preschoolers and that have been shown to boost academic achievement.
In one study, Bialystok and her colleague Michelle Martin-Rhee asked young bilingual and monolingual children to sort blue circles and red squares into digital boxes—one marked with a blue square and the other with a red circle—on a computer screen. Sorting by color was relatively easy for both groups: They put blue circles into the bin marked with a blue square and red squares into the box marked with a red circle. But when they were asked to sort by shape, the bilinguals were faster to resolve confusion over the conflicting colors and put blue circles into the box with the red circle and red squares into the bin with the blue square.
When babies are regularly exposed to two languages, differences show up even in infancy, “helping explain not just how the early brain listens to language, but how listening shapes the early brain,” wrote pediatrician Perri Klass in The New York Times. The same researchers who found that monolingual babies lose the ability to discriminate phonetic sounds from other languages before their first birthday showed that bilingual babies keep up that feat of discrimination for longer. Their world of sound is literally wider, without the early “perceptual narrowing” that babies who will grow up to speak only one language experience. Janet Werker has shown that babies with bilingual mothers can tell their moms’ two languages apart but prefer both of them over other languages.
One explanation for the improvement is the practice bilinguals get switching from one language to the other. “The fact that you’re constantly manipulating two languages changes some of the wiring in your brain,” Bialystok said. “When somebody is bilingual, every time they use one of their languages the other one is active, it’s online, ready to go. There’s a potential for massive confusion and intrusions, but that doesn’t happen… The brain’s executive control system jumps into action and takes charge of making the language you want the one you’re using.” Bialystok has also found that the cognitive benefits of bilingualism help ward off dementia later in life. Beyond the neurological benefits, there are other acknowledged reasons to learn more than one language, such as the practical advantages of wider communication and greater cultural literacy.
It’s quite possible that some of the bias still found in oral deaf circles against sign language stems from the old way of thinking about bilingualism. It must be said, though, that it’s an open question whether the specific cognitive benefits Bialystok and others have found apply to sign languages. Bialystok studies people who have two or more spoken languages. ASL travels a different avenue to reach the brain even if it’s processed similarly once it gets there. “Is it really just having two languages?” asks Emmorey. “Or is it having two languages in the same modality?” Bits of Spanglish aside, a child who speaks both English and Spanish is always using his ears and mouth. He must decide whether he heard “dog” or perro and can say only one or the other in reply. “For two spoken languages, you have one mouth, so you’ve got to pick,” says Emmorey. A baby who is exposed to both English and sign language doesn’t have to do that. “If it’s visual, they know it’s ASL. If it’s auditory, they know it’s English. It comes presegregated for you. And it’s possible to produce a sign and a word at the same time. You don’t have to sit on [one language] as much.” Emmorey is just beginning to explore this question, but the one study she has done so far, in collaboration with Bialystok, suggests that the cognitive changes Bialystok has previously found stem from the competition between two spoken languages rather than the existence of two language systems in the brain.
Whether ASL provides improvement in executive function—or some other as yet unidentified cognitive benefit—Emmorey argues for the cultural importance of having both languages. “I can imagine kids who get pretty far in spoken English and using their hearing, but they’re still not hearing kids. They’re always going to be different,” she says. Many fall into sign language later in life. “[They] dive into that community because in some ways it’s easy. It’s: ‘Oh, I don’t have to struggle to hear. I can just express myself, I can just go straight and it’s visual.’” She herself has felt “honored and special” when she attends a deaf cultural event such as a play or poetry performance. “It’s just gorgeous. I get this [experience] because I know the language.”
Perhaps the biggest problem with achieving bilingualism is the practical one of getting enough exposure and practice in two different languages. When a reporter asked Bialystok if her research meant that high school French was useful for something other than ordering a special meal in a restaurant, Bialystok said, “Sorry, no. You have to use both languages all the time. You won’t get the bilingual benefit from occasional use.” It’s true, too, that for children who are already delayed in developing language, as most deaf and hard-of-hearing children are, there might be more reason to worry over the additional delays that can come with learning two languages at once. The wider the gap gets between hearing and deaf kids, the less likely it is ever to close entirely. When parents are bilingual, the exposure comes naturally. For everyone else, it has to be created.
• • •
I didn’t know if Alex would ever be truly bilingual, but the lessons with Roni were a start. In the end, they didn’t go so well, through no fault of hers. It was striking just how difficult it was for the boys, who were five, seven, and ten, to pay visual attention, to adjust to the way of interacting that was required in order to sign. It didn’t help that our lessons were at seven o’clock at night and the boys were tired. I spent more time each session reining them in than learning to sign. The low point came one night when Alex persisted in hanging upside down and backward off an armchair.
“I can see her,” he insisted.
And yet he was curious about the language. I could tell from the way he played with it between lessons. He decided to create his own version, which seemed to consist of opposite signs: YES was NO and so forth. After trying and failing to steer him right, I concluded that maybe experimenting with signs was a step in the right direction.
Even though we didn’t get all that far that spring, there were other benefits. At the last session, after I had resolved that one big group lesson in the evening was not the way to go, Alex did all his usual clowning around and refusing to pay attention. But when it was time for Roni to leave, he gave her a powerful hug that surprised all of us.
“She’s deaf like me,” he announced.
24
THE COCKTAIL PARTY PROBLEM
To my left, a boisterous group is laughing. To my right, there’s another conversation under way. Behind me, too, people are talking. I can’t make out the details of what they’re saying, but their voices add to the din. They sound happy, as if celebrating. Dishes clatter. Music plays underneath it all.
A man standing five feet in front of me is saying something to me.
“I’m sorry,” I call out, raising my voice. “I can’t hear you.”
Here in the middle of breakfast at a busy restaurant called Lou Malnati’s outside Chicago, the noise is overpowering. Until it’s turned off.
I’m not actually at a restaurant; I’m sitting in a soundproof booth in the Department of Speech and Hearing Science at Arizona State University. My chair is surrounded by eight loudspeakers, each of them relaying a piece of restaurant noise. The noise really was from Lou Malnati’s, but it happened some time ago. An engineer named Lawrence Revit set up an array of eight microphones in the middle of the restaurant’s dining room and recorded the morning’s activities. The goal was to create a real-world listening environment, but one that can be manipulated. The recordings can be played from just one speaker or from all eight or moved from speaker to speaker. The result is remarkably real—chaotic and lively, like so many restaurants where you have to lean in to hear what the person sitting across from you is saying.
The man at the door trying to talk to me is John Ayers, a jovial eighty-two-year-old Texan who has a cochlear implant in each ear. Once the recording has been switched off, he repeats what he’d said earlier.
“It’s a torture chamber!” he exclaims with what I have quickly learned is a characteristic hearty laugh.
Ayers has flown from Dallas to Phoenix to willingly submit himself to this unpleasantness in the name of science. Retired from the insurance business, he is a passionate gardener (he brought seeds for the lab staff on his last visit) and an even more passionate advocate for hearing. Since receiving his first implant in 2005 and the second early in 2007, he has found purpose serving as a research subject and helping to recruit other participants.
“Are you ready?” asks Sarah Cook, the graduate student who manages ASU’s Cochlear Implant Lab and will run the tests today.
“Let me at it!” says Ayers.
After he bounds into the booth and takes his seat, Cook closes the two sets of doors that seal him inside. She and I sit by the computers and audiometers from which she’ll run the test. For the better part of the next two hours, Ayers sits in the booth, trying to repeat sentences that come at him through the din of the restaurant playing from one or more speakers.
• • •
Hearing in noise remains the greatest unsolved problem for cochlear implants and a stark reminder that although they now provide tremendous benefit to many people, the signal they send is still exceedingly limited. “One thing that has troubled me is sometimes you hear people in the field talking about [how people have] essentially normal hearing restored, and that’s just not true,” says Don Eddington of MIT. “Once one is in a fairly noisy situation, or trying to listen to a symphony, cochlear implants just aren’t up to what normal hearing provides.”
It wasn’t until Alex lost his hearing that I properly heard the noise of the world. Harvey Fletcher of Bell Labs described noise as sounds to which no definite pitch can be assigned, and as everything other than speech and music. Elsewhere, I’ve seen it defined as unwanted sound. The low hum of airplane cabins or car engines, sneakers squeaking and balls bouncing in a gym, air conditioners and televisions, electronic toys, a radio playing, Jake and Matthew talking at once, or tap water running in the kitchen. All of it is noise and all of it makes things considerably harder for Alex. Hearing aids aren’t selective in what they amplify. Cochlear implants can’t pick and choose what sounds to process. So noise doesn’t just make it harder to understand what someone is saying; ironically, it can also be uncomfortably loud for a person with assistive devices. Some parents of children with implants or hearing aids stop playing music completely at home in an effort to control noise levels. Many people with hearing loss avoid parties or restaurants. We haven’t gone quite that far, but I was continually walking into familiar settings and hearing them anew.