The First Word: The Search for the Origins of Language

by Christine Kenneally


  Today, like the study of language evolution itself, the field of gesture studies is undergoing a small revolution. More and more people are engaging in experimental studies of gesture, and researchers are discovering how complicated and interesting it can be. Conference organizers in the last few years have been surprised at the number of scholars who want to attend meetings about gesture. This mini-boom is part of the general trend to reconsider what used to be called the epiphenomena of language. In a relatively short amount of time, researchers have shown that speech and gesture, as well as gesture and thought, interact as language is being learned and even after it has been fully acquired.

  Traditionally, developmental psychologists thought that children gestured simply because they saw their parents do so. They believed that infants acquired language separately from any gesturing and in a predictable pattern. There was a one-word stage, followed by a two-word stage, and once a child crossed a critical threshold into a three-word stage, her three words very rapidly became many structured sentences. Seen this way, language acquisition was quite miraculous: children went from one word to many in the space of two years.

  Experts now agree the picture is more complicated. Strictly speaking, there is no one-word stage. The first sign of language is usually a gesture, which infants will make at about ten months. The best way to think about this process is that it begins with a one-element stage, and that element may be a word or a gesture, such as pointing. If you have ever seen a baby sit and whack his high chair table imperiously, demanding his lunch, you have witnessed the origins of language in the individual. Following the first one-element stage, there is a two-element stage, when word and gesture appear together. This combination can function like a sentence, as when a child says “eat” and points at a banana at the same time. Gesture-and-speech combinations increase between fourteen and twenty-two months. Children also show a three-element stage using both gesture and speech before producing three elements in speech alone.10 Following this stage, speech starts to emerge as the prime method of communication.

  These findings suggest that gesture doesn’t simply precede language but is fundamentally tied to it.11 In fact, gesture and speech are so integral to each other in children that researchers are able to predict a child’s language ability at three years of age based on her gesturing at one year. They can also diagnose delays or problems that children might be having with language by examining their gestures.

  For a long time the trend was to regard infants, much like animals, as mute and unthinking. Until they learned their first few words, it was thought that not a lot was going on inside their heads. And certainly, if you removed gesture from the language acquisition picture, children did seem eventually to pull language out of thin air. But when you take gesture into account, you can see the preliminary scaffolding of language even before a child has spoken a word, and the acquisition of language, while still incredible, looks a little less mysterious.

  Developmental psychologists now talk about the cross-modality of language, meaning that language is expressed in various ways. Instead of the image of a brain issuing language to a mouth, from which it emerges as imperfect speech, think, rather, of language emerging in the child as an expression of its entire body, articulating both limbs and mouth at the same time.

  Before the teaching of sign language, and more recently the use of cochlear implants, became widespread, the fate of deaf children was contingent on their family situation. Most children who are born without hearing now receive systematic education in schools designed to help them, but there are still rare cases where children who are born deaf do not receive sign language instruction. Whether the reasons are socioeconomic or otherwise, these children are generally spoken to by their parents using normal language and gesture, and they must invent their own ways to express what they want. Susan Goldin-Meadow, who investigates gesture at her laboratory in Chicago, has studied a number of these children. The gestural language they invent is called homesign. Goldin-Meadow’s work on homesign and other gestures reveals a great deal about the way the ancient platform of gesture works in modern humans.

  The versions of homesign used by each of these children share a number of traits, including the fact that they generally feature a stable list of words and a kind of syntax. Certain words will appear in a particular spot in a sentence depending on the role they take. There is structure in homesign words, as well as in homesign sentences. The symbols that homesigning children invent are not specific to a particular situation or time. For example, they might use a “twist” gesture to ask someone to open a jar, or to indicate that a jar has been twisted open, or to observe that it is possible to twist a jar open. Homesign symbols are also like words in that the number that can be invented appears to be limitless, as well as stable.12 Even though these children are exposed to a normal combination of gesture and speech by their parents, their own homesign doesn’t resemble their parents’ gesturing. Children who develop homesign pass through stages of development similar to those of hearing children who are learning speech. Moreover, the linear ordering of elements in a homesign utterance appears to be universal, regardless of the language community the children are born into. Interestingly, if hearing people gesture without speaking, their gestures start to look like the signs of homesigners.

  How is it possible that these homesign children who are spoken to (even if they can’t hear the words) and gestured at end up gesturing communicatively in the absence of a sign education? Where does this facility for structure and words come from? Goldin-Meadow believes that sentence- and word-level structure is inherent.

  Altogether, Goldin-Meadow’s studies show that gesture is highly versatile. It is used both with speech and without, and it differs depending on whether it is used with the spoken word. It takes a backseat when it accompanies language, and it becomes much more mimetic when it is used alone. When gesture carries the full burden of communication, says Goldin-Meadow, it becomes much more segmented. She likens it to beads on a string.

  Homesign may represent an extreme example of the way that gesture and speech interact, but other recent experiments have demonstrated how speech and gesture can depend on each other. It’s been shown that adults will gesture differently depending on the language they are speaking and the way that their language encodes specific concepts, like action. For example, experimenters have compared the idiosyncratic way that Turkish and English speakers describe a cartoon that depicts a character rolling down a hill. Asli Özyürek, a research associate at the Max Planck Institute for Psycholinguistics, compared the performance of children and adults in this task. She showed that initially children produce the same kinds of gestures regardless of the language they are speaking. It takes a while for gesture to take on the characteristic forms of a specific language. When it does, people change their gestures depending on the syntax of the language they are speaking. At this stage, instead of gesture’s providing occasional, supplementary meaning to speech without being connected to it in any real way, language and gesture appear to interact online in expression.

  In another experiment Goldin-Meadow asked children and adults to solve a particular type of math problem.13 After they completed the task, the participants were asked to remember a list of words (for the children) and letters (for the adults). Subjects were then asked to explain at a blackboard how they had solved the problem. Goldin-Meadow and her colleagues found that when the experimental subjects gestured during their explanation, they later remembered more from the word list than when they did not gesture. She noted that while people tend to think of gesturing as reflecting an individual’s mental state, it appears that gesture contributes to shaping that state. In the case of her subjects, their gesturing somehow lightened the mental load, allowing them to devote more resources to memory.

  Gesture interacts with thought and language in other complicated ways. In another experiment Goldin-Meadow asked a group of children to solve a different kind of problem.14 She then videotaped them describing the solution and noted the way they gestured as they answered. In one case, the children were asked if the amount of water in two identical glasses was the same. (It was.) One of the glasses was then poured into a low and wide dish. The children were asked again if the amount of water was the same. They said it wasn’t. They justified their response by describing the height of the water, explaining it’s different because this one is taller than that one. As they spoke, some of the children produced what Goldin-Meadow calls a gesture-speech match; that is, they said the amounts of water in the glass and the bowl were unequal, and as they did, they indicated the different heights of the water with their gesture (one hand at one height, the other hand at the other height). Other children who got the problem wrong showed an interesting mismatch between their gesture and their speech. Although these children also said that the amount of the water was different because the height was different, gesturally they indicated the width of the dishes. “This information,” said Goldin-Meadow, “when integrated with the information in speech, hints at the correct answer—the water may be higher but it’s also skinnier.”

  The hand movements of the mismatch children suggested that they unconsciously knew what the correct response was. And it turned out that when these children were taught the relationship between the two amounts of water after the initial experiment, they were much closer to comprehension than those whose verbal and gestural answers matched—and were wrong.

  Gestures also affect listeners. In another experiment children were shown a picture of a character and later asked what he had been wearing. As the researcher posed the question, she made a hat gesture above her head. The children said that the character was wearing a hat even though he wasn’t.

  Such complicated dependencies and interactions demonstrate that speech and gesture are part of the same system, say Goldin-Meadow and other specialists. Moreover, this system, made up of the two semi-independent subsystems of speech and gesture, is also closely connected to systems of thought. Perhaps we should designate another word entirely for intentional communication that includes gesture and speech. Whatever it should be, Goldin-Meadow and others have demonstrated that this communication is fundamentally embodied.15

  The most important effect of this research is that it makes it impossible to engage with the evolution of modern language without also considering the evolution of human gesture. Precisely how gesture and speech may have interacted since we split from our common ancestors with chimpanzees is still debated. Michael Corballis, who wrote From Hand to Mouth: The Origins of Language, has suggested that quite complicated manual, and possibly facial, gesture may have preceded speech by a significant margin, arising two million years ago when the brains of our ancestors underwent a dramatic burst in size. The transition to independent speech from this gesture language would have occurred gradually as a result of its many benefits, such as communication over long distances and the ability to use hands for other tasks, before the final shift to autonomous spoken language. Other researchers stress how integral gesture is to speech today, arguing that even as the balance of speech and gesture may have shifted within human communication, it is unlikely that gesture would have evolved first without any form of speech. David McNeill, head of the well-known McNeill Laboratory Center for Gesture and Speech Research at the University of Chicago, and colleagues propose that from the very beginning it was the combination of speech and gesture that was selected in evolution. What about the other side of the coin—what about speech? It is not as ancient as gesture, but when did it evolve? And how closely related is speech to the vocal communication of other animals?

  8. You Have Speech

  Even though more research has been conducted on primate vocalizations than on primate gesture, it has been considerably less productive. Vocalization in nonhuman animals is much less flexible than gesture. Most vocalizations, like alarm calls, seem to be instinctive and specific to the species that produces them. Many kinds of animals that are raised in isolation or fostered by another species still grow up to produce the calls of their own kind. Researchers at the Neurosciences Institute in San Diego transplanted brain tissue from the Japanese quail to the domestic chicken; the resulting birds, called chimeras, spontaneously produced some quail calls as they matured.1 And unlike human talkers, vocalizing animals seem to be pretty indifferent to their listeners. Vervets, for example, typically produce alarm calls whether there are other monkeys around or not. Even though we still have a lot to learn about calls in the wild, it appears that there are relatively few novel calls in ape species. What’s more, apes don’t seem to make individually distinctive calls, even though other monkeys—which are more distantly related to us—do.

  One of the biggest differences between ape gesture and vocalizing is that many communicative gestures appear to be voluntary and intentional in a way that sound is not. Still, the involuntary nature of animal vocalizations has been somewhat exaggerated. It is said, for example, that when apes make a sound it is always an emotional response and not really generated by choice (in contrast with gesture, which is demonstrably voluntary). In recent years, this position has had to shift to accommodate some interesting findings about the rudiments of control in the vocal domain. Evidence exists, for example, that chimps can suppress calls in dangerous situations where a loud noise would draw attention to them. Some orangutans make kissing sounds when they bed down for the night. This kissing is not instinctive; it’s volitional—one of those cultural traditions that distinguish groups within a species from one another.

  In a recent experiment Katie Slocombe and Klaus Zuberbühler (who earlier demonstrated the ability of zoo chimpanzees to distinguish between types of food with wordlike calls) found that wild chimpanzees seem to adjust their screams based on the role they play in a fight. The researchers looked at two different types of screams in the wild chimpanzees of the Budongo Forest in Uganda. In a conflict situation, the animals typically produce a victim scream, in which the pitch is very consistent, and an aggressor scream, where the pitch varies, with a fall at the end. Other chimps appear to use this information, said Slocombe. The researcher witnessed one exchange in which a young male was harassing a female chimp that was giving loud victim screams in response. At one point, said Slocombe, the female had clearly had enough and began instead to make aggressor screams back at the young male. She was then joined by another female in retaliating against the male. The second female had been out of sight of the fight, so she must have used the information in the first female’s scream to make her decision. “Normally,” said Slocombe, “chimpanzees will see parts of the fight, and therefore it is impossible to tell if they are attending to the information in the screams or just what they see.”

  Slocombe was interested in establishing whether any particular information about a given situation was reliably communicated by the chimpanzee screams. She recorded examples of victim screams and noted the circumstances in which they occurred. An analysis of her recordings showed that it was possible to distinguish from the screams alone between high-risk situations and low-risk ones. In the first case, the screams tended to be long and high-pitched, whereas in low-risk situations the screams were shorter and lower in pitch.

  There are other intriguing connections between the way we use our mouths and the way other apes do. Researchers have noted a peculiar feature of gesture that appears to be shared between humans and chimpanzees. Imagine a child learning how to write, his hand determinedly grasping the pencil and his tongue sticking out of the side of his mouth. Or visualize a seamstress biting her lips as she sews a small thread. Such unconscious mouth movements often accompany fine hand movement in humans. Of course, mouth and hand movements co-occur with speech and gesture, but in this case it seems that the mouth movement follows the hands (not the other way around). Experiments have shown that fine motor manipulation of objects by chimps is often accompanied by sympathetic mouth movements. The finer the hand movements are, the more chimps seem to move their mouths. David Leavens suggests that the basic connection between mouth and hand in primates could date back at least fourteen million years, to the common ancestor of human and orangutan.

  Despite such new insights into the utterances of other apes, a vast gap remains between the apparent vocal abilities of all primates and the speech abilities of human beings. Speech starts simply enough with air in the lungs. The air is forcefully expelled in an exhalation, and it makes sound because of the parts of the body it blows over and through—the vibrating vocal cords, the flapping tongue, and the throat and mouth, which rapidly open and close in an odd, yapping munch. It’s easy to underestimate the athletic precision employed by the many muscles of the face, tongue, and throat in orchestrating speech. When you talk, your face has more moves than LeBron James.

  It takes at least ten years for a child to learn to coordinate lips, tongue, mouth, and breath with the exacting fine motor control that adults use when they talk. To get an idea of the continuous and complicated changes your vocal tract goes through in the creation of speech, read the next paragraph silently, letting your mouth move but making no sound—just feel the process.

  What’s amazing about speech is that when you’re on the receiving end, listening to the noise that comes out of people’s mouths, you instantaneously hear meaningful language. Yet speech is just sound, a semicontinuous buzz that fluctuates rapidly and regularly. Frequencies rise and fall, harmonics within the frequencies change their relationships to one another, air turbulence increases and dies away. It gets loud, and then it gets quiet.

 
