The Gap


by Thomas Suddendorf


  Fortunately, language is open-ended, even though the building blocks of language are limited. How can this be? Languages are based on finite sets of arbitrary units, symbols such as sounds and words. Grammar rules govern how these units are combined and recombined to generate innumerable expressions. A phoneme is the smallest unit of speech that can change meaning, and the English language has some forty-four of them. The difference between the words “car” and “bar” is one phoneme. Across all human languages there are only about 150 different sounds in use. Some languages, such as the click languages of the Bushmen of the Kalahari, have over a hundred phonemes. Others, like Maori in New Zealand, have little more than a dozen. In any language, phonemes can be combined, according to rules from a branch of grammar known as phonology, to express any meaning. Languages with fewer phonemes need to employ more repetition to create new words (in Maori there are words like “whakawhanaungatanga”), but there is no limit to the meaning that can be expressed in any language.

  The smallest units of meaning are called morphemes, and they are the building blocks of words. They include stems (e.g., joy, man), prefixes (e.g., after-, anti-), and suffixes (e.g., -able, -ful), which are combined into words (e.g., joyful). There are also functional morphemes, known as inflections, that have a grammatical function but little meaning in themselves. For instance, ending a noun with an s in English indicates plural, and ending a verb with an ed indicates past tense. English has few inflections, but other languages use them a lot. The grammar rules that govern all this are called morphology.

  Finally, syntax rules govern how we combine these words into phrases and sentences. You may remember some of them from school (or you may remember that you have forgotten them). Even if you are not able to explain them, you still know when they violated being are—unless the order of the last three words seems correct to you. We use these rules to produce and decode novel sequences generated from the same limited set of units. For instance, instead of having millions of words for every possible concept, we have thousands of words that can be combined. Instead of having single words for “big table” and for “small table,” we have words for concepts like “table” and attributes like “big” and “small” that we can then also use in conjunction with other words. Perhaps the best illustration of the generative power of human language is the quest for the longest ever sentence. Whatever monumental sentence you might produce, one can always add a relative clause and extend it further. You could add at the start: “You think the longest sentence is. . . .” Or you may go further and add: “I am not convinced by it, but you think the longest sentence is. . . .” Which may draw the retort: “But I insist it is true that, although you are not convinced by it, the longest sentence really is. . . .” There is no end to the possibilities; the generative nature of language allows us to continually expand.

  The same is true of the search for the highest number. I remember as a child arguing about the concept of infinity with a playmate who, not easily swayed, plainly countered that he knew a higher number: infinity plus one. Though technically incorrect, the reply nicely illustrates the mechanism that gets us to produce ever-higher numbers. The trick is another variant of nested thinking called “recursion.” In mathematics, a formula is said to be recursive when the next term in a sequence is based on a preceding term. So in counting, our ten Arabic numerals are combined and recombined on the basis of a simple set of recursive rules that allow us to continue building larger terms (0, 1, 2 . . . 9, 10, 11, 12 . . . 99, 100, 101 . . .). There is no natural end to counting. Nonetheless, we can reason about infinity (typically given this symbol: ∞). You can create endless other sequences with different kinds of recursive rules. The following series (1, 1, 2, 3, 5, 8, 13, 21 . . . ), for instance, is determined by a recursive rule that states that each new number is the sum of the previous two (Fₙ = Fₙ₋₁ + Fₙ₋₂).8 Recursion is a procedure in which output and input are linked, creating open-ended loops. It enables us to generate novel combinations from finite resources.
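
  To make the rule concrete, here is a minimal sketch in Python (my illustration, not the author’s): the function calls itself, so each new output is computed from the outputs for earlier inputs, which is exactly the loop between output and input the paragraph describes.

```python
# A minimal sketch of the recursive rule above: each new term is the
# sum of the previous two, F(n) = F(n-1) + F(n-2).

def fib(n: int) -> int:
    """Return the n-th term of the series 1, 1, 2, 3, 5, 8, 13, 21, ..."""
    if n < 2:
        return 1  # base cases: the first two terms are both 1
    # the next output is built from earlier outputs: recursion in action
    return fib(n - 1) + fib(n - 2)

print([fib(n) for n in range(8)])  # [1, 1, 2, 3, 5, 8, 13, 21]
```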

  Recursion is considered a key property of grammar in language. A relative clause can be defined as a relative clause plus an optional further relative clause. Therefore, relative clauses can be strung together, or be embedded, such as this one, virtually indefinitely (though practically speaking, there are limits to what one can follow). Grammar rules allow us to point back to previous parts, and these can be merged into bigger structures. For instance, in the sentence “The monkey I watched fighting by the lake tried to steal my purse,” we can relate the stealing back to the monkey after having conveyed other information such as the primate’s fight. Phrases and sentences can be embedded into larger narratives. Language is thus in principle open-ended, and we can construct communications of whatever complexity is required. According to the most influential psycholinguist of the previous century, Noam Chomsky, this generative grammar is a human universal that underlies all languages.9 Recursion, he and his colleagues argue, defines the language faculty in its narrowest sense.
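
  The same self-reference can be sketched for grammar (a toy illustration of mine, not an example from the book): a single rule that refers to its own output, clause → clause plus relative clause, generates arbitrarily deep embeddings from a finite vocabulary.

```python
# Toy sketch of recursion in grammar: a noun phrase may contain a further
# relative clause, so the rule can be applied to its own output indefinitely.

def noun_phrase(depth: int) -> str:
    """Build a noun phrase containing `depth` embedded relative clauses."""
    if depth == 0:
        return "the monkey"
    # the rule points back to itself: phrase -> phrase + relative clause
    return noun_phrase(depth - 1) + " that watched the monkey"

for d in range(3):
    print(noun_phrase(d) + " stole my purse.")
# the monkey stole my purse.
# the monkey that watched the monkey stole my purse.
# the monkey that watched the monkey that watched the monkey stole my purse.
```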

  Chomsky’s original ideas contributed enormously to the start of the so-called cognitive revolution in psychology and the decline of radical behaviorism. Rather than language being the result of general associative learning rules, as contended by behaviorists such as B. F. Skinner,10 Chomsky argued that humans are innately predisposed to develop language. Numerous lines of evidence support this claim. Children acquire language rules effortlessly and without explicit instruction. They are not predisposed to learn a particular language—a Japanese infant brought up in an Italian household will become fluent in Italian and vice versa—but they are able to distill the rules that govern their linguistic environment. They can then apply these rules in entirely new contexts. For instance, my son, Timo, when two and a half years of age, spoke confidently of one shoe and two shoes, and then equally confidently about one foot and two foots. Even though I am reasonably confident he never heard anyone say “foots” before, he generalized the rule that is typically employed when generating the plural in English. Exceptions to rules, as we all have to find out the hard way, must be learned individually.

  Most children acquire language in a similar way regardless of differences in intelligence, schooling, and culture. You probably started pronouncing, that is, babbling, the phonemes of your environment by about eight months. Though young infants can distinguish phonetic contrasts of any language, they quickly zoom in on the sounds that make up their language. By the end of the first year, you produced your first words. My son Timo’s first word was “cheers,” and he uttered it with great enthusiasm as he demanded a clinking of cups. By the end of the second year, you accelerated your word acquisition dramatically, learning about one word every two hours. At around the same time, you began to string the first two-word expressions together. Recursive syntax, then, develops over the next two years. Of course, every child requires linguistic input—an environment of people who use a language—as the tragic case of the girl Genie illustrates. Having been neglected and not spoken to for most of her childhood, she failed to acquire normal fluent language. There thus appears to be a critical period of language acquisition during which children learn the language of their group.

  Such a period becomes evident when we try to learn a second language. A young child can quite comfortably learn two or three languages. My brother spoke English to his children, and his wife spoke German, while everyone else was speaking Dutch, since they were living in Holland. Both children acquired all three languages effortlessly, until they moved away from the Netherlands and dropped back to two languages. Bizarrely, in most countries a second language becomes part of the school curriculum in fifth grade or higher—approximately the age at which effortless language acquisition stops and the hard work begins. Learning a second language after puberty usually results in an accent that is virtually impossible to overcome. Alas, I will retain my German accent no matter how hard I try and how long I live in English-speaking countries. If language were acquired simply through general associative learning, one would not expect to find such a critical period. Furthermore, rules would not be overgeneralized, and there would not be universals in grammar or developmental stages. If Skinner had been right, I should be able to lose that German accent, and we should be able to learn our languages like everything else. Indeed, we should be able to teach versions of our language to other animals who can learn associatively. But Chomsky argued that humans’ language instinct was unique in the animal kingdom. He suggested that a mutation, perhaps a mere one hundred thousand years ago, resulted in the great leap forward, giving our ancestors the precious gift of open-ended language.

  Although the Chomskian views on language reigned supreme for half a century, the last few years have increasingly seen new challenges to his linguistic gospel. For example, some researchers argue there is very little that can be said to be universally true across all human languages. The languages spoken on Earth today differ widely from each other. Some put verbs at the beginning, others at the end; some are built on short words, while others can create long composite words. There are languages that appear not to have basic forms such as prepositions, adjectives, articles, and adverbs. Even recursive syntax, according to Chomsky the core of language narrowly defined, is perhaps not present in all human languages. The Pirahã of the Amazon and the Bininj Gun-Wok of Arnhem Land in Australia are said to lack it. Thus a simple sentence like “They stood watching us fight” can only be expressed successively as “They stood; they were watching us; we were fighting.” These reports require further systematic examination, but they are threatening received wisdom. Linguists are increasingly questioning whether there are any grammatical constructions or markers that are truly universal.

  Part of the problem may be that in the past, language was studied primarily by examining the major written Indo-European languages. Yet most of the world’s languages do not have a writing system. Australia, New Guinea, and Melanesia are home to well over a thousand oral languages. In Vanuatu alone there are over one hundred different languages with an average of two thousand speakers each. Many of these languages are dying out, but they probably provide us with a much better sense of the nature of human language and how it emerged than a small selection of European written languages could.

  Any claim about universals should depend on a careful examination of the breadth of human languages. Recent studies on the diversification of languages have begun to apply computational models from evolutionary biology. For example, one study compared word order in hundreds of languages and concluded that the existing rules of a language are shaped by its cultural history rather than by any innate universal grammar. People gradually developed grammatical rules that suited their needs, and over time the rules are modified or replaced. As is the case with words and their meaning, grammar rules are the product of a history of social interaction. Different descendant groups create diversity by developing in their own idiosyncratic ways, especially if isolated. Much of this cultural evolution seems to follow the same logic of descent with modification as natural selection in biological evolution. Indeed, there is much debate about the relationship between the two (see Chapter 8).

  Even though the current critiques suggest Chomsky is wrong and humans do not have an innate universal grammar, this is not to say that we are not biologically prepared for language in ways that other animals are not. Only a mind capable of nested thinking, of meta-representation and recursion, should be able to establish arbitrary symbol meanings and grammar rules that enable efficient combining and recombining of these finite units into open-ended sentences. It requires a mind that wants to understand and be understood.

  Language is the source of misunderstandings.

  —ANTOINE DE SAINT-EXUPÉRY

  Language is about cooperation. In conversations we exchange information by taking turns being speaker and listener. To have an effective conversation, one needs to keep track of what is known, desired, and believed by the communication partner. There is little point just repeating what the other already knows—though that does not stop everyone. One has to quickly compute what is being said, given the current context, and how one might respond or add to it. Conversations are pragmatic encounters and typically follow some fundamental rules.

  The philosopher Paul Grice identified four maxims we tend to adhere to in our conversations. The first is that we should say what we believe is true. If we all lied all the time, there would be no point in conversing with anyone else. That is not to say that deception and self-deception do not abound (more on that later). The second states that we should provide the appropriate level of information as required by the situation. When asked about the temperature, we usually are not expected to give the degrees to five decimals. The third maxim states that your contribution should be relevant to the goals of the conversation. Digressions to other topics should be avoided, which reminds me of a conversation I had last week where . . . Well, you get the point. The final maxim states that your contributions should be clear and avoid obfuscations. We should customize our talk to what the audience knows and avoid unnecessary jargon. I have thus used phrases such as “distinctly human traits” instead of the technical expression “human autapomorphies.”11 Words should be chosen that are likely to be understood by one’s audience. We have all encountered awkward conversations in which something was not quite right because one or the other of these maxims was violated. (Try counting the violations the next time you hear a politician being interviewed. It might make it a lot more interesting.) Still, we largely adhere to these maxims. To do this, we must take many things into account; especially important is the mind of our conversation partner.

  Minds can be considered representational systems themselves—no obfuscation intended. Consider the book or screen you are looking at. Light hits your retina and triggers nerve cells to fire. This activation is passed on to the back of your brain, where various parallel processes establish the composition of the scene in terms of color, orientation, and so forth. These are then integrated into your visual experience of the writing in front of you. You are forming a mental representation that you can still access, to some extent, if you stop the input, for instance, by closing your eyes. We represent visuals but also sounds, concepts, and beliefs. People differ in their representations of the world, and we must take that into account in our conversations.

  You may, for example, believe that a banana is on the counter in the kitchen. I may know that you represent the world in this way but may also know that you are mistaken (because I have eaten it) and may hence volunteer new information. This, again, requires nested thinking: I (meta)represented your (mis)representation and adjusted my communication accordingly. If someone else knows that I believe that you think your banana is located in the kitchen, yet another layer of complexity is added. This embedding can go on and on, but I will postpone further discussion of mind reading to Chapter 6. Suffice it to say that human conversation involves a lot of reasoning about what the other knows, desires, and believes in order to function as the efficient cooperative information exchange system that it is.

  The content of many of our conversations involves reflections on past events and potential future events. Human language is exquisitely capable of representing meaning that goes beyond the here and now. As we will see in the next chapter, imagining future events can involve the construction of novel scenarios by combining and recombining basic elements (not unlike the combining of words into new sentences). For this and other reasons, Michael Corballis and I have argued that language and our capacity to travel mentally through time evolved hand in hand—although the emergence of content likely preceded the means of communicating that content.

  There is between the whole animal kingdom on the one side, and man, even in his lowest state, on the other, a barrier which no animal has ever crossed, and that barrier is—Language.

  —FRIEDRICH MAX MÜLLER

  In 1873, two years after the publication of Darwin’s The Descent of Man, Friedrich Max Müller, chair of philology at Oxford, posed a counterargument that no other animal had anything remotely like human language and hence there was no sign of gradual evolution, as Darwin’s theory seemed to predict. He raised this issue in defiance of the 1866 ban on discussions of the evolution of language by the Linguistic Society of Paris. In fact, Müller’s argument was perceived to be a serious threat to Darwin’s theory of evolution by natural selection. Recall that in the absence of genetics and a detailed fossil record, the debate centered on evidence of continuity between living species. Thus Müller’s claim about the language barrier not only was relevant to humans’ purported unique position but turned into an early battleground over the very theory of evolution. At the time little was known about primate communication, and Darwin himself wrote: “I wish someone would keep a lot of the most noisy monkeys, half free and study the means of communication.”

  Enter Richard Garner—a young man from Virginia who in the 1890s, with the help of Edison’s newly invented cylinder phonograph, went out to decipher the vocalizations of primates through playback experiments. The idea was to record primate vocalizations in various circumstances and then play them back to other individuals to study their responses. Garner conducted his initial work in zoos and, to wide acclaim, reported early success in identifying the vocabulary of different primate species. He claimed, for instance, to have identified capuchin monkeys’ “words” for things ranging from “food” to “sickness.” He believed that the primate tongues he discovered were limited to names for concrete things but that they were the building blocks from which human abstract notions evolved. Not surprisingly, these conclusions attracted a lot of attention from both the public and the academic world.

 
