The Language Instinct: How the Mind Creates Language


by Steven Pinker


  The idea that the human mind is designed to use abstract variables and data structures used to be, and in some circles still is, a shocking and revolutionary claim, because the structures have no direct counterpart in the child’s experience. Some of the organization of grammar would have to be there from the start, part of the language-learning mechanism that allows children to make sense out of the noises they hear from their parents. The details of syntax have figured prominently in the history of psychology, because they are a case where complexity in the mind is not caused by learning; learning is caused by complexity in the mind. And that was real news.

  Words, Words, Words

  The word glamour comes from the word grammar, and since the Chomskyan revolution the etymology has been fitting. Who could not be dazzled by the creative power of the mental grammar, by its ability to convey an infinite number of thoughts with a finite set of rules? There has been a book on mind and matter called Grammatical Man, and a Nobel Prize lecture comparing the machinery of life to a generative grammar. Chomsky has been interviewed in Rolling Stone and alluded to on Saturday Night Live. In Woody Allen’s story “The Whore of Mensa,” the patron asks, “Suppose I wanted Noam Chomsky explained to me by two girls?” “It’d cost you,” she replies.

  Unlike the mental grammar, the mental dictionary has had no cachet. It seems like nothing more than a humdrum list of words, each transcribed into the head by dull-witted rote memorization. In the preface to his Dictionary, Samuel Johnson wrote:

  It is the fate of those who dwell at the lower employments of life, to be rather driven by the fear of evil, than attracted by the prospect of good; to be exposed to censure, without hope of praise; to be disgraced by miscarriage, or punished for neglect, where success would have been without applause, and diligence without reward.

  Among these unhappy mortals is the writer of dictionaries.

  Johnson’s own dictionary defines lexicographer as “a harmless drudge, that busies himself in tracing the original, and detailing the signification of words.”

  In this chapter we will see that the stereotype is unfair. The world of words is just as wondrous as the world of syntax, or even more so. For not only are people as infinitely creative with words as they are with phrases and sentences, but memorizing individual words demands its own special virtuosity.

  Recall the wug-test, passed by any preschooler: “Here is a wug. Now there are two of them. There are two___.” Before being so challenged, the child has neither heard anyone say, nor been rewarded for saying, the word wugs. Therefore words are not simply retrieved from a mental archive. People must have a mental rule for generating new words from old ones, something like “To form the plural of a noun, add the suffix -s.” The engineering trick behind human language—its being a discrete combinatorial system—is used in at least two different places: sentences and phrases are built out of words by the rules of syntax, and the words themselves are built out of smaller bits by another set of rules, the rules of “morphology.”

  The creative powers of English morphology are pathetic compared to what we find in other languages. The English noun comes in exactly two forms (duck and ducks), the verb in four (quack, quacks, quacked, quacking). In modern Italian and Spanish every verb has about fifty forms; in classical Greek, three hundred and fifty; in Turkish, two million! Many of the languages I have brought up, such as Eskimo, Apache, Hopi, Kivunjo, and American Sign Language, are known for this prodigious ability. How do they do it? Here is an example from Kivunjo, the Bantu language that was said to make English look like checkers compared to chess. The verb “Näïkìmlyìïà,” meaning “He is eating it for her,” is composed of eight parts:

  N-: A marker indicating that the word is the “focus” of that point in the conversation.

  -ä-: A subject agreement marker. It identifies the eater as falling into Class 1 of the sixteen gender classes, “human singular.” (Remember that to a linguist “gender” means kind, not sex.) Other genders embrace nouns that pertain to several humans, thin or extended objects, objects that come in pairs or clusters, the pairs or clusters themselves, instruments, animals, body parts, diminutives (small or cute versions of things), abstract qualities, precise locations, and general locales.

  -ï-: Present tense. Other tenses in Bantu can refer to today, earlier today, yesterday, no earlier than yesterday, yesterday or earlier, in the remote past, habitually, ongoing, consecutively, hypothetically, in the future, at an indeterminate time, not yet, and sometimes.

  -kì-: An object agreement marker, in this case indicating that the thing eaten falls into gender Class 7.

  -m-: A benefactive marker, indicating for whose benefit the action is taking place, in this case a member of gender Class 1.

  -lyì-: The verb, “to eat.”

  -ï-: An “applicative” marker, indicating that the verb’s cast of players has been augmented by one additional role, in this case the benefactive. (As an analogy, imagine that in English we had to add a suffix to the verb bake when it is used in I baked her a cake as opposed to the usual I baked a cake.)

  -à: A final vowel, which can indicate indicative versus subjunctive mood.

  If you multiply out the number of possible combinations of the seven prefixes and suffixes, the product is about half a million, and that is the number of possible forms per verb in the language. In effect, Kivunjo and languages like it are building an entire sentence inside a single complex word, the verb.
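The multiplication behind that half-million figure can be sketched directly. The sixteen gender classes and the fourteen tense distinctions come from the text; the other per-slot counts below are hypothetical stand-ins, chosen only to show how seven independent choices multiply:

```python
# Sketch of the combinatorial arithmetic behind Kivunjo verb forms.
# Counts marked "hypothetical" are illustrative stand-ins, not the
# actual Kivunjo inventory; the point is that independent choices in
# each of seven affix slots multiply together.
slot_options = {
    "focus": 2,               # hypothetical: focus marker present or absent
    "subject agreement": 16,  # the sixteen gender classes (from the text)
    "tense": 14,              # roughly the tense distinctions listed
    "object agreement": 16,   # object marker, one per gender class
    "benefactive": 17,        # hypothetical: sixteen classes, or absent
    "applicative": 2,         # hypothetical: present or absent
    "final vowel": 2,         # indicative vs. subjunctive
}

total = 1
for slot, n in slot_options.items():
    total *= n

print(total)  # 487424, on the order of half a million forms per verb
```

Under these illustrative counts the product comes to roughly half a million, matching the order of magnitude in the text.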

  But I have been a bit unfair to English. English is genuinely crude in its “inflectional” morphology, where one modifies a word to fit the sentence, like marking a noun for the plural with -s or a verb for past tense with -ed. But English holds its own in “derivational” morphology, where one creates a new word out of an old one. For example, the suffix -able, as in learnable, teachable, and huggable, converts a verb meaning “to do X” into an adjective meaning “capable of having X done to it.” Most people are surprised to learn how many derivational suffixes there are in English. Here are the more common ones:

  -able

  -age

  -al

  -an

  -ant

  -ance

  -ary

  -ate

  -ed

  -en

  -er

  -ful

  -hood

  -ic

  -ify

  -ion

  -ish

  -ism

  -ist

  -ity

  -ive

  -ize

  -ly

  -ment

  -ness

  -ory

  -ous

  -y

  In addition, English is free and easy with “compounding,” which glues two words together to form a new one, like toothbrush and mouse-eater. Thanks to these processes, the number of possible words, even in morphologically impoverished English, is immense. The computational linguist Richard Sproat compiled all the distinct words used in the forty-four million words of text from Associated Press news stories beginning in mid-February 1988. Up through December 30, the list contained three hundred thousand distinct word forms, about as many as in a good unabridged dictionary. You might guess that this would exhaust the English words that would ever appear in such stories. But when Sproat looked at what came over the wire on December 31, he found no fewer than thirty-five new forms, including instrumenting, counterprograms, armhole, part-Vulcan, fuzzier, groveled, boulderlike, mega-lizard, traumatological, and ex-critters.

  Even more impressive, the output of one morphological rule can be the input to another, or to itself: one can talk about the unmicrowaveability of some French fries or a toothbrush-holder fastener box in which to keep one’s toothbrush-holder fasteners. This makes the number of possible words in a language bigger than immense; like the number of sentences, it is infinite. Putting aside fanciful coinages concocted for immortality in Guinness, a candidate for the longest word to date in English might be floccinaucinihilipilification, defined in the Oxford English Dictionary as “the categorizing of something as worthless or trivial.” But that is a record meant to be broken:

  floccinaucinihilipilificational: pertaining to the categorizing of something as worthless or trivial

  floccinaucinihilipilificationalize: to cause something to pertain to the categorizing of something as worthless or trivial

  floccinaucinihilipilificationalization: the act of causing something to pertain to the categorizing of something as worthless or trivial

  floccinaucinihilipilificationalizational: pertaining to the act of causing something to pertain to the categorizing of something as worthless or trivial

  floccinaucinihilipilificationalizationalize: to cause something to pertain to the act of causing something to pertain…
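The pattern in this list, each rule’s output feeding the next, can be sketched as a tiny generator. The suffix spellings (-al, -ize, -ation) come from the text; the dropping of the final -e before -ation is my assumption about the spelling adjustment:

```python
# A toy generator for the ever-longer words above: each rule's output
# is the input to the next, so the process never has to stop.
def grow(word):
    """Yield word+al, then +ize, then +ation, cycling forever."""
    while True:
        word = word + "al"          # noun -> adjective (-al)
        yield word
        word = word + "ize"         # adjective -> verb (-ize)
        yield word
        word = word[:-1] + "ation"  # verb -> noun (-ation, dropping -e)
        yield word

g = grow("floccinaucinihilipilification")
print(next(g))  # floccinaucinihilipilificational
print(next(g))  # floccinaucinihilipilificationalize
print(next(g))  # floccinaucinihilipilificationalization
```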

  Or, if you suffer from sesquipedaliaphobia, you can think of your great-grandmother, your great-great-grandmother, your great-great-great-grandmother, and so on, limited only in practice by the number of generations since Eve.

  What’s more, words, like sentences, are too delicately layered to be generated by a chaining device (a system that selects an item from one list, then moves on to some other list, then to another). When Ronald Reagan proposed the Strategic Defense Initiative, popularly known as Star Wars, he imagined a future in which an incoming Soviet missile would be shot down by an anti-missile missile. But critics pointed out that the Soviet Union could counterattack with an anti-anti-missile-missile missile. No problem, said his MIT-educated engineers; we’ll just build an anti-anti-anti-missile-missile-missile missile. These high-tech weapons need a high-tech grammar—something that can keep track of all the anti’s at the beginning of the word so that it can complete the word with an equal number of missile’s, plus one, at the end. A word structure grammar (a phrase structure grammar for words) that can embed a word in between an anti- and its missile can achieve these objectives; a chaining device cannot, because it has forgotten the pieces that it laid down at the beginning of the long word by the time it gets to the end.
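A minimal sketch of the difference: the recursive function below stands in for a word structure grammar, recognizing a modifier built as anti- plus an embedded modifier plus -missile, so the anti’s and missile’s always come out balanced. A pure chaining device has no such memory of its own depth. The function name and the fully hyphenated spelling are my own conveniences:

```python
def is_weapon_modifier(s):
    """Recognize anti-X-missile where X is itself a weapon modifier,
    bottoming out at plain 'anti-missile'. The recursion remembers how
    many anti- prefixes it has peeled off, so the matching -missile
    count is enforced for free; a left-to-right chaining device, with
    no memory of what it laid down earlier, cannot do this."""
    if s == "anti-missile":
        return True
    if s.startswith("anti-") and s.endswith("-missile"):
        return is_weapon_modifier(s[len("anti-"):-len("-missile")])
    return False

print(is_weapon_modifier("anti-anti-missile-missile"))        # True
print(is_weapon_modifier("anti-anti-missile"))                # False: counts unbalanced
```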

  Like syntax, morphology is a cleverly designed system, and many of the seeming oddities of words are predictable products of its internal logic. Words have a delicate anatomy consisting of pieces, called morphemes, that fit together in certain ways. The word structure system is an extension of the X-bar phrase structure system, in which big nounish things are built out of smaller nounish things, smaller nounish things are built out of still smaller nounish things, and so on. The biggest phrase involving nouns is the noun phrase; a noun phrase contains an N-bar; an N-bar contains a noun—the word. Jumping from syntax to morphology, we simply continue the dissection, analyzing the word into smaller and smaller nounish pieces.

  Here is the structure of the word dogs, shown in bracket notation: [N [Nstem dog] [Ninflection -s]]

  The top of this mini-tree is “N” for “noun”; this allows the docking maneuver in which the whole word can be plugged into the noun slot inside any noun phrase. Down inside the word, we have two parts: the bare word form dog, usually called the stem, and the plural inflection -s. The rule responsible for inflected words (the rule of wug-test fame) is simply

  N → Nstem Ninflection

  “A noun can consist of a noun stem followed by a noun inflection.”

  The rule nicely interfaces with the mental dictionary: dog would be listed as a noun stem meaning “dog,” and -s would be listed as a noun inflection meaning “plural of.”

  This rule is the simplest, most stripped-down example of anything we would want to call a rule of grammar. In my laboratory we use it as an easily studied instance of mental grammar, allowing us to document in great detail the psychology of linguistic rules from infancy to old age in both normal and neurologically impaired people, in much the same way that biologists focus on the fruit fly Drosophila to study the machinery of genes. Though simple, the rule that glues an inflection to a stem is a surprisingly powerful computational operation. That is because it recognizes an abstract mental symbol, like “noun stem,” instead of being associated with a particular list of words or a particular list of sounds or a particular list of meanings. We can use the rule to inflect any item in the mental dictionary that lists “noun stem” in its entry, without caring what the word means; we can convert not only dog to dogs but also hour to hours and justification to justifications. Likewise, the rule allows us to form plurals without caring what the word sounds like; we pluralize unusual-sounding words as in the Gorbachevs, the Bachs, and the Mao Zedongs. For the same reason, the rule is perfectly happy applying to brand-new nouns, like faxes, dweebs, wugs, and zots.
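A minimal sketch of how such a rule might interface with the dictionary, assuming a plain table of category labels; real plural allomorphy (the -es of faxes, for instance) is glossed over here, just as it is in the rule above:

```python
# Sketch of the rule N -> Nstem Ninflection consulting a mental
# dictionary. Entries carry only a category label; the rule never
# inspects sound or meaning, which is why a brand-new stem like 'wug'
# inflects as easily as 'dog'.
lexicon = {
    "dog": "noun stem",
    "hour": "noun stem",
    "justification": "noun stem",
    "wug": "noun stem",   # a nonce word works as soon as it's labeled
}

def pluralize(word):
    """Apply the -s inflection to anything labeled 'noun stem'."""
    if lexicon.get(word) != "noun stem":
        raise ValueError(f"{word!r} is not listed as a noun stem")
    return word + "s"     # ignoring spelling/sound variants of the suffix

print(pluralize("wug"))  # wugs
```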

  We apply the rule so effortlessly that perhaps the only way I can drum up some admiration for what it accomplishes is to compare humans with a certain kind of computer program that many computer scientists tout as the wave of the future. These programs, called “artificial neural networks,” do not apply a rule like the one I have just shown you. An artificial neural network works by analogy, converting wug to wugged because it is vaguely similar to hug—hugged, walk-walked, and thousands of other verbs the network has been trained to recognize. But when the network is faced with a new verb that is unlike anything it has previously been trained on, it often mangles it, because the network does not have an abstract, all-embracing category “verb stem” to fall back on and add an affix to. Here are some comparisons between what people typically do and what artificial neural networks typically do when given a wug-test:

  VERB: mail

  TYPICAL PAST-TENSE FORM GIVEN BY PEOPLE: mailed

  TYPICAL PAST-TENSE FORM GIVEN BY NEURAL NETWORKS: membled

  VERB: conflict

  TYPICAL PAST-TENSE FORM GIVEN BY PEOPLE: conflicted

  TYPICAL PAST-TENSE FORM GIVEN BY NEURAL NETWORKS: conflafted

  VERB: wink

  TYPICAL PAST-TENSE FORM GIVEN BY PEOPLE: winked

  TYPICAL PAST-TENSE FORM GIVEN BY NEURAL NETWORKS: wok

  VERB: quiver

  TYPICAL PAST-TENSE FORM GIVEN BY PEOPLE: quivered

  TYPICAL PAST-TENSE FORM GIVEN BY NEURAL NETWORKS: quess

  VERB: satisfy

  TYPICAL PAST-TENSE FORM GIVEN BY PEOPLE: satisfied

  TYPICAL PAST-TENSE FORM GIVEN BY NEURAL NETWORKS: sedderded

  VERB: smairf

  TYPICAL PAST-TENSE FORM GIVEN BY PEOPLE: smairfed

  TYPICAL PAST-TENSE FORM GIVEN BY NEURAL NETWORKS: sprurice

  VERB: trilb

  TYPICAL PAST-TENSE FORM GIVEN BY PEOPLE: trilbed

  TYPICAL PAST-TENSE FORM GIVEN BY NEURAL NETWORKS: treelilt

  VERB: smeej

  TYPICAL PAST-TENSE FORM GIVEN BY PEOPLE: smeejed

  TYPICAL PAST-TENSE FORM GIVEN BY NEURAL NETWORKS: leefloag

  VERB: frilg

  TYPICAL PAST-TENSE FORM GIVEN BY PEOPLE: frilged

  TYPICAL PAST-TENSE FORM GIVEN BY NEURAL NETWORKS: freezled

  Stems can be built out of parts, too, in a second, deeper level of word assembly. In compounds like Yugoslavia report, sushi-lover, broccoli-green, and toothbrush, two stems are joined together to form a new stem, by the rule

  Nstem → Nstem Nstem

  “A noun stem can consist of a noun stem followed by another noun stem.”

  In English, a compound is often spelled with a hyphen or by running its two words together, but it can also be spelled with a space between the two components as if they were still separate words. This confused your grammar teacher into telling you that in Yugoslavia report, “Yugoslavia” is an adjective. To see that this can’t be right, just try comparing it with a real adjective like interesting. You can say This report seems interesting but not This report seems Yugoslavia! There is a simple way to tell whether something is a compound word or a phrase: compounds generally have stress on the first word, phrases on the second. A dark róom (phrase) is any room that is dark, but a dárk room (compound word) is where photographers work, and a darkroom can be lit when the photographer is done. A black bóard (phrase) is necessarily a board that is black, but some bláckboards (compound word) are green or even white. Without pronunciation or punctuation as a guide, some word strings can be read either as a phrase or as a compound, like the following headlines:

  Squad Helps Dog Bite Victim

  Man Eating Piranha Mistakenly Sold as Pet Fish

  Juvenile Court to Try Shooting Defendant

  New stems can also be formed out of old ones by adding affixes (prefixes and suffixes), like the -al, -ize, and -ation I used recursively to get longer and longer words ad infinitum (as in sensationalizationalization). For example, -able combines with any verb to create an adjective, as in crunch—crunchable. The suffix -er converts any verb to a noun, as in crunch—cruncher, and the suffix -ness converts any adjective into a noun, as in crunchy—crunchiness.

  The rule forming them is

  Astem → Stem Astemaffix

  “An adjective stem can consist of a stem joined to a suffix.”

  and a suffix like -able would have a mental dictionary entry like the following:

  -able:

  adjective stem affix

  means “capable of being X’d”

  attach me to a verb stem

  Like inflections, stem affixes are promiscuous, mating with any stem that has the right category label, and so we have crunchable, scrunchable, shmooshable, wuggable, and so on. Their meanings are predictable: capable of being crunched, capable of being scrunched, capable of being shmooshed, even capable of being “wugged,” whatever wug means. (Though I can think of an exception: in the sentence I asked him what he thought of my review of his book, and his response was unprintable, the word unprintable means something much more specific than “incapable of being printed.”)
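The dictionary entry for -able sketched above can be written out as data plus a single category check; the field names here are my own invention, and the “capable of being X’d” gloss is simplified to tacking on -ed:

```python
# Sketch of the mental-dictionary entry for -able: a suffix that
# demands a verb stem and yields an adjective stem. Attachment is
# pure category matching, which is why -able mates with any verb,
# known or nonce.
suffix_able = {
    "form": "able",
    "attaches_to": "verb stem",
    "makes": "adjective stem",
    "meaning": lambda verb: f"capable of being {verb}ed",
}

def attach(suffix, stem, category):
    """Join a suffix to a stem if the stem's category label matches."""
    if category != suffix["attaches_to"]:
        raise ValueError(f"-{suffix['form']} needs a {suffix['attaches_to']}")
    return stem + suffix["form"], suffix["makes"], suffix["meaning"](stem)

word, cat, gloss = attach(suffix_able, "shmoosh", "verb stem")
print(word, cat, gloss)  # shmooshable adjective stem capable of being shmooshed
```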

 
