The Language Instinct: How the Mind Creates Language


by Steven Pinker


  Similarity is thus the mainspring of a hypothetical general multipurpose learning device, and there is the rub. In the words of the logician Nelson Goodman, similarity is “a pretender, an imposter, a quack.” The problem is that similarity is in the mind of the beholder—just what we are trying to explain—not in the world. Goodman writes:

  Consider baggage at an airport check-in station. The spectator may notice shape, size, color, material, and even make of luggage; the pilot is more concerned with weight, and the passenger with destination and ownership. Which pieces of baggage are more alike than others depends not only upon what properties they share, but upon who makes the comparison, and when. Or suppose we have three glasses, the first two filled with colorless liquid, the third with a bright red liquid. I might be likely to say the first two are more like each other than either is like the third. But it happens that the first glass is filled with water and the third with water colored by a drop of vegetable dye, while the second is filled with hydrochloric acid—and I am thirsty.

  The unavoidable implication is that a sense of “similarity” must be innate. This much is not controversial; it is simple logic. In behaviorist psychology, when a pigeon is rewarded for pecking a key in the presence of a red circle, it pecks more to a red ellipse, or to a pink circle, than it does to a blue square. This “stimulus generalization” happens automatically, without extra training, and it entails an innate “similarity space”; otherwise the animal would generalize to everything or to nothing. These subjective spacings of stimuli are necessary for learning, so they cannot all be learned themselves. Thus even the behaviorist is “cheerfully up to his neck” in innate similarity-determining mechanisms, as the logician W. V. O. Quine pointed out (and his colleague B. F. Skinner did not demur).
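The generalization gradient described here can be pictured as distance in a built-in feature space: stimuli near the trained one evoke strong responses, distant ones weak responses. The sketch below is a toy illustration of that idea, not a model of pigeon cognition; the feature coordinates, the similarity function (an exponential decay in the spirit of Shepard's work on generalization), and the `breadth` parameter are all invented for the example.

```python
import math

# Toy "innate similarity space": each stimulus is a point in a feature space.
# The two dimensions (distance from red in hue, distance from circle in shape)
# and all coordinates are invented for illustration.
STIMULI = {
    "red circle":  (0.0, 0.0),
    "red ellipse": (0.0, 0.3),
    "pink circle": (0.2, 0.0),
    "blue square": (1.0, 1.0),
}

def similarity(a, b, breadth=0.5):
    """Exponentially decaying similarity: near-1 for nearby stimuli,
    near-0 for distant ones. The decay rate is an assumption."""
    return math.exp(-math.dist(a, b) / breadth)

# Train on "red circle"; response to each test stimulus tracks similarity.
trained = STIMULI["red circle"]
responses = {name: similarity(trained, point) for name, point in STIMULI.items()}

# The animal generalizes to similar stimuli, not to everything or to nothing:
assert responses["red ellipse"] > responses["blue square"]
assert responses["pink circle"] > responses["blue square"]
```

The point of the sketch is the one in the text: without some prior metric on the space (here, the coordinates and the decay function), "more to a red ellipse than to a blue square" is undefined, so the metric cannot itself be learned from the rewards.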

  For language acquisition, what is the innate similarity space that allows children to generalize from sentences in their parents’ speech to the “similar” sentences that define the rest of English? Obviously, “Red is more similar to pink than to blue,” or “Circle is more similar to ellipse than to triangle,” is of no help. It must be some kind of mental computation that makes John likes fish similar to Mary eats apples, but not similar to John might fish; otherwise the child would say John might apples. It must make The dog seems sleepy similar to The men seem happy, but not similar to The dog seems sleeping, so that the child will avoid that false leap. That is, the “similarity” guiding the child’s generalization has to be an analysis of speech into nouns and verbs and phrases, computed by the Universal Grammar built into the learning mechanisms. Without such innate computation defining which sentence is similar to which other ones, the child would have no way of correctly generalizing—any sentence is “similar,” in one sense, to nothing but a verbatim repetition of itself, and also “similar,” in another sense, to any random rearrangement of those words, and “similar,” in still other senses, to all kinds of other inappropriate word strings. This is why it is no paradox to say that flexibility in learned behavior requires innate constraints on the mind. The chapter on language acquisition (Chapter 9) offers a good example: the ability of children to generalize to an infinite number of potential sentences depends on their analyzing parental speech using a fixed set of mental categories.

  So learning a grammar from examples requires a special similarity space (defined by Universal Grammar). So does learning the meanings of words from examples, as we saw in Quine’s gavagai problem, in which a word-learner has no logical basis for knowing whether gavagai means “rabbit,” “hopping rabbit,” or “undetached rabbit parts.” What does this say about learning everything else? Here is how Quine reports, and defuses, what he calls the “scandal of induction”:

  It makes one wonder the more about other inductions, where what is sought is a generalization not about our neighbor’s verbal behavior but about the harsh impersonal world. It is reasonable that our [mental] quality space should match our neighbor’s, we being birds of a feather; and so the general trustworthiness of induction in the…learning of words was a put-up job. To trust induction as a way of access to the truths of nature, on the other hand, is to suppose, more nearly, that our quality space matches that of the cosmos…. [But] why does our innate subjective spacing of qualities accord so well with the functionally relevant groupings in nature as to make our inductions tend to come out right? Why should our subjective spacing of qualities have a special purchase on nature and a lien on the future?

  There is some encouragement in Darwin. If people’s innate spacing of qualities is a gene-linked trait, then the spacing that has made for the most successful inductions will have tended to predominate through natural selection. Creatures inveterately wrong in their inductions have a pathetic but praiseworthy tendency to die before reproducing their kind.

  Quite right, though the cosmos is heterogeneous, and thus the computations of similarity that allow our generalizations to harmonize with it must be heterogeneous, too. Qualities that make two utterances equivalent in terms of learning the grammar, such as being composed of the same sequence of nouns and verbs, should not make them equivalent in terms of scaring away animals, such as being a certain loudness. Qualities that should make bits of vegetation equivalent in terms of causing or curing an illness, such as being different parts of a kind of plant, are not the qualities that should make them equivalent for nutrition, like sweetness; equivalent for feeding a fire, like dryness; equivalent for insulating a shelter, like bulk; or equivalent for giving as a gift, like beauty. The qualities that should classify people as potential allies, such as showing signs of affection, should not necessarily classify them as potential mates, such as showing signs of fertility and not being close blood relatives. There must be many similarity spaces, defined by different instincts or modules, allowing those modules to generalize intelligently in some domain of knowledge such as the physical world, the biological world, or the social world.

  Since innate similarity spaces are inherent to the logic of learning, it is not surprising that human-engineered learning systems in artificial intelligence are always innately designed to exploit the constraints in some domain of knowledge. A computer program intended to learn the rules of baseball is pre-programmed with the assumptions underlying competitive sports, so that it will not interpret players’ motions as a choreographed dance or a religious ritual. A program designed to learn the past tense of English verbs is given only the verb’s sound as its input; a program designed to learn a verb’s dictionary entry is given only its meaning. This requirement is apparent in what the designers do, though not always in what they say. Working within the assumptions of the Standard Social Science Model, the computer scientists often hype their programs as mere demos of powerful general-purpose learning systems. But because no one would be so foolhardy as to try to model the entire human mind, the researchers can take advantage of this allegedly practical limitation. They are free to hand-tailor their demo program to the kind of problem it is charged with solving, and they can be a deus ex machina funneling just the right inputs to the program at just the right time. Which is not a criticism; that’s the way learning systems have to work!

  So what are the modules of the human mind? A common academic parody of Chomsky has him proposing innate modules for bicycling, matching ties with shirts, rebuilding carburetors, and so on. But the slope from language to carburetor repair is not that slippery. We can avoid the skid with some obvious footholds. Using engineering analyses, we can examine what a system would need, in principle, to do the right kind of generalizing for the problem it is solving (for example, in studying how humans perceive shapes, we can ask whether a system that learns to recognize different kinds of furniture can also recognize different faces, or whether it needs special shape analyzers for faces). Using biological anthropology, we can look for evidence that a problem is one that our ancestors had to solve in the environments in which they evolved—so language and face recognition are at least candidates for innate modules, but reading and driving are not. Using data from psychology and ethnography, we can test the following prediction: when children solve problems for which they have mental modules, they should look like geniuses, knowing things they have not been taught; when they solve problems that their minds are not equipped for, it should be a long hard slog. Finally, if a module for some problem is real, neuroscience should discover that the brain tissue computing the problem has some kind of physiological cohesiveness, such as constituting a circuit or subsystem.

  Being a bit foolhardy myself, I will venture a guess as to what kinds of modules, or families of instincts, might eventually pass these tests, aside from language and perception (for justification, I refer you to a recent compendium called The Adapted Mind):

  Intuitive mechanics: knowledge of the motions, forces, and deformations that objects undergo.

  Intuitive biology: understanding of how plants and animals work.

  Number.

  Mental maps for large territories.

  Habitat selection: seeking of safe, information-rich, productive environments, generally savannah-like.

  Danger, including the emotions of fear and caution, phobias for stimuli such as heights, confinement, risky social encounters, and venomous and predatory animals, and a motive to learn the circumstances in which each is harmless.

  Food: what is good to eat.

  Contamination, including the emotion of disgust, reactions to certain things that seem inherently disgusting, and intuitions about contagion and disease.

  Monitoring of current well-being, including the emotions of happiness and sadness, and moods of contentment and restlessness.

  Intuitive psychology: predicting other people’s behavior from their beliefs and desires.

  A mental Rolodex: a database of individuals, with blanks for kinship, status or rank, history of exchange of favors, and inherent skills and strengths, plus criteria that valuate each trait.

  Self-concept: gathering and organizing information about one’s value to other people, and packaging it for others.

  Justice: sense of rights, obligations, and deserts, including the emotions of anger and revenge.

  Kinship, including nepotism and allocations of parenting effort.

  Mating, including feelings of sexual attraction, love, and intentions of fidelity and desertion.

  To see how far standard psychology is from this conception, just turn to the table of contents of any textbook. The chapters will be: Physiological, Learning, Memory, Attention, Thinking, Decision-Making, Intelligence, Motivation, Emotion, Social, Development, Personality, Abnormal. I believe that with the exception of Perception and, of course, Language, not a single curriculum unit in psychology corresponds to a cohesive chunk of the mind. Perhaps this explains the syllabus-shock experienced by Introductory Psychology students. It is like explaining how a car works by first discussing the steel parts, then the aluminum parts, then the red parts, and so on, instead of the electrical system, the transmission, the fuel system, and so on. (Interestingly, textbooks on the brain are more likely to be organized around what I think of as real modules. Mental maps, fear, rage, feeding, maternal behavior, language, and sex are all common sections in neuroscience texts.)

  For some readers, the preceding list will be the final proof that I have lost my mind. An innate module for doing biology? Biology is a recently invented academic discipline. Students struggle through it. The person in the street, and tribes around the world, are fonts of superstition and misinformation. The idea seems only slightly less mad than the innate carburetor repair instinct.

  But recent evidence suggests otherwise; there may be an innate “folk biology” that gives people different basic intuitions about plants and animals than they have about other objects, like man-made artifacts. The study of folk biology is young compared with the study of language, and the idea might be wrong. (Maybe we reason about living things using two modules, one for plants and one for animals. Maybe we use a bigger module, one that embraces other natural kinds like rocks and mountains. Or maybe we use an inappropriate module, like folk psychology.) But the evidence so far is suggestive enough that I can present folk biology as an example of a possible cognitive module other than language, giving you an idea of the kinds of things an instinct-populated mind might contain.

  To begin with, as hard as it may be for a supermarket-jaded city dweller to believe, “stone age” hunter-gatherers are erudite botanists and zoologists. They typically have names for hundreds of wild plant and animal species, and copious knowledge of those species’ life cycles, ecology, and behavior, allowing them to make subtle and sophisticated inferences. They might observe the shape, freshness, and direction of an animal’s tracks, the time of day and year, and the details of the local terrain to predict what kind of animal it is, where it has gone, and how old, hungry, tired, and scared it is likely to be. A flowering plant in the spring might be remembered through the summer and returned to in the fall for its underground tuber. The use of medicinal drugs, recall, is part of the lifestyle of the Universal People.

  What kind of psychology underlies this talent? How does our mental similarity space accord with this part of the cosmos? Plants and animals are special kinds of objects. For a mind to reason intelligently about them, it should treat them differently from rocks, islands, clouds, tools, machines, and money, among other things. Here are four of the basic differences. First, organisms (at least, sexual organisms) belong to populations of interbreeding individuals adapted to an ecological niche; this makes them fall into species with a relatively unified structure and behavior. For example, all robins are more or less alike, but they are different from sparrows. Second, related species descended from a common ancestor by splitting off from a lineage; this makes them fall into non-overlapping, hierarchically included classes. For example, sparrows and robins are alike in being birds, birds and mammals are alike in being vertebrates, vertebrates and insects are alike in being animals. Third, because an organism is a complex, self-preserving system, it is governed by dynamic physiological processes that are lawful even when hidden. For example, the biochemical organization of an organism enables it to grow and move, and is lost when it dies. Fourth, because organisms have separate genotypes and phenotypes, they have a hidden “essence” that is conserved as they grow, change form, and reproduce. For example, a caterpillar, chrysalis, and butterfly are in a crucial sense the same animal.

  Remarkably, people’s unschooled intuition about living things seems to mesh with these core biological facts, including the intuitions of young children who cannot read and have not set foot in a biology lab.

  The anthropologists Brent Berlin and Scott Atran have studied folk taxonomies of flora and fauna. They have found that, universally, people group local plants and animals into kinds that correspond to the genus level in the Linnaean classification system of professional biology (species-genus-family-order-class-phylum-kingdom). Since most locales contain a single species from any genus, these folk categories usually correspond to species as well. People also classify kinds into higher-level life-forms, like tree, grass, moss, quadruped, bird, fish, and insect. Most of the life-form categories of animals coincide with the biologist’s level of class. Folk classifications, like professional biologists’ classifications, are strictly hierarchical: every plant or animal belongs to one and only one genus; every genus belongs to only one life-form; every life-form is either a plant or an animal; plants and animals are living things, and every object is either a living thing or not. All this gives people’s intuitive biological concepts a logical structure that is different from the one that organizes their other concepts, such as human-made artifacts. Whereas people everywhere say that an animal cannot be both fish and fowl, they are perfectly happy with saying, for example, that a wheelchair can be both furniture and vehicle, or that a piano can be both musical instrument and furniture. And this in turn makes reasoning about natural kinds different from reasoning about artifacts. People can deduce that if a trout is a kind of fish and a fish is a kind of animal, then a trout is a kind of animal. But they do not infer that if a car seat is a kind of chair and a chair is a kind of furniture, then a car seat is a kind of furniture.
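The logical contrast here—strict single-parent hierarchies for living kinds versus overlapping categories for artifacts—can be sketched as two different data structures. This is a toy illustration of the structural claim only; the particular kinds and links below are chosen for the example, not drawn from Berlin and Atran's data.

```python
# Folk taxonomy as a tree: each kind has exactly one parent, so "kind of"
# chains compose transitively all the way up.
FOLK_TAXONOMY = {          # child -> its single parent
    "trout": "fish",
    "fish": "animal",
    "robin": "bird",
    "bird": "animal",
}

# Artifact categories overlap: one kind may belong to several categories
# at once, so there is no single hierarchy to climb.
ARTIFACT_KINDS = {         # child -> set of categories it belongs to
    "wheelchair": {"furniture", "vehicle"},
    "piano": {"musical instrument", "furniture"},
}

def is_a(kind, category, taxonomy):
    """Transitive 'kind of' inference up a single-parent hierarchy."""
    while kind in taxonomy:
        kind = taxonomy[kind]
        if kind == category:
            return True
    return False

# In the strict hierarchy, the trout-to-animal deduction goes through:
assert is_a("trout", "animal", FOLK_TAXONOMY)

# An artifact can sit in two categories at once, with no contradiction:
assert ARTIFACT_KINDS["wheelchair"] == {"furniture", "vehicle"}
```

The transitive inference in `is_a` is only valid because the taxonomy is a tree; the overlapping artifact categories support no such chain, which is the structural reason the car-seat deduction fails while the trout deduction succeeds.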

  Special intuitions about living things begin early in life. Recall that the human infant is far from being a bag of reflexes, mewling and puking in the nurse’s arms. Three- to six-month-old infants, well before they can move about or even see very well, know about objects and their possible motions, how they causally impinge on one another, their properties like compressibility, and their number and how it changes with addition and subtraction. The distinction between living and nonliving things is appreciated early, perhaps before the first birthday. The cut initially takes the form of a difference between inanimate objects that move around according to the laws of billiard-ball physics and objects like people and animals that are self-propelled. For example, in an experiment by the psychologist Elizabeth Spelke, a baby is shown a ball rolling behind a screen and another ball emerging from the other side, over and over again to the point of boredom. If the screen is removed and the infant sees the expected hidden event, one ball hitting the other and launching it on its way, the baby’s interest is only momentarily revived; presumably this is what the baby had been imagining all along. But if the screen is removed and the baby sees the magical event of one object stopping dead in its tracks without reaching the second ball, and the second ball taking off mysteriously on its own, the baby stares for much longer. Crucially, infants expect inanimate balls and animate people to move according to different laws. In another scenario, people, not balls, disappeared and appeared from behind the screen. After the screen was removed, the infants showed little surprise when they saw one person stop short and the other move off on its own; they were more surprised by a collision.

 
