The Ascent of Babel: An Exploration of Language, Mind, and Understanding


by Gerry T. M. Altmann


  Resolving these issues is difficult, and perhaps even impossible. It would be very hard to prove that a certain kind of knowledge was not innate. We would have to find a period when that knowledge was absent, perhaps by finding that the baby or infant lacked some ability that it should have if the knowledge were present. The problem then would be to be sure that our failure to find the ability did not come about simply because the experimental tools were not sensitive enough to detect what the baby was or was not able to do. And even if we believed the tools sensitive enough, the ability might be one that developed only after the infant had matured in some way (perhaps related to the process of synaptogenesis).

  A further criticism that has been levelled against theories proposing an innate endowment is that innateness is perhaps too easy an explanation to fall back on when no other way has been found to explain the phenomena being studied-perhaps there is an alternative explanation, but it is yet to be found. It is unclear whether any rebuttal to criticisms such as this is really possible. Consequently, an important part of the debate has been concerned less with what may be innate, and more with what it is possible to learn from the language itself.

  The rhythms of grammar

  The innateness hypothesis has been around for as long as linguistics, in its modern form, has existed. And ever since linguists first suggested that the language input did not contain sufficient information by which to learn the language, there has been a search for alternative (non-innate) means by which the developing child could acquire the relevant knowledge. In the mid-1980s, James Morgan at the University of Minnesota suggested that the problem with existing theories of language learnability was that they assumed that children were exposed to sequence after sequence of grammatical sentences. On this view, each sentence (no matter how simple or complex) is simply an unstructured sequence of words, with the task being to somehow project structure onto this sequence for the purposes of interpreting the sentence-hence the problems outlined earlier. Morgan proposed that the input sequences are not unstructured, but in fact contain a number of clues (or, more properly, cues) to their internal structure. If so, part of the problem for the learning device would be solved; it would be presented with the very structure that (on the alternative accounts) it should be trying to find. In particular, Morgan, and others, proposed that one cue to the internal structure of sentences is their prosodic structure.

  We have already seen that newborn infants are sensitive to the prosodic structure of their language, and that this may explain the sensitivity they demonstrate to syllabic structure. Studies have shown, in addition, that infants as young as just four and a half months are sensitive to the prosodic patterns that accompany some of the boundaries between the major constituents of a sentence. These patterns include durational changes (slight lengthening of vowels before such a boundary) and slight changes in pitch before and after a boundary (generally, a fall in pitch before, and a rise after). Morgan suggested that perhaps these cues to boundary location could help the child identify the internal structure of each sentence. Knowing where the boundaries were would be a first step to making sense of what could be found between them. For example, if `the girl' in `The girl knew the language was beautiful' was prosodically distinguished from the rest of the sentence, and so was `the language', then the child could learn fairly easily the relative positions of determiner-type words (`the') and noun-type words (`girl', `language'). It would not have to worry about the positions of the determiner words relative to any other words in the sentence. The child could also learn (if the appropriate boundary cues were present) about the relative positioning of phrases, such as `the language', and `was beautiful'.

  As a test of this hypothesis, Morgan and his colleagues created a small artificial language composed of sequences of meaningless syllables ordered according to a set of rules that they made up and which specified which orders of the syllables were grammatical and which were ungrammatical-so `jix dup kav sog pel' might be grammatical, whereas `jix kav pel dup sog' might not be. This is little different from saying that `The girl speaks many languages' is grammatical and `The speaks languages girl many' is not. And in much the same way as `The girl speaks many languages' can be broken down according to the grammar of English into its constituents as `( the girl )( speaks ( many languages ))', so `jix dup kav sog pel' could be broken down, according to its artificial grammar, as `( jix dup )( kav ( sog pel ))'. Morgan arranged for someone to speak these sequences either in monotone (in much the same way as one might simply list a sequence of syllables), or with the natural prosody (rhythm and intonation) that would convey the constituent groupings (imagine the difference between a monotone version of `The girl speaks many languages', and one with exaggerated intonation). Adults listened to one or other of these two spoken versions of the language, and were shown, simultaneously, visual symbols that represented the `objects' that the words in this language referred to. Morgan found that the language was learned more accurately when heard with its natural intonation. So prosody can, and according to this theory does, aid the learning process.
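
The logic of such an experiment can be sketched computationally. The sketch below is illustrative, not Morgan's actual grammar: only the sequence `jix dup kav sog pel' comes from the text, and the other syllables and the syllable classes are invented. A tiny phrase-structure grammar of the form ( A B )( V ( C D ) ) enumerates its grammatical orderings, and a sequence counts as grammatical only if the grammar generates it.

```python
from itertools import product

# A toy grammar in the spirit of Morgan's artificial language.
# The syllable classes and most syllables are hypothetical; only
# "jix dup kav sog pel" appears in the text.
A_WORDS = ["jix", "neb"]   # first-position syllables (hypothetical class)
B_WORDS = ["dup", "lum"]   # second-position syllables (hypothetical class)
V_WORDS = ["kav", "tiz"]   # verb-like syllables (hypothetical class)
C_WORDS = ["sog"]          # modifier-like syllables (hypothetical class)
D_WORDS = ["pel", "rud"]   # final-position syllables (hypothetical class)

def grammatical_sentences():
    """Enumerate every string licensed by the pattern ( A B )( V ( C D ) )."""
    return {" ".join(s)
            for s in product(A_WORDS, B_WORDS, V_WORDS, C_WORDS, D_WORDS)}

def is_grammatical(sentence):
    """A sequence is grammatical iff the grammar generates it."""
    return sentence in grammatical_sentences()

print(is_grammatical("jix dup kav sog pel"))  # True: fits (A B)(V (C D))
print(is_grammatical("jix kav pel dup sog"))  # False: syllables out of order
```

The grammar defines grammaticality purely by ordering, just as in the experiment: the same five syllables are grammatical in one order and ungrammatical in another.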

  Despite the appeal of such a theory, it is unclear whether prosodic structure alone is sufficient to help the novice language user learn about internal structure. There is some suggestion that in everyday language constituent boundaries are not reliably marked by changes in prosody. On the other hand, in many languages the speech addressed to young children tends to have a much more exaggerated intonation than that addressed to adults. Moreover, the sentences spoken to young children in these languages tend also to have very simple internal structure. So maybe after all prosody is useful. But is it necessary?

  There exist languages, as we have already seen, in which the speech addressed to children is not particularly different from that addressed to adults (this is supposedly the case in Samoan and Javanese, for example). In some of these communities, the parents do not address their children until the children themselves start talking-the parents do not believe that their children are communicative beings until they can talk. Given the current state of the art, it is impossible to determine the effect this has on children brought up in these communities. Perhaps there are other cues to internal structure that these languages provide.

  Morgan's account of early acquisition of syntactic information leaves one further puzzle unsolved. How does the child know to use the information provided by the prosodic cues? It may seem obvious to us that the melody and rhythm of a sentence defines natural groupings within that sentence, and that what we need to learn is the structure of each group, but in just the same way that a computer would not know to do this (unless programmed appropriately), how would an infant know? Is this some innate constraint? Or is it learned? And if so, how could it be learned?

  The debate between advocates of, on the one hand, an innate basis for the acquisition of grammar, and on the other, a prosodic basis for its acquisition, is still ongoing. Both accounts appear to rely, one way or another, on something more than just the language input itself. So perhaps they are not so different after all. But as suggested earlier, this additional knowledge, if innate, may not be linguistic-it may instead reflect more generally the existence of human abilities that in turn reflect the way the world works.

  Languages are not learned, they are created

  The continuing controversies in language acquisition research arise because we still know too little about exactly how, and why, the infant learns to combine words. We also know too little, still, about how infants and young children learn to inflect their words. Do they inflect them by learning the mental equivalent of stand-alone rules, such as `past tense = + ed'? Or do they inflect them by analogy to the other words they have already learned? The various debates are far from over (and it is impossible to describe them all in a single chapter). But one thing we do know is that learning a language, however that is done, requires the right kind of exposure; without sufficient external input early enough, infants are unlikely to acquire language normally.

  Estimates vary as to the duration of the `critical period' for language learning, but it lies somewhere between six and 12 years of age (depending on which textbook, and which case study, you read). The evidence suggests that the ability to learn a first language easily gradually tails off. There have been a number of distressing cases of children who have been brought up in isolation, with no language input to speak of. These children have generally emerged from their ordeal with nothing but rudimentary gestures. One girl was discovered when she was six, and went on to develop language pretty much as well as any other child who acquires a language at that age (as happens with, for instance, the children of immigrants). Another girl was discovered when she was around 12 or 13, and although she developed some language skills fairly quickly, she never progressed beyond the level expected of even a three-year-old. Just why the ability to learn tails off is the subject of considerable debate but, as Steven Pinker suggests, there may simply not have been the selective pressure, during our evolution, to maintain our phenomenal learning abilities beyond an initial period. Indeed, what adaptive benefits could there possibly be? So long as we get to puberty, the species is safe.

  So we need sufficient language input to acquire the language. But it is not quite that simple, because it seems to be the case that we do not simply acquire the language we are exposed to; rather we create a language consistent with what we hear. The evidence for this comes from the creation of the creole languages. In the early part of the 20th century, Hawaii was, in effect, one large sugar plantation, employing labourers from the Philippines, China, Portugal, and elsewhere. Although each ethnic group brought its own language, there was a need in the labouring community to break down the language barriers, and so a pidgin language developed: a mish-mash of words, mainly from English, which had the property that no two speakers of pidgin would necessarily use the same words in the same order. The pidgin languages (there are several around the world, not all based around English) were studied extensively by Derek Bickerton, based in Hawaii, who noticed that the children of pidgin speakers did not speak the same pidgin as their parents. What the children had learned was a language that shared the words of the pidgin they heard, but with none of the irregular word order of the different pidgins spoken by the different adults. It is as if the children imposed some order on what they heard, which was then subsequently reinforced during their interactions with one another. Most significant of all is the fact that creole languages (and again, Hawaiian Creole is just one example) can contain grammatical devices (that is, word order, inflections, and grammatical function words such as `the' or `was') which may not appear in any of the languages that had originally made up the pidgin of the parent generation.

  The facts surrounding creole demonstrate that the ability to combine words is not dependent solely on exposure to other people's combinations of words. They argue against a learning mechanism which simply analyses what it hears, and argue instead for a more proactive mechanism that is driven by some fundamental (and perhaps innate) desire to describe the world using language, in some form or other. The qualification `in some form or other' is required because it is becoming increasingly clear that whatever facility we have for learning language is not specific to spoken language; children brought up in communities where sign language is the predominant medium of communication learn it in a way that almost exactly mirrors the way in which other children learn spoken language (and this even includes the creation of the sign language equivalents of pidgin and creole). Indeed, the structures of the two languages, in terms of elements corresponding to words, rhythm, and even syllables, are almost equivalent. However, whether this means that there are innate constraints on what language can look like, or whether it means that language looks the way it does because of some early, possibly rhythmic, experience common to all infants, is at present unclear. Certainly, we appear to be born with a predisposition to learn, and if necessary, create language.

  Learning is something that newborns, infants, children, and even adults, can do effortlessly. Theories of learning are becoming increasingly important in psycholinguistic theory; whereas we have some idea of what babies and infants are capable of, there are still gaps in our theories, and these gaps will most likely be filled by theories of how these early abilities are acquired. The puzzle is to work out which are acquired as part of some genetic inheritance, and which are acquired through learning. It cannot all be innate. And if there is a genetic component, no matter how substantial, the challenge is to understand how that component matures-how the innate predispositions provided to the child become realized, through exposure to their language, as actual abilities.

  We shall reconsider the issue of learning, and what learning may involve, when we consider, in Chapter 13, computational accounts of the learning process. However, not all psycholinguistic research is necessarily concerned with learning; identifying adults' abilities, and attempting to understand on the basis of those abilities the representations and processes underlying adult language usage, is an important prerequisite to understanding the route that the human language device pursues from birth through to adulthood.

  Organizing the dictionary

  The average one-year-old knows approximately 100 words. The average adult knows between around 60 000 and 75 000 words, not including variants of the same word. The average one-year-old will select from around just 10 words when speaking; the average adult will select from around 30 000. To put this in perspective, this book contains only around 5500 different words, including variants of the same word. As adults, then, we have ended up knowing an awful lot of words (some of which we might use just once in our lifetimes). And when we hear just one of those words, we must somehow distinguish it from each of the other 60 000 to 75 000 words we know. But on what basis do we distinguish between all those words? If they are contained within the mental equivalent of a dictionary, how is that dictionary organized?

  The dictionaries that we are perhaps most familiar with are arranged alphabetically. To find a word you start off with its first letter and go to the section of the dictionary containing words sharing that first letter. You then go to the section containing words with the same first two letters as your word, then the same first three letters, and so on, until you have narrowed down the search to a single dictionary entry. But not all dictionaries are arranged this way. Rhyming dictionaries, for instance, are organized not in groups of words sharing beginning letters, but in groups of words which rhyme. `Speech' would appear grouped with `peach' and `beach'. `Language' would appear grouped with `sandwich'.
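
The difference between the two arrangements comes down to a choice of sort key. A minimal sketch, using the example words from the text: an alphabetical dictionary sorts words as they are spelled, while a rhyming dictionary can be crudely approximated by sorting on the reversed spelling, which groups words by their endings. (The approximation is imperfect precisely because rhyme lives in sound, not spelling: reversed spelling groups `beach', `peach', and `speech' together, but fails to place `language' next to `sandwich'.)

```python
words = ["speech", "peach", "beach", "language", "sandwich"]

# Alphabetical dictionary: keyed on the word's beginning.
alphabetical = sorted(words)

# Rhyming dictionary (rough spelling-based stand-in): keyed on the
# word's ending by sorting each word reversed. Real rhyming
# dictionaries key on sound, which spelling only approximates.
rhyming = sorted(words, key=lambda w: w[::-1])

print(alphabetical)  # ['beach', 'language', 'peach', 'sandwich', 'speech']
print(rhyming)       # ['language', 'beach', 'peach', 'speech', 'sandwich']
```

Note how the `-each'/`-eech' words end up adjacent under the rhyming key but scattered under the alphabetical one: the organization follows from the access code, not from the words themselves.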

  We tend to think that an alphabetic arrangement makes most sense, but only because we are used to the way that our spelling is organized. What does a Chinese dictionary look like? In a language like Chinese, the symbols do not represent words as such; instead they represent the meanings or ideas themselves. The symbols do not constitute a spelling in the sense that English speakers are most familiar with. The English alphabet is essentially a phonetic alphabet, where the order of the letters loosely represents the order of the sounds making up the word (we return to this issue in Chapter 11). But Chinese characters are not made up of subparts in this way; the characters give very little clue as to the sound of the word. Parts of some characters do relate to parts of the sound they should be spoken with, but only to parts-many characters give no hints at all. This has its advantages; the same character can have the same meaning in different dialects, even though the actual words that are spoken may differ. The disadvantage is that learning the set of characters required to read a novel or newspaper is no easy task.

  So a dictionary of Chinese characters cannot be organized according to any alphabet. At best it could be organized by shape (which would be independent of the word's sound or meaning), or by meaning (independently of sound or shape). The point is this: the most natural organization of a dictionary depends on the nature of the script that can be used to access the dictionary. Most dictionaries of English access words according to the alphabetic sequence of letters, but some do so according to what they rhyme with, and some use other criteria (crossword dictionaries access by length, for instance). So given our own knowledge about the different words in the language that we hear and speak, how do we access words in that body of knowledge? What is the access code for the mental dictionary?

  Different languages, different codes?

  There are two phases to looking up a word in a written, alphabetically arranged dictionary. The first involves establishing a range of entries within which the word is assumed to appear, and corresponds to flicking through the pages until one reaches the page containing all the words that share the first few letters of the word that is being looked up. The second phase involves narrowing down the search and eliminating from the search all words which, although sharing the same beginnings, deviate later on in the spelling. In both phases, the crucial elements that we focus on, and on the basis of which we narrow down the search, are letters. So when accessing the mental dictionary, or lexicon, and narrowing down the search for a spoken word, what equivalent element do we focus on? What kinds of element do we use to retrieve words from our lexicons?
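
The two phases amount to progressive elimination: as each successive letter of the target word is considered, every entry whose spelling deviates at that position drops out of the candidate set. A minimal sketch, with an invented six-word lexicon for illustration:

```python
def narrow_search(lexicon, target):
    """Eliminate entries letter by letter, as when flicking through a
    dictionary: any word that deviates from the target at the current
    letter position drops out of the candidate set."""
    candidates = sorted(lexicon)
    for i, letter in enumerate(target):
        candidates = [w for w in candidates if len(w) > i and w[i] == letter]
        print(f"after '{target[:i + 1]}': {candidates}")
    # Keep only exact matches (discard words that merely begin
    # with the target, e.g. longer continuations of it).
    return [w for w in candidates if w == target]

lexicon = ["speech", "species", "special", "speak", "language", "languid"]
print(narrow_search(lexicon, "speech"))  # ['speech']
```

Running this on `speech' shows the search collapsing only at the fourth letter, where `speak', `special', and `species' all deviate. The interesting question the text goes on to raise is what plays the role of the letter when the input is a spoken, not a written, word.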

 
