
The Ascent of Babel: An Exploration of Language, Mind, and Understanding


by Gerry T. M. Altmann


  Intellectual origins

  Psycholinguistics, like most of psychology, can trace its origins as far back as the Greeks, and probably even before. More recently, psycholinguistics experienced an intellectual growth spurt in the mid 1960s, just as the study of linguistics, in its contemporary form, got under way. Noam Chomsky, a linguist at the Massachusetts Institute of Technology, had developed a new way of describing the rules that determine which sentences are grammatical and which are not. It is not too surprising that psycholinguistics would depend substantially on advances in linguistics. Although psycholinguists are predominantly interested in how language is produced and understood, linguists were, at the time, more interested in ways of describing the language itself. What psycholinguistics needed was a vocabulary with which to talk about language, and that is exactly what linguistics provided. In effect, linguists provided the equivalent of a periodic table of the elements, and a set of rules for predicting which combinations of the elements would be explosive. The parallel between chemistry and linguistics is not that far-fetched, and is yet another indication of how much we take language for granted; whereas we wonder at the good fortune we had in the discovery (or invention) of the periodic table, we rarely consider the fact that linguistics has developed a system of almost equal rigour. And just as chemistry is a far-reaching subject concerned with the simplest of atoms and their combination into the most complex of molecules, so linguistics is concerned with the simplest of sounds and their combination into, ultimately, the most complex of sentences.

  But if linguistics provided the equivalent of a periodic table and rules for combining the elements (whether sounds, words, or sentences), what was left for psycholinguistics to do? The answer to this question is pretty much the same as the answer to another question: if architects can provide a complete description of a building, in the minutest of detail, what is left for builders to do? Architects design things. The finished design is simply a description of a building. It does not necessarily describe how the building materials are made, or why ceilings are put up after walls, or why windows tend not to be in floors. It does not necessarily describe how the building functions. When Bruegel painted his tower of Babel, did he start painting from the top or from the bottom? In principle he could have started from the top, and the finished painting would have ended up exactly the same. The point is, he did not need to know how such a tower would be built. So the answer to the original question, about what else there was for psycholinguistics to do, is simple. Linguistics provides a vocabulary for talking about the ways in which sentences are constructed from individual words, and the ways in which words are themselves constructed from smaller components (right down to the individual sounds and the movements of the muscles that create those sounds). By contrast, psycholinguistics attempts to determine how these structures, whether sounds, words, or sentences, are produced to yield utterances, or are analysed to yield meaning. If linguistics is about language, psycholinguistics is about the brain.

  Despite the close relationship between linguistics and psycholinguistics, the latter has generally been considered to be affiliated not to linguistics, but to psychology. Psychology encompasses just about every facet of human behaviour, whether it concerns the workings of individuals or the workings of groups of interacting individuals. But psycholinguistics comes under a branch of psychology called cognitive psychology, which is concerned primarily with how the mind represents the external world: if it did not somehow manage this, we would be unable to describe that world using language, recall it using memory, interpret it through what we see, learn about it through experience, and so on.

  Tools of the trade

  Because of the disparate nature of the different subdisciplines within psychology, it should come as no surprise that they each have their own set of tools to aid in their investigations. Few people realize that there are disciplines within psychology which are amenable to empirical science. In psycholinguistics, we do not just think that certain things may go on within the mind, we know they do, in just the same way as a physicist knows that mass is related to gravity. We know, for instance, that newborn babies are sensitive to the intonation (changes in pitch, rhythm, and intensity) of their maternal language; they can tell their own language apart from other languages. We know also that very young infants (just weeks old) are sensitive to sounds which may not occur within their own language, but may occur in other languages. We know that they lose this sensitivity within about eight to nine months. We know that when adults hear a word such as 'rampart', not only do they access whatever representation they have in their minds of this word, they also access the representations corresponding to 'ramp', 'part', 'art' and probably 'amp' as well. We know also that this does not happen only with single isolated words, but that it also happens with words heard as part of normal continuous speech; if you hear 'They'll ram part of the wall', you will access the individual words that are intended, but also other spurious words, such as 'rampart'.
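
  For the computationally minded, a minimal sketch (in Python, assuming a tiny illustrative word list; none of this comes from the original text) shows just how many words can be embedded in even a short stretch of unsegmented speech:

```python
# A minimal sketch, assuming a tiny illustrative lexicon: find every word
# embedded in a stretch of continuous speech, treated as unsegmented letters.

LEXICON = {"ram", "ramp", "rampart", "part", "art", "amp"}

def embedded_words(utterance, lexicon=LEXICON):
    """Return (position, word) pairs for every embedded word found."""
    letters = utterance.replace(" ", "").lower()
    found = []
    for start in range(len(letters)):
        for end in range(start + 1, len(letters) + 1):
            if letters[start:end] in lexicon:
                found.append((start, letters[start:end]))
    return found

if __name__ == "__main__":
    for position, word in embedded_words("ram part"):
        print(position, word)
```

  Run on the letters of 'ram part', it turns up not only the intended words but also 'rampart', 'amp', and 'art': precisely the kind of spurious activation described above.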

  With knowledge such as this, the next step is to consider the implications of these facts for the way in which the mind works; how could babies possibly be aware of distinctions between one language and another when they have only just been born? If babies lose their sensitivity to speech sounds which are absent from their language simply because they have never heard those sounds, how could they be sensitive to any sounds in the first place? And if adults spend all this time accessing spurious words, how do we ever manage to figure out which words were intended, and which were not? And never mind us adults, how do babies manage? So the empirical science side of psycholinguistics necessarily informs its theoretical side.

  Of course, like any science, one has to be able to validate the tools available, and trust them. For instance, how can we know that a baby can distinguish between one language and another? We cannot ask it, and it certainly cannot tell us. In fact, the way we find out is relatively easy to understand, if rather less easy to do. One thing babies do pretty well is suck. And there exists a technique called non-nutritive sucking in which a normal rubber teat is placed in the baby's mouth. The teat is filled with fluid and connected by a thin tube to a pressure-sensitive device that can tell whether or not the teat has been compressed, i.e. whether or not the baby has sucked. This in turn is connected to a computer which, whenever there is a suck, plays a sound over a loudspeaker to the baby. Babies learn very quickly that each time they suck, they get to hear a sound (not surprising, as babies are used to getting what they want when they suck). They start to suck more in order to hear more. But babies get bored very quickly, so after a short while they start to suck less. At this point, the computer senses a decrease in sucking, and changes the sound being played. If the babies can tell this new sound from the old one, they perk up, get interested again, and start to suck more. Of course, if they do not perk up, they might either have failed to distinguish the new sound from the old, or have failed to stay awake (a common problem with this technique!). But crucially, if the sucking rate does go up, they must have been able to distinguish the two sounds. Importantly, you can play more than just single sounds to the babies; you can play whole sets of sounds, and after they get bored, you play a new set of sounds, to see if the babies can tell that there is some difference between this new set and the old set.
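
  The logic of the procedure can be summarized in a short sketch. The function names, thresholds, and the detect_suck and play_sound hooks below are hypothetical stand-ins for the pressure sensor and the loudspeaker, a caricature of the idea rather than a description of real laboratory software:

```python
# Schematic sketch of the habituation/dishabituation logic: each suck is
# rewarded with a sound; when sucking drops off (habituation) the stimulus is
# switched, and a rebound in sucking is taken as evidence of discrimination.
# detect_suck() and play_sound() are hypothetical hardware hooks.

import time

def run_habituation(detect_suck, play_sound, old_stimulus, new_stimulus,
                    window=60.0, drop_fraction=0.5):
    """Return True if sucking rebounds after the stimulus is changed."""

    def sucks_per_minute(stimulus):
        # count sucks over one observation window, rewarding each with a sound
        count = 0
        start = time.time()
        while time.time() - start < window:
            if detect_suck():            # pressure transducer reports a suck
                play_sound(stimulus)     # each suck is rewarded with a sound
                count += 1
        return count * 60.0 / window

    baseline = sucks_per_minute(old_stimulus)
    rate = baseline
    # keep presenting the old stimulus until the baby habituates (sucking drops)
    while rate > drop_fraction * baseline:
        rate = sucks_per_minute(old_stimulus)

    # switch to the new stimulus and look for dishabituation (a rebound)
    return sucks_per_minute(new_stimulus) > rate
```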

  This example demonstrates just one of many experimental tools that psycholinguists have at their disposal. It is relatively simple, and we understand pretty much how it works, and why. On occasion, however, the tools may work, and we may be able to validate them, but without necessarily understanding why they work as they do. This is partly true of one of the tools psycholinguists use to study the ways in which the meanings of words are accessed on the basis of the sounds making up the speech input. A commonly used technique is based on a phenomenon called priming. The basic version of this technique is simple: it involves presenting people with words on a computer screen. Each time a word comes up, the person has to decide whether it is a real word in their language, or a nonword, e.g. 'boat' vs. 'loat'. They have two buttons in front of them, and they simply press the yes-button or the no-button as quickly as possible. So the task is easy and straightforward, and typically people take just a few hundred milliseconds to press the appropriate button. Perhaps one of the earliest findings in psycholinguistics was that people take less time to decide that 'boat' is a real word if they have just seen (or heard) a related word such as 'ship'. If beforehand they see or hear 'shop', there is no such advantage. This priming effect also works between words like 'bat' and 'ball'; they are not related in meaning, but are simply common associates.
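
  A toy version of such a trial, with keyboard input standing in for the button box and purely illustrative word pairs, might look like the sketch below (a real experiment would use precisely timed displays and response hardware):

```python
# A toy, terminal-based lexical-decision trial with a visible prime.
# The materials are illustrative, not the actual experimental items.

import random
import time

# (prime, target, target_is_real_word)
TRIALS = [
    ("ship", "boat", True),   # related prime: expect a faster 'yes'
    ("shop", "boat", True),   # unrelated prime: expect a slower 'yes'
    ("ship", "loat", False),  # nonword target: correct answer is 'no'
]

def run_trial(prime, target):
    print(f"\nprime:  {prime}")
    time.sleep(0.5)                      # brief gap between prime and target
    print(f"target: {target}")
    start = time.perf_counter()
    answer = input("Real word? (y/n): ").strip().lower()
    reaction_ms = (time.perf_counter() - start) * 1000
    return answer == "y", reaction_ms

if __name__ == "__main__":
    random.shuffle(TRIALS)
    for prime, target, is_word in TRIALS:
        said_yes, rt = run_trial(prime, target)
        verdict = "correct" if said_yes == is_word else "wrong"
        print(f"{verdict}, {rt:.0f} ms")
```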

  We can only speculate as to how the priming effect actually comes about, although it is the subject of much theoretical discussion, and there are a number of plausible hypotheses. But even without a proven theory of how it works, we can use priming as a tool to find out about which words are accessed during spoken language recognition. For instance, Richard Shillcock (a psycholinguist at Edinburgh University, who first speculated about the merits of the title Teach your dog to talk) played people recordings of words like 'bat', then, immediately after the word stopped, he presented 'ball' on a computer screen. The basic priming effect was there. But he also found that words like 'wombat' decreased decision times to 'ball' as well. 'Wombat' is completely unrelated to the word 'ball', so we have to assume that 'bat' was somehow accessed, and that this is why 'ball' was responded to faster. This is a simplified account of the experiment (which we shall return to in Chapter 6), but the basic idea is there.

  On the impossibility of proving anything

  Empirical tools are all well and good, but they are of little use without theories to test. As with any scientific endeavour, the purpose of experimentation in psycholinguistics is to narrow down the range of hypotheses that could explain the phenomena of interest. Much experimentation in any science is to do with disproving alternative theories. This is not because scientists are bloody-minded and want only to prove their colleagues wrong; it is a necessary property of science. The outcome of any single experiment may well be incompatible with any number of different theories. But it may still be compatible with many more (and they cannot all be right). In fact, it will be compatible with an infinite number more, ranging from the reasonable to the totally unreasonable (such as 'This result only happens when the experiment is run in a leap year': clearly absurd but, unless tested, it has not been shown to be false). Fortunately, we tend to ignore the unreasonable. Unfortunately (for the scientists concerned), there have been cases where what turned out to be right was previously judged totally unreasonable, such as that the Earth is round, and that the planets revolve around the Sun.

  So experiments cannot, logically, prove a hypothesis to be true; they can only prove other hypotheses to be false. The paradox, of course, is that even if a hypothesis is in fact true, by the generally accepted definitions of scientific investigation, it is unprovable ... On the whole, we can safely ignore this paradox, because there are few theories which are completely uncontroversial (and this is true in just about every branch of science). Psycholinguistics is certainly not exempt from controversy, but it is a fortunate and inevitable consequence of such controversy that theoretical advances are made. In fact, psycholinguistic theory is becoming so sophisticated that computer programs can be written which embody that theory. Computer simulations exist which mimic even the earliest learning of babies and infants. Perhaps it is no surprise that these programs are modelled on properties of the human brain (we shall come back to these properties in Chapter 13). For now, it is simply enough to be reassured that psycholinguistics is a science like any other; it has its empirical tools, its testable theories, and its unsolved mysteries. Perhaps juggling, even a dog juggling, is not so interesting after all. A child learning to speak? That really is something worth talking about.

  The ascent of Babel takes many forms. The psycholinguistic ascent attempts to understand not language itself, but language as it is used by the mind. A mind that begins to develop even before birth. Our ascent of Babel must therefore begin where we begin, in utero.

  Babies, birth, and language

  It used to be thought that babies, before they were born, did little more than lie back and mature. They would occasionally move around or have a stretch, but would otherwise do little else. This view of the unborn baby as a passive, almost dormant inhabitant of an isolation tank is certainly wrong. There are countless anecdotes about unborn babies and their likes and dislikes; even in utero, babies are remarkably responsive to the right kinds of external stimulation.

  No one doubts that babies are good at learning and, as we shall see later, we know that they are capable of learning just a few hours after birth. But there is also evidence to suggest that they can learn before birth. This is hardly surprising; it would be odd if the ability to learn suddenly switched on at birth. Just when it develops is unknown, but it may well develop long enough before birth to allow the baby to learn something useful about the sounds it hears in utero.

  It is generally agreed that the auditory system, or the sense of hearing, is functional by around seven months (from conception), although that figure is really only an approximation to the truth. Certainly, by around seven months, we can be fairly confident that the baby can hear. Just what it hears is another matter. The baby is immersed in fluid, surrounded by membranes, muscle, and skin. Much of the sound around the mother never even makes it through, and what little does get through is distorted by all that tissue and fluid. Because the sounds that the baby hears in utero are the first sounds it hears, they may kick-start the whole process which leads, ultimately, to the ability to interpret and recognize different sounds. It is therefore important to understand the nature of the distortion that the sound undergoes as it reaches the baby. And to understand the distortion, and why it may be significant that the sound is distorted in a particular way, it is necessary to acquire a basic understanding of what sound is and how it is produced.

  Sound is heard (or perceived) when vibration of the air causes a membrane in the inner ear to vibrate. Changing the frequency of the vibration leads to a change in the sense of pitch; a soprano sings at a higher pitch than a baritone by causing his or her vocal folds to vibrate at a higher frequency (faster) than the vocal folds of the baritone. The vocal folds function in just the same way as the neck of a balloon when you let the air out; the noise is caused by the vibration of the rubber in the neck (the equivalent of the vocal folds), which is itself caused by the air flow from the body of the balloon (the equivalent of the lungs) up through the neck. If you stretch the neck of the balloon, the noise goes up in pitch, and if you loosen it, it goes down. Pitch and frequency are different things: frequency is a physical property of the signal, whereas pitch is what we perceive (the same note played on different instruments contains different frequencies, but will be perceived as having the same pitch). Jargon aside, sounds are just complex vibrations transmitted through the air. And if the air is in contact with something else, the sound gets transmitted through that as well, because the vibrating air causes it to vibrate too, whether it is water, skin, muscle, or amniotic fluid. But not everything vibrates as easily as air. Skin and fluid are a little more sluggish, and very high frequency vibrations do not pass through so well. So what the baby hears in utero is distorted.

  What babies hear is a lot worse than simply turning down the treble and turning up the bass on a stereo to accentuate the low frequencies in the music and de-emphasize the high frequencies. It is more like the sound that you get from covering your mouth with your hand and talking through that. In physical terms, although a young adult with normal hearing can hear frequencies in the range of around 20 to 20 000 vibrations, or cycles, per second, the frequencies produced by the human voice are only in the range of 100 to 4000 cycles per second (one cycle per second is one hertz, or Hz). A telephone tends to eliminate any frequencies above 3000 Hz; the reason it is sometimes hard to identify the caller is that people differ mainly in the higher frequencies that they produce, and some of these are lost over the telephone. None the less, most of the information above 3000 Hz is largely redundant. The quality of speech over a telephone line is still vastly superior to that experienced by the unborn baby; only frequencies up to about 1000 Hz get through to the baby. If you could only hear frequencies up to 1000 Hz, you would recognize the tune of a well-known song because of the changing pitch, rhythm, and duration of the notes, and perhaps the relative intensity, or loudness, of individual notes, but you would be unable to make out the individual words or even the individual sounds; they would be too muffled. So how much use could this input be? What information is contained in the lower frequency range which the baby could learn and which would also be useful?
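
  One crude way to approximate this degradation is to low-pass filter a speech recording at about 1000 Hz. In the sketch below, the cut-off, the filter order, and the placeholder filename speech.wav are merely indicative, and a simple filter of course ignores everything else that tissue and fluid do to the sound:

```python
# A rough simulation of in-utero hearing, assuming a recording in speech.wav
# (a placeholder name): everything above roughly 1000 Hz is removed.

import numpy as np
from scipy.io import wavfile
from scipy.signal import butter, sosfiltfilt

rate, speech = wavfile.read("speech.wav")
if speech.ndim > 1:                      # mix any stereo recording down to mono
    speech = speech.mean(axis=1)
speech = speech.astype(np.float64)

# 8th-order Butterworth low-pass filter with a 1000 Hz cut-off
sos = butter(8, 1000, btype="low", fs=rate, output="sos")
muffled = sosfiltfilt(sos, speech)

# write the result back out so it can be listened to
wavfile.write("speech_in_utero.wav", rate, muffled.astype(np.int16))
```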

  There is actually a substantial amount of information contained within the lower frequencies. For instance, most sentences are spoken with a particular melody, or intonation. Statements such as 'She went to the shop' are spoken with a different intonation from questions such as 'She went to the shop?' The difference is that in the statement the pitch tends to go down at the end, whilst in the question it tends to go up. Different languages put different intonation on their sentences. Australian speakers, for instance, differ characteristically from English speakers, putting more prominent rises in pitch at the ends of their sentences. Rhythm is another feature that is present in the lower frequencies of the speech, and that can change depending on which language you speak. English has a particular kind of rhythm that is exemplified by limericks; the rhythm is determined by where the stress falls, with the beat falling on each stressed syllable. French, on the other hand, has a slightly different rhythm, where the beat tends to coincide with every syllable (French speakers do not apply the same stressed/unstressed distinction to their syllables as English speakers do). Irrespective of which language is spoken, rhythm can be picked up from just the first 1000 Hz; it is present in the speech that the baby is exposed to prenatally.

 
