Fortunately, the eyes do stay still, at least some of the time. Between saccades they remain still for, on average, 200 to 250 milliseconds. It is during this time (the fixation) that all the hard work gets done. Fixation times, like the lengths of the saccades, also depend on the script being read: the more information in the characters (with logograms at one extreme, and full alphabets at the other), the longer the fixation.
What the eyes see, and from where
Inevitably, each fixation provides a snapshot of the page from a slightly different view. In the following line, a letter near the centre of the line is underlined. If you stare at that underlined letter you will imagine that you can see the whole line. But how many words can you really make out? One very strong sensation is that you can see to either side of the underlined letter. But although the image on the retina is more or less symmetrical around the fixation point, it turns out that useful information (that is, useful with respect to the process of reading) is extracted from only a part of this image. During normal reading, this part corresponds, more or less, to about 15 characters to the right of the fixation point, and just a few characters to the left (generally, to the beginning of the word in which one is fixating). If, during the course of normal reading, your eyes had landed on the underlined letter in that earlier line, the extent of the text from which you would have extracted anything useful would have looked something like:
that underlined l
And on the next fixation, it may have been something like:
underlined letter yo
In effect, the information we extract from the printed page reaches our brain via a constantly shifting window through which we view that page. Information from within that window can aid in the identification of whatever is written there, whilst information from outside the window cannot.
George McConkie and Keith Rayner discovered this in the mid-1970s whilst working at the universities of Cornell and Rochester respectively. In a series of complex experiments they monitored the eye movements of people reading sentences on a computer display. They changed the display each time the people's eyes moved, so that only certain letters were visible around the point at which the eyes were looking (a bit like the example shown above). They then measured the time it took to read each sentence as a function of how many letters were visible on each fixation, and where. In effect, McConkie and Rayner artificially manipulated the width of that viewing window. Anything more than 15 letters to the right of fixation, and anything further left than the beginning of the currently fixated word, did nothing to improve reading times. Anything less, and it took much longer to read the sentence. So shrinking that window had a detrimental effect on reading, whilst expanding it provided no benefit.
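For the computationally minded, the moving-window idea can be captured in a few lines of Python. This is a minimal sketch, not McConkie and Rayner's actual software: the window sizes follow the figures quoted above (a few characters leftward, back to the start of the fixated word; 15 characters rightward), and the function name and 'x' masking are purely illustrative.

```python
# A minimal sketch of a gaze-contingent "moving window": every letter outside
# an asymmetric window around the fixation point is masked. Window sizes are
# taken from the figures quoted in the text; all names here are illustrative.

def moving_window(line: str, fixation: int, right_span: int = 15) -> str:
    # Left edge: back up to the beginning of the currently fixated word.
    left = fixation
    while left > 0 and line[left - 1] != " ":
        left -= 1
    right = min(len(line), fixation + right_span + 1)
    # Mask letters outside the window; keep spaces so word shapes survive.
    masked = [
        ch if left <= i < right or ch == " " else "x"
        for i, ch in enumerate(line)
    ]
    return "".join(masked)

line = "If you stare at that underlined letter you will imagine"
print(moving_window(line, fixation=21))  # fixating the 'u' of 'underlined'
```

On each simulated fixation the window shifts with the eyes, just as in the experiments: re-running the function with a new fixation index produces the next snapshot.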
The asymmetry of that window is easily explained. When reading left-to-right, the eyes have already seen whatever is to the left of the fixation point, so there is no need to attend to it any more. Of course, this predicts that in languages which are read right-to-left (e.g. Hebrew), the asymmetry should run in the opposite direction. Sure enough, this is exactly what is found. Comparisons with other languages have also shown that the size of the viewing window changes as a function of the kind of script being read: generally, it is about twice as wide, in numbers of characters, as the length of the typical saccade.
The fact that information can be extracted quite a way to the right of the fixation point does not mean that all the letters, and all the words, in that region can be made out. In general, it seems that only the currently fixated word is recognized. But this does not mean that information elsewhere in that extended region is not useful. Rayner asked people to read a sentence that contained a nonword (e.g. SPAACH) in place of one of the real words (e.g. SPEECH). The original sentence might have been something like THE POLITICIAN READ THE SPEECH TO HIS COLLEAGUES. Rayner set his system up so that when the eyes got to within some number of characters of SPAACH, it was changed back into SPEECH. He then measured how long the eyes spent on SPEECH when they eventually landed on it. He found that if the change from SPAACH to SPEECH happened while the eyes were still 12 or more characters to the left of SPAACH, the time subsequently spent on SPEECH was just the same as if SPEECH had been there all along. Hardly surprising. More interesting is what happened when the change happened between around seven and 12 characters to the left of SPAACH: although people did not consciously notice the change, the time subsequently spent on SPEECH was longer than if SPEECH had been there all along, presumably because information had been extracted about SPAACH which subsequently proved incompatible with the information that was actually discovered when the eyes landed on SPEECH. The time spent on SPEECH was greater still if, instead of SPAACH, the 'rogue' word had been something like BLAART. In general, the more letters that were shared between the rogue word and SPEECH, the shorter the subsequent fixation times on SPEECH. So although a nonword would not be recognized as such on that earlier fixation (seven to 12 characters away), it appears that information about its letters would be used to help in the recognition of the real word when it was eventually reached. Presumably, those letters caused the representation of the word to become activated, but only partially, on that earlier fixation. When the eyes subsequently landed on the word itself, it had already been partially activated, and so recognition was faster.
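The display-change logic behind the SPAACH/SPEECH experiment can be sketched in the same spirit. This is an assumed reconstruction, not Rayner's actual program: the 12-character trigger distance is simply the figure quoted above, and the helper names are invented for illustration.

```python
# A minimal sketch of the boundary display-change logic described above: a
# preview string (e.g. SPAACH) occupies the target word's slot until the eyes
# come within the trigger distance, at which point the real word is shown.

def char_offset(words, i):
    """Character position at which word i starts (single spaces assumed)."""
    return sum(len(w) + 1 for w in words[:i])

def display(words, target_index, preview, fixated_index, trigger=12):
    distance = char_offset(words, target_index) - char_offset(words, fixated_index)
    shown = list(words)
    if distance > trigger:          # eyes still far to the left: show preview
        shown[target_index] = preview
    return " ".join(shown)

sentence = "THE POLITICIAN READ THE SPEECH TO HIS COLLEAGUES".split()
for fix in range(5):
    print(display(sentence, target_index=4, preview="SPAACH", fixated_index=fix))
```

Running the loop shows SPAACH while the simulated eyes are far away, and SPEECH once they cross the invisible boundary, which is exactly the contingency the experiment relied on.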
Although content words within the 15-character window are not recognized before being fixated, subsequent research has shown that very short words, or very predictable words, are often recognized whilst the eyes are fixating on a preceding word. One consequence of this is that these short, predictable words are often skipped: the eyes do not land on them at all.
One final question before we leave eye movements and techniques for studying their effect on reading: when the eyes land on a word, where do they land? Do they land generally in the same place: the beginning, perhaps, or the middle? And if they do, would it matter if they sometimes missed the target? The answer to both these questions is 'yes'. In a series of elegant studies, Kevin O'Regan, working in Paris, asked people to fixate on a particular point on the computer display. A word would then appear which could either be centred around that point, or shifted to one side or the other. He could then work out how the time to recognize the word changed depending on where, within the word, the eyes were fixating when that word appeared. He found that there was an optimum position at which the recognition time was least, somewhere just to the left of the centre of the word. The actual optimum position changed depending on the specific word: it was closer to the middle the shorter and more frequent the word. But, in general, it was just to the left of centre. Interestingly, this coincides with where the eyes tend to land during normal reading (and of course, this depends on the kind of script being read, and whether it is read left-to-right or right-to-left). But how can the eyes 'know' to land on this optimum viewing position if they do not yet know what the word is, and consequently, where that word's optimum viewing position is to be found? The answer, of course, is that they cannot, which is why they aim for what, in the general case, is the optimum position. If it turns out that where they actually land is not good enough (i.e. does not allow the word to be recognized), they make a corrective movement to the other side of the word, so maximizing the chances of successful recognition.
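One way to picture this account is as a simple control loop: aim just left of centre, and correct to the other side if recognition fails. In the sketch below, both the landing rule and the recognition test are invented stand-ins for illustration; only the overall strategy comes from the studies described above.

```python
# A minimal sketch of the landing-position strategy described above: aim for
# the general-case optimum (just left of a word's centre) and, if that landing
# does not support recognition, make a corrective movement to the other side.

import random

def preferred_landing(word: str) -> int:
    """Aim just left of the word's centre (the general-case optimum)."""
    return max(0, len(word) // 2 - 1)

def recognized(word: str, position: int) -> bool:
    # Stand-in for lexical access: success is most likely near the word's own
    # optimum viewing position and falls off with distance from it.
    return random.random() > 0.25 * abs(position - preferred_landing(word))

def fixate(word: str) -> list:
    landing = preferred_landing(word) + random.choice([-1, 0, 1])  # aim error
    landing = min(max(landing, 0), len(word) - 1)
    positions = [landing]
    if not recognized(word, landing):
        # Corrective movement to the other side of the word.
        centre = len(word) // 2
        positions.append(len(word) - 1 if landing < centre else 0)
    return positions

print(fixate("colleagues"))  # e.g. [4], or [3, 9] after a corrective movement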
What should be clear by now, if nothing else, is that how the eyes move during reading is a different story from the one about how letters come to be invested with meaning and how that meaning is subsequently recovered. Both are clearly essential to the reading process. Although it is possible to talk about the two as if they are quite separable processes, they are not. For example, how far the eyes move during a saccade is a function of the script that is used: the more information vested in a single character, the shorter the jump. The same is true for the duration of a single fixation, and for the extent of the area from which information is extracted during that fixation. Eye movements are controlled, at least in part, by the processes involved in the extraction of meaning from the visual image. They each rely on the other.
A final (written) word
Most of us take our ability to read as much for granted as our ability to use a telephone, watch television, or dial out for pizza. We forget all too easily that until relatively recently, most of the world's population were illiterate. Many people still are. This is not the place to list the obvious advantages of being literate. But there is one consequence that is worth discussing here and, somewhat poetically, it brings us right back to where we started, with hieroglyphics.
A property of hieroglyphic scripts, and of Chinese too, is that they contain phonetic characters which, as in the alphabetic scripts, represent individual sounds. In this way, one can take the character for a word like 'peach' and tack the symbol for the /s/ sound on the front, to create the new word 'speech'. The invention of this kind of composite script was an important landmark in the evolution of writing systems, as it marked a shift from the written word as representing meaning directly, to the written word as representing spoken words. It also paved the way for the eventual invention of the more fully sound-based scripts such as the syllabaries and the alphabets. But whereas it seems obvious, to us, that one can tack an /s/ onto 'peach' to create a new word, or take the /s/ off 'speech' to get back to 'peach', it turns out that pre-literate children, and illiterate adults, do not find this an easy and natural thing to do. The same is true of certain dyslexic children (see Chapter 12). They appear to lack the awareness that words can be broken down into individual sounds, and that these sounds can be added to, or taken away, or recombined, to create new words.
José Morais at the Free University in Brussels, working with Portuguese and Brazilian illiterates, found that whereas they had some appreciation of rhyme, and some awareness of the syllabic structure of a word, they were very poor at tasks which involved taking a sound off one word to create another. It is now generally accepted that the awareness that words are made up of individual sounds smaller than the syllable develops in large part as a consequence of learning to read and, specifically, learning to read an alphabetic script. This is not to say that all illiterates have no such awareness; some do. But on the whole, the 'average' illiterate, and the 'average' pre-literate child, will not be nearly so proficient at playing these sound manipulation games as the average literate. In this respect, learning to read alphabetically has consequences for the way in which we perceive the words we hear.
It is the fact that the concept of phonemes is not a natural one that makes the invention of phonetic symbols in hieroglyphics such an achievement. Flying is unnatural also, and it too is an achievement, but the invention of flying machines pales into insignificance compared to the invention of phonetic writing systems. Indeed, there is probably no other invention that comes close to the written word. On second thoughts, there is no 'probably' about it.
The Greek philosopher Socrates (c. 470-399 BC) believed, apparently, that the invention of writing (credited in Greek legend to Prometheus, the giver of fire) could only do harm to people: it would 'implant forgetfulness into their souls ... [they would be] calling things to remembrance no longer from within themselves, but by means of external marks' (Plato's Phaedrus, 275B). It is ironic, but in keeping with his beliefs, that it is only through writing that anything of Socrates is known today. Indeed, it is only through writing that almost all that is known today is known at all. Socrates may well have been right, but mankind owes much to those external marks on which it so relies.
When it all goes wrong
Og vjev gostv qesehseqj xet e tasqsoti, onehopi xjev ov natv ci moli vu xeli aq upi nuspoph, uqip vji pixtqeqis, epf fotduwis vjev ov xet emm moli vjev. Puv katv upi qesehseqj, iwisz qesehseqj. Us onehopi optvief vjev zua xuli aq epf xisi apecmi vu tqiel.
If that first paragraph was a surprise, imagine what it must be like to wake up one morning, open the newspaper, and discover that it was all like that. Not just one paragraph, every paragraph. Or imagine instead that you woke up and were unable to speak, or that what came out was not what you intended. Or that the first words you heard sounded like some of them were in a foreign language. Many of us have had anxiety dreams in which we open an exam paper to discover that it is written in a language we do not understand. And yet, within the dream, we know we are supposed to understand it. Those of us who have these dreams can recall the feeling of panic, of fear. Some people never wake up from that dream, because it is not a dream; it is their reality.
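For the curious, the scrambled paragraph above appears to follow a simple letter substitution: each vowel shifts to the next vowel (a to e, e to i, and so on, with u wrapping back to a), and each consonant shifts to the next consonant of the alphabet. A minimal sketch of that assumed mapping, in Python:

```python
# An assumed reconstruction of the substitution in the scrambled paragraph:
# vowels cycle a -> e -> i -> o -> u -> a; consonants shift to the next
# consonant alphabetically (b -> c, ..., y -> z, z -> b); case is preserved,
# and punctuation and spaces pass through unchanged.

VOWELS = "aeiou"
CONSONANTS = "bcdfghjklmnpqrstvwxyz"

def shift(ch: str, forward: bool = True) -> str:
    step = 1 if forward else -1
    for cycle in (VOWELS, CONSONANTS):
        low = ch.lower()
        if low in cycle:
            out = cycle[(cycle.index(low) + step) % len(cycle)]
            return out.upper() if ch.isupper() else out
    return ch

def encode(text: str) -> str:
    return "".join(shift(c, forward=True) for c in text)

def decode(text: str) -> str:
    return "".join(shift(c, forward=False) for c in text)

print(decode("Og vjev gostv qesehseqj xet e tasqsoti"))
# -> "If that first paragraph was a surprise"
```

Decoding the whole paragraph reproduces the opening sentences above, which is some reassurance that the inferred mapping is the right one.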
When brains go wrong, the ability to use language may well go wrong too, but it does not always. In fact, it is surprising how much can go wrong in the brain before language is affected. Chemical imbalances in the brain may affect the transmission of impulses from one neuron to another, leading to a range of disorders such as Parkinson's disease, schizophrenia, and severe depression, each due to different imbalances affecting different parts of the neural circuitry. But aside from any difficulties with articulation (as in Parkinson's), none of these disorders is primarily associated with deficits in the language faculty. For language to go, in adulthood at least, parts of the brain have to go too.
The situation in childhood is somewhat different. Children who sustain damage to the parts of the brain that, when damaged in adulthood, lead to quite severe language deficits, do not suffer the same deficits. They may not suffer any deficit at all. Sadly, this does not mean that children's brains are immune to the effects of damage. But at least with respect to the language faculty, significant recovery is possible; young brains are much more adaptable than older ones. This does not mean, though, that there are no long-lasting childhood language disorders. Dyslexia is a case in point. Little is known about its physical basis; more is known about its functional basis (that is, about which of the individual abilities that make up the language faculty are affected). Of course, this is not to say that there is no physical basis to dyslexia; ultimately, any behaviour is rooted in the neurophysiology and neurochemistry of the brain, and there is some evidence that neurophysiological differences do exist between dyslexics and non-dyslexics. We shall come back to dyslexia a little later. But because the brain is the organ which generates language, we shall start with what happens when something goes overtly wrong with that organ.
Damaged brains
The two most common causes of cell death in the brains of otherwise healthy adults are stroke and head injury. A stroke occurs when a blood vessel in the brain becomes blocked by a clot, or bursts because of a weakening of its walls. In either case, nearby cells die because of a failure in the blood supply and, in the case of a rupture, the physical damage that is caused by the leaked blood. Often, stroke leads to quite localized areas of cell death. Head injury generally leads to more widespread cell death, but the effects of both stroke and head injury can none the less be quite similar. They include impairments of one or more of the following: movement and/or sensation, vision, memory, planning and problem solving, and language. There may also be marked effects on mood and personality.
It has been known since the mid-nineteenth century that the two halves of the brain control different sides of the body: the left hemisphere controls the right side of the body, and the right controls the left. The two hemispheres are connected, and generally split the workload, except in the case of language, where the left hemisphere has primary responsibility. Consequently, a language deficit is a good pointer to left-hemisphere damage. And more specifically, to damage to the left side of that hemisphere.
Damage to the right hemisphere rarely causes any language impairments of the kind that arise following left-hemisphere damage. However, right-hemisphere damaged patients may fail to recognize whether a speaker is happy, sad, surprised, or angry on the basis of his or her tone of voice. They may themselves be unable to convey such feelings by voice alone, and their speech can sound quite 'mechanical' as a consequence. This is not just a general deficit in their ability to recognize changes in tone or pitch: some patients can tell on the basis of such differences alone whether a sentence such as 'She's not coming to the party' is intended as
a statement, a command, or a question. What is impaired in these cases is not, primarily, the language faculty.
The fact that impairments specific to the language faculty are associated with damage to a part of the left hemisphere leads to the natural suggestion that that part of the brain is specialized for language. Whether that means that that part of the brain is genetically preprogrammed with language-specific functions (as opposed to general functions that could in principle serve more than just language) is another matter. It is well established that different parts of the brain are, in effect, wired up in different ways. Consequently, these different parts of the brain inevitably function in different ways, and so encode different kinds of information.
It is possible that the wiring in the language areas of the brain is well-suited to the kinds of associative processes that language requires. But again, it is very much an open question whether they are well-suited to language because our genes 'have language in mind' when specifying the neuroanatomy of the brain, or whether language has evolved so as to take advantage of a neuroanatomy whose genetic basis is, so to speak, agnostic with respect to its ultimate use. There is a clear evolutionary advantage in being able to communicate, but whether evolution has had time to encode a genetic basis for language structures, such as grammar, is unclear. Presumably, there is a part of our brains that encodes the knowledge and experience relevant for riding a bicycle. But when that particular bit of neuroanatomy was laid down, was it laid down with bicycles in mind? Probably not.