The Language Instinct: How the Mind Creates Language


by Steven Pinker


  For all we know, the brain might have regions dedicated to processes as specific as noun phrases and metrical trees; our methods for studying the human brain are still so crude that we would be unable to find them. Perhaps the regions look like little polka dots or blobs or stripes scattered around the general language areas of the brain. They might be irregularly shaped squiggles, like gerrymandered political districts. In different people, the regions might be pulled and stretched onto different bulges and folds of the brain. (All of these arrangements are found in brain systems we understand better, like the visual system.) If so, the enormous bomb craters that we call brain lesions, and the blurry snapshots we call PET scans, would leave their whereabouts unknown.

  There is already some evidence that the linguistic brain might be organized in this tortuous way. The neurosurgeon George Ojemann, following up on Penfield’s methods, electrically stimulated different sites in conscious, exposed brains. He found that stimulating within a site no more than a few millimeters across could disrupt a single function, like repeating or completing a sentence, naming an object, or reading a word. But these dots were scattered over the brain (largely, but not exclusively, in the perisylvian regions) and were found in different places in different individuals.

  From the standpoint of what the brain is designed to do, it would not be surprising if language subcenters are idiosyncratically tangled or scattered over the cortex. The brain is a special kind of organ, the organ of computation, and unlike an organ that moves stuff around in the physical world such as the hip or the heart, the brain does not need its functional parts to have nice cohesive shapes. As long as the connectivity of the neural microcircuitry is preserved, its parts can be put in different places and do the same thing, just as the wires connecting a set of electrical components can be haphazardly stuffed into a cabinet, or the headquarters of a corporation can be located anywhere if it has good communication links to its plants and warehouses. This seems especially true of words: lesions or electrical stimulation over wide areas of the brain can cause naming difficulties. A word is a bundle of different kinds of information. Perhaps each word is like a hub that can be positioned anywhere in a large region, as long as its spokes extend to the parts of the brain storing its sound, its syntax, its logic, and the appearance of the things it stands for.

  The developing brain may take advantage of the disembodied nature of computation to position language circuits with some degree of flexibility. Say a variety of brain areas have the potential to grow the precise wiring diagrams for language components. An initial bias causes the circuits to be laid down in their typical sites; the alternative sites are then suppressed. But if those first sites get damaged within a certain critical period, the circuits can grow elsewhere. Many neurologists believe that this is why the language centers are located in unexpected places in a significant minority of people. Birth is traumatic, and not just for the familiar psychological reasons. The birth canal squeezes the baby’s head like a lemon, and newborns frequently suffer small strokes and other brain insults. Adults with anomalous language areas may be the recovered victims of these primal injuries. Now that MRI machines are common in brain research centers, visiting journalists and philosophers are sometimes given pictures of their brains to take home as a souvenir. Occasionally the picture will reveal a walnut-sized dent, which, aside from some teasing from friends who say they knew it all along, bespeaks no ill effects.

  There are other reasons why language functions have been so hard to pin down in the brain. Some kinds of linguistic knowledge might be stored in multiple copies, some of higher quality than others, in several places. Also, by the time stroke victims can be tested systematically, they have often recovered some of their facility with language, in part by compensating with general reasoning abilities. And neurologists are not like electronics technicians who can wiggle a probe into the input or output line of some component to isolate its function. They must tap the whole patient via his or her eyes and ears and mouth and hands, and there are many computational waystations between the stimulus they present and the response they observe. For example, naming an object involves recognizing it, looking up its entry in the mental dictionary, accessing its pronunciation, articulating it, and perhaps also monitoring the output for errors by listening to it. A naming problem could arise if any of these processes tripped up.

  There is some hope that we will have better localization of mental processes soon, because more precise brain-imaging technologies are rapidly being developed. One example is Functional MRI, which can measure—with much more precision than PET—how hard the different parts of the brain are working during different kinds of mental activity. Another is Magneto-Encephalography, which is like EEG but can pinpoint the part of the brain that an electromagnetic signal is coming from.

  We will never understand language organs and grammar genes by looking only for postage-stamp-sized blobs of brain. The computations underlying mental life are caused by the wiring of the intricate networks that make up the cortex, networks with millions of neurons, each neuron connected to thousands of others, operating in thousandths of a second. What would we see if we could crank up the microscope and peer into the microcircuitry of the language areas? No one knows, but I would like to give you an educated guess. Ironically, this is both the aspect of the language instinct that we know the least about and the aspect that is the most important, because it is there that the actual causes of speaking and understanding lie. I will present you with a dramatization of what grammatical information processing might be like from a neuron’s-eye view. It is not something that you should take particularly seriously; it is simply a demonstration that the language instinct is compatible in principle with the billiard-ball causality of the physical universe, not just mysticism dressed up in a biological metaphor.

  Neural network modeling is based on a simplified toy neuron. This neuron can do just a few things. It can be active or inactive. When active, it sends a signal down its axon (output wire) to the other cells it is connected to; the connections are called synapses. Synapses can be excitatory or inhibitory and can have various degrees of strength. The neuron at the receiving end adds up any signals coming in from excitatory synapses, subtracts any signals coming in from inhibitory synapses, and if the sum exceeds a threshold, the receiving neuron becomes active itself.
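  To make the toy neuron concrete, here is a minimal sketch in Python (the function name and the use of 1/0 for active/inactive are my own illustrative choices, not from the text):

```python
def toy_neuron(inputs, weights, threshold=0.5):
    """A toy neuron: sums its weighted inputs (positive weights are
    excitatory synapses, negative weights inhibitory) and becomes
    active, returning 1, if the sum exceeds its threshold."""
    total = sum(signal * weight for signal, weight in zip(inputs, weights))
    return 1 if total > threshold else 0
```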

  A network of these toy neurons, if large enough, can serve as a computer, calculating the answer to any problem that can be specified precisely, just like the page-crawling Turing machine in Chapter 3 that could deduce that Socrates is mortal. That is because toy neurons can be wired together in a few simple ways that turn them into “logic gates,” devices that can compute the logical relations “and,” “or,” and “not” that underlie deduction. The meaning of the logical relation “and” is that the statement “A and B” is true if A is true and if B is true. An AND gate that computes that relation would be one that turns itself on if all of its inputs are on. If we assume that the threshold for our toy neurons is .5, then a set of incoming synapses whose weights are each less than .5 but that sum to greater than .5, say .4 and .4, will function as an AND gate, such as the one on the left here:

  The meaning of the logical relation “or” is that a statement “A or B” is true if A is true or if B is true. Thus an OR gate must turn on if at least one of its inputs is on. To implement it, each synaptic weight must be greater than the neuron’s threshold, say .6, like the middle circuit in the diagram. Finally, the meaning of the logical relation “not” is that a statement “Not A” is true if A is false, and vice versa. Thus a NOT gate should turn its output off if its input is on, and vice versa. It is implemented by an inhibitory synapse, shown on the right, whose negative weight is sufficient to turn off an output neuron that is otherwise always on.
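  Where the original shows a wiring diagram of the three gates, the same circuits can be checked with the toy neuron sketched above. The weights and the .5 threshold follow the text; the always-on bias input for the NOT gate is one plausible way to realize "an output neuron that is otherwise always on":

```python
def AND(a, b):
    # Weights of 0.4 each: neither input alone clears the 0.5
    # threshold, but together they sum to 0.8 and fire the neuron.
    return toy_neuron([a, b], [0.4, 0.4])

def OR(a, b):
    # A weight of 0.6 lets either input clear the threshold alone.
    return toy_neuron([a, b], [0.6, 0.6])

def NOT(a):
    # An always-on bias (first input) keeps the output neuron active;
    # the inhibitory synapse (weight -0.6) shuts it off when a is on.
    return toy_neuron([1, a], [0.6, -0.6])

# The gates reproduce the truth tables of "and," "or," and "not":
assert [AND(a, b) for a in (0, 1) for b in (0, 1)] == [0, 0, 0, 1]
assert [OR(a, b) for a in (0, 1) for b in (0, 1)] == [0, 1, 1, 1]
assert [NOT(0), NOT(1)] == [1, 0]
```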

  Here is how a network of neurons might compute a moderately complex grammatical rule. The English inflection -s as in Bill walks is a suffix that should be applied under the following conditions: when the subject is in the third person AND singular AND the action is in the present tense AND is done habitually (this is its “aspect,” in lingo)—but NOT if the verb is irregular like do, have, say, or be (for example, we say Bill is, not Bill be’s). A network of neural gates that computes these logical relations looks like this:

  First, there is a bank of neurons standing for inflectional features on the lower left. The relevant ones are connected via an AND gate to a neuron that stands for the combination third person, singular number, present tense, and habitual aspect (labeled “3sph”). That neuron excites a neuron corresponding to the -s inflection, which in turn excites the neuron corresponding to the phoneme z in a bank of neurons that represent the pronunciations of suffixes. If the verb is regular, this is all the computation that is needed for the suffix; the pronunciation of the stem, as specified in the mental dictionary, is simply copied over verbatim to the stem neurons by connections I have not drawn in. (That is, the form for to hit is just hit + s; the form for to wug is just wug + s.) For irregular verbs like be, this process must be blocked, or else the neural network would produce the incorrect be’s. So the 3sph combination neuron also sends a signal to a neuron that stands for the entire irregular form is. If the person whose brain we are modeling is intending to use the verb be, a neuron standing for the verb be is already active, and it, too, sends activation to the is neuron. Because the two inputs to is are connected as an AND gate, both must be on to activate is. That is, if and only if the person is thinking of be and third-person-singular-present-habitual at the same time, the is neuron is activated. The is neuron inhibits the -s inflection via a NOT gate formed by an inhibitory synapse, preventing ises or be’s, but activates the vowel i and the consonant z in the bank of neurons standing for the stem. (Obviously I have omitted many neurons and many connections to the rest of the brain.)
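  The dataflow just described can be traced in a few lines of the same toy code. Everything here is an illustrative reconstruction of the missing diagram, not Pinker's own wiring: the nested two-input AND gates stand in for a four-input one, and the variable names and the "iz" spelling of the phonemes of is are mine:

```python
def inflect(verb, third, singular, present, habitual):
    # Combination neuron "3sph": fires only for third-person,
    # singular, present-tense, habitual input features.
    sph3 = AND(AND(third, singular), AND(present, habitual))
    # The "is" neuron is an AND gate over the 3sph signal and the
    # intention to use the irregular verb "be."
    intending_be = 1 if verb == "be" else 0
    is_neuron = AND(sph3, intending_be)
    # "is" inhibits the -s suffix (a NOT gate), preventing "be's";
    # for regular verbs, 3sph activates the z suffix phoneme.
    suffix_z = AND(sph3, NOT(is_neuron))
    if is_neuron:
        return "iz"                      # the whole irregular form is
    return verb + ("z" if suffix_z else "")

print(inflect("walk", 1, 1, 1, 1))   # walkz  (walks)
print(inflect("be",   1, 1, 1, 1))   # iz     (is, not be's)
print(inflect("walk", 0, 1, 1, 1))   # walk   (no suffix outside 3sph)
```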

  I have hand-wired this network, but the connections are specific to English and in a real brain would have to have been learned. Continuing our neural network fantasy for a while, try to imagine what this network might look like in a baby. Pretend that each of the pools of neurons is innately there. But wherever I have drawn an arrow from a single neuron in one pool to a single neuron in another, imagine a suite of arrows, from every neuron in one pool to every neuron in another. This corresponds to the child innately “expecting” there to be, say, suffixes for persons, numbers, tenses, and aspects, as well as possible irregular words for those combinations, but not knowing exactly which combinations, suffixes, or irregulars are found in the particular language. Learning them corresponds to strengthening some of the synapses at the arrowheads (the ones I happen to have drawn in) and letting the others stay invisible. This could work as follows. Imagine that when the infant hears a word with a z in its suffix, the z neuron in the suffix pool at the right edge of the diagram gets activated, and when the infant thinks of third person, singular number, present tense, and habitual aspect (parts of his construal of the event), those four neurons at the left edge get activated, too. If the activation spreads backwards as well as forwards, and if a synapse gets strengthened every time it is activated at the same time that its output neuron is already active, then all the synapses lining the paths between “3rd,” “singular,” “present,” “habitual” at one end, and “z” at the other end, get strengthened. Repeat the experience enough times, and the partly specified neonate network gets tuned into the adult one I have pictured.
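  The learning story in this paragraph is essentially a Hebbian rule: strengthen a synapse whenever its input and output neurons are active together. A minimal sketch, assuming an all-to-all initial wiring and an arbitrary learning rate (both my own choices):

```python
# Weak innate connections from every feature neuron to every suffix
# phoneme; experience strengthens only the co-activated ones.
features = ["3rd", "singular", "present", "habitual"]
phonemes = ["z", "d", "t"]
weights = {(f, p): 0.05 for f in features for p in phonemes}

def hear_suffix(active_features, heard_phoneme, rate=0.1):
    # Hebbian strengthening: a synapse grows each time its input
    # feature fires while its output phoneme neuron is already active.
    for f in active_features:
        weights[(f, heard_phoneme)] += rate

# The infant repeatedly hears a z-suffixed verb while construing the
# event as third-person singular present habitual:
for _ in range(10):
    hear_suffix(features, "z")

print(weights[("3rd", "z")])        # strengthened: 1.05
print(weights[("habitual", "d")])   # still weak:   0.05
```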

  Let’s zoom in even closer. What primal solderer laid down the pools of neurons and the innate potential connections among them? This is one of the hottest topics in contemporary neuroscience, and we are beginning to get the glimmerings of how embryonic brains get wired. Not the language areas of humans, of course, but the eyeballs of fruit flies and the thalamuses of ferrets and the visual cortexes of cats and monkeys. Neurons destined for particular cortical areas are born in specific areas along the walls of the ventricles, the fluid-filled cavities at the center of the cerebral hemispheres. They then creep outward toward the skull into their final resting place in the cortex along guy wires formed by the glial cells (the support cells that, together with neurons, constitute the bulk of the brain). The connections between neurons in different regions of the cortex are often laid down when the intended target area releases some chemical, and the axons growing every which way from the source area “sniff out” that chemical and follow the direction in which its concentration increases, like plant roots growing toward sources of moisture and fertilizer. The axons also sense the presence of specific molecules on the glial surfaces on which they creep, and can steer themselves like Hansel and Gretel following the trail of bread crumbs. Once the axons reach the general vicinity of their target, more precise synaptic connections can be formed because the growing axons and the target neurons bear certain molecules on their surfaces that match each other like a lock and key and adhere in place. These initial connections are often quite sloppy, though, with neurons exuberantly sending out axons that grow toward, and connect to, all kinds of inappropriate targets. The inappropriate ones die off, either because their targets fail to provide some chemical necessary for their survival, or because the connections they form are not used enough once the brain turns on in fetal development.

  Try to stay with me in this neuro-mythological quest: we are beginning to approach the “grammar genes.” The molecules that guide, connect, and preserve neurons are proteins. A protein is specified by a gene, and a gene is a sequence of bases in the DNA string found in a chromosome. A gene is turned on by “transcription factors” and other regulatory molecules—gadgets that latch on to a sequence of bases somewhere on a DNA molecule and unzip a neighboring stretch, allowing that gene to be transcribed into RNA, which is then translated into protein. Generally these regulatory factors are themselves proteins, so the process of building an organism is an intricate cascade of DNA making proteins, some of which interact with other DNA to make more proteins, and so on. Small differences in the timing or amount of some protein can have large effects on the organism being built.

  Thus a single gene rarely specifies some identifiable part of an organism. Instead, it specifies the release of some protein at specific times in development, an ingredient of an unfathomably complex recipe, usually having some effect in molding a suite of parts that are also affected by many other genes. Brain wiring in particular has a complex relationship to the genes that lay it down. A surface molecule may be used not in a single circuit but in many circuits, each guided by a specific combination of molecules. For example, if there are three proteins, X, Y, and Z, that can sit on a membrane, one axon might glue itself to a surface that has X and Y but not Z, and another might glue itself to a surface that has Y and Z but not X. Neuroscientists estimate that about thirty thousand genes, the majority of the human genome, are used to build the brain and nervous system.
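  The combinatorial addressing in the X, Y, Z example amounts to a boolean predicate over the proteins present on a membrane; a tiny sketch (protein names and adhesion rules are illustrative only):

```python
# Each axon's guidance rule is a boolean test on the set of surface
# proteins; a handful of proteins can thus label many distinct targets.
def axon_a_adheres(surface):
    return "X" in surface and "Y" in surface and "Z" not in surface

def axon_b_adheres(surface):
    return "Y" in surface and "Z" in surface and "X" not in surface

print(axon_a_adheres({"X", "Y"}))        # True
print(axon_b_adheres({"X", "Y", "Z"}))   # False
```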

  And it all begins with a single cell, the fertilized egg. It contains two copies of each chromosome, one from the mother, one from the father. Each parental chromosome was originally assembled in the parents’ gonads by randomly splicing together parts of the chromosomes of the two grandparents.

  We have arrived at a point at which we can define what grammar genes would be. The grammar genes would be stretches of DNA that code for proteins, or trigger the transcription of proteins, in certain times and places in the brain, that guide, attract, or glue neurons into networks that, in combination with the synaptic tuning that takes place during learning, are necessary to compute the solution to some grammatical problem (like choosing an affix or a word).

  So do grammar genes really exist, or is the whole idea just loopy? Can we expect the scenario in the 1990 editorial cartoon by Brian Duffy? A pig, standing upright, asks a farmer, “What’s for dinner? Not me, I hope.” The farmer says to his companion, “That’s the one that received the human gene implant.”

  For any grammar gene that exists in every human being, there is currently no way to verify its existence directly. As in many cases in biology, genes are easiest to identify when they correlate with some difference between individuals, often a difference implicated in some pathology.

  We certainly know that there is something in the sperm and egg that affects the language abilities of the child that grows out of their union. Stuttering, dyslexia (a difficulty in reading that is often related to a difficulty in mentally snipping syllables into their phonemes), and Specific Language Impairment (SLI) all run in families. This does not prove that they are genetic (recipes and wealth also run in families), but these three syndromes probably are. In each case there is no plausible environmental agent that could act on afflicted family members while sparing the normal ones. And the syndromes are far more likely to affect both members of a pair of identical twins, who share an environment and all their DNA, than both members of a pair of fraternal twins, who share an environment and only half of their DNA. For example, identical four-year-old twins tend to mispronounce the same words more often than fraternal twins, and if a child has Specific Language Impairment, there is an eighty percent chance that an identical twin will have it too, but only a thirty-five percent chance that a fraternal twin will have it. It would be interesting to see whether adopted children resemble their biological family members, who share their DNA but not their environments. I am unaware of any adoption study that tests for SLI or dyslexia, but one study has found that a measure of early language ability in the first year of life (a measure that combines vocabulary, vocal imitation, word combinations, jabbering, and word comprehension) was correlated with the general cognitive ability and memory of the birth mother, but not of the adoptive mother or father.

 
