Adam's Tongue: How Humans Made Language, How Language Made Humans


by Bickerton, Derek


  The idealization of instantaneity is a legitimate idealization. Science is built on such.

  Unfortunately, Chomsky applies this same idealization of instantaneity to the acquisition of language by the species, and that’s a very different kettle of fish.

  In a child’s acquisition, the faculty of language is already there, has been there for tens of thousands of years, waiting for kids to rediscover it, or perhaps merely switch it on. In the species’ acquisition, there was a time when there was no faculty at all, zero, zilch, a time when that faculty had to be built from the ground up. An idealization that works well in describing a state does not necessarily work at all in describing a process. Chomsky’s mistake is to treat a state and a process as if they were the same thing. Or it might be more accurate to say that he tries to turn a process into a state. That would be natural enough. States he understands; he’s been thinking about them all his life. Processes, that’s another matter.

  You’d almost think Chomsky regarded the evolution of language as a state rather than a process. He treats the faculty of language as if it were already there, in the species. After all, in both cases, child and species, we know that eventually “unbounded Merge must appear”—it did appear, didn’t it, in both cases?—so in neither case is there any point in looking for intermediate stages between the absence of Merge and its presence.

  After all, all it took for Merge to emerge was “some rewiring of the brain.” And there couldn’t have been anything you could call a protolanguage, because until Merge emerged, there was no way to put words together, and once it had emerged, you were already at language and there was no room for any kind of protolanguage.

  HOW MANY WAYS TO CONNECT STUFF?

  I’d realized from Chomsky’s talk that I’d been right about the Hauser-Chomsky-Fitch model of language evolution not allowing for any kind of protolanguage. I hadn’t realized, until I began to correspond with him by e-mail on this issue, that he didn’t believe there could be a protolanguage.

  In the course of this correspondence, I was therefore startled to encounter the following sentence: “It is a logical truism that protolanguage either involves a recursive operation or is finite.”

  “Logical truism”? I had the advantage of coming to language evolution from the study of pidgins and creoles, and the most certain thing I’d derived from that study was the fact that pidgin and creole speakers put words together in different ways. Creole speakers put words together the way everyone else who speaks a full human language puts words together, that is, hierarchically, in a treelike structure—schematically, A + B → [A B], [A B] + C → [[A B] C], and so on. That, as I understand it, is the process Chomsky calls Merge. Pidgin speakers, on the other hand, put words together like beads on a string, A + B + C, etc., so that, in contrast with Merge, the relationship between A and B is no different from the relationship between B and C. So I promptly wrote back, “Protolanguage consists of A + B + C . . . i.e. there is no Merge.”
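
  Purely as an illustration (my sketch, not Bickerton’s or Chomsky’s formalism), the two ways of putting words together can be written out in a few lines of Python: Merge builds nested, treelike pairs in which B is grouped with A before C is added, while beads-on-a-string simply appends, so no word is grouped any more closely with one neighbor than with another.

```python
# A minimal, illustrative sketch (not from the book): hierarchical Merge
# versus flat beads-on-a-string concatenation.

def merge(x, y):
    """Combine two units into one bracketed unit: A + B -> [A B]."""
    return (x, y)

def add_bead(sequence, word):
    """Protolanguage-style stringing: just add another bead to the string."""
    return sequence + [word]

# Merge: A + B -> [A B], then [A B] + C -> [[A B] C]
merged = merge(merge("A", "B"), "C")
print(merged)   # (('A', 'B'), 'C'): B is grouped with A, not with C

# Beads-on-a-string: A + B + C, with no internal grouping at all
beads = []
for word in ["A", "B", "C"]:
    beads = add_bead(beads, word)
print(beads)    # ['A', 'B', 'C']: every word stands in the same flat relation
```

  Note that nothing stops the flat version from being extended indefinitely, a point that matters below when Chomsky’s “logical truism” is weighed.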

  “That’s commonly believed,” Chomsky equally promptly answered, “but it’s an error. A sequence a, b, c . . . that goes on indefinitely is formed by Merge: a, {a, b}, {{a, b} c} etc. . . . If we complicate the operation Merge by adding the principle of associativity, then we suppress {,} and look at it as a, b, c . . .”

  Principle of associativity? What had that got to do with language? It’s a principle in logic and mathematics, and all it means is that moving or removing brackets in logical formulae doesn’t affect their truth value, and moving or removing brackets in additions doesn’t affect the sum—(1 + 3) + 2 adds up to 6, just the same as 1 + (3 + 2) or even 1 + 3 + 2. You see, the brackets you sometimes find in logical formulae or sums of addition aren’t doing anything serious in the first place. They can be rearranged or dispensed with precisely because they don’t make any changes in the relationships between things; there is no meaningful sense in which 3, in (1 + 3) + 2, is nearer or more closely connected to 1 than it is to 2.

  But that doesn’t apply in any shape or form in language, otherwise an [English [language teacher]] would mean the same as an [[English language] teacher], and it doesn’t: the first means a teacher of languages who happens to be English, and the second, someone of any nationality who teaches the English language. Change the brackets here, you change the meaning; remove them, you just make the phrase ambiguous. Or take a more famous example: [old [men and women]] versus [[old men] and women]. The differences between such pairs can be spelled out by intonation features: stress on “English” in [English [language teacher]] and on “language” in [[English language] teacher], for instance.
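
  To make the contrast concrete, here is a small illustrative check (my example, not the book’s): rebracketing an addition leaves the sum untouched, because addition is associative, but rebracketing the phrase structures above produces genuinely different objects, which is why the two readings of “English language teacher” mean different things.

```python
# Arithmetic addition is associative: moving or removing brackets changes nothing.
assert (1 + 3) + 2 == 1 + (3 + 2) == 1 + 3 + 2 == 6

# Linguistic bracketing is not: the grouping itself carries the meaning.
reading_1 = ("English", ("language", "teacher"))   # [English [language teacher]]
reading_2 = (("English", "language"), "teacher")   # [[English language] teacher]
print(reading_1 == reading_2)   # False: different structures, different readings
```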

  But in protolanguage, for example in an early-stage pidgin, there are no structural relationships among words—only semantic ones, so there’s no equivalent way to disambiguate stuff. Moreover, you don’t have to go as far as pidgin or protolanguage to find beads-on-a-string joining things together. Merge operates only up to the level of the sentence. Phrases have to be properly merged with phrases, clauses with clauses, but once you get up to sentence level, beads-on-a-string takes over. Paragraphs, pages, chapters, books—there’s no limit to the number of sentences you can string together. A finite process? No way.

  Grammatical relations, relations created via the Merge process, are found only within sentences. There are no grammatical relations among sentences. There are no agreement phenomena that link one sentence with another, no sentence serves as subject or object of another, no sentences are Agents or Themes or Goals of other sentences, no noun in one sentence can bind a pronoun in another, no adjective in one sentence can modify a noun in another. Sentences are linked only in terms of discourse coherence, which in turn is determined by semantic and pragmatic, not grammatical, factors. Take a sequence like:

  “John is looking after his little sister. The price of copper fell 17 percent overnight. The national vegetable of Wales is the leek.”

  As a paragraph, this is nonsense. But all the sentences that compose it are fully grammatical and, within themselves, both comprehensible and semantically appropriate. In isolation they’re fine; together they’re nonsensical, because they bear no relation in terms of topic. But now look at this sequence:

  “The price of copper fell 17 percent overnight. A sudden drop had been predicted by many analysts. The recent cease-fire in Central Africa has already resulted in an increased supply.”

  The exact same conditions apply; the sentences are merely strung together, and, apart from “the,” no word appears in more than one of the sentences, so there are no lexical links. On the other hand, to anyone familiar with politics and economics, the paragraph makes perfect sense.

  So both processes, Merge and beads-on-a-string, work alongside each other in human language. Since beads-on-a-string is simpler than Merge, it’s probably older. If it’s older, it’s only reasonable to suppose that, at the dawn of language, Merge hadn’t yet emerged and beads-on-a-string was all our remote ancestors had. A system without Merge doesn’t therefore have to be finite—you can, in principle at least, go on adding beads to a string for as long as you have beads. So Chomsky’s “logical truism” is simply false.

  EVOLUTION IN A DRUM

  So let’s put the two evolutionary models side by side:

  MINE

  TIME 1: Animals have concepts that won’t merge.
  TIME 2: Protohumans start talking.
  TIME 3: Talking produces typically human concepts.
  TIME 4: Merge appears and starts merging typically human concepts.
  TIME 5: The brain maybe gets rewired (plausible but not certain).
  TIME 6: Capacities for complex thought, planning, etc. develop.

  CHOMSKY’S

  TIME 1: Animals have concepts that won’t merge.
  TIME 2: Typically human concepts, which will merge, appear.
  TIME 3: The brain gets rewired.
  TIME 4: Merge appears and starts merging typically human concepts.
  TIME 5: Capacities for complex thought, planning, etc. develop.
  TIME 6: People start talking.

  The stages do not differ substantially in their content, but the ordering of the stages is very different. And even this is not the most important difference. The most important difference is that in the first model, one stage drives the next. In that model, once our ancestors started talking, their iconic or indexical signals gradually formed into true symbols through variations in their manner of use (this theme will be developed in the chapters that follow). Merge, a process that does not have to be specially derived, since it arises through the way the brain handles any data, appears as soon as there are units semantically capable of being merged. Use of Merge in both language and thought selects for any development in the brain, whether mutational or epigenetic, that will expedite or automate language and thought processes. The end result of all these processes is a vastly improved thinking machine, not to mention a fast, subtle, and highly flexible language.

  One stage driving the next, each new development changing the selective pressure for the next—that’s how evolution works. In particular (and more rapidly, because the processes are more focused), that’s how niche construction works.

  Now in Chomsky’s model, the most crucial stages have no motivation whatsoever. Nothing drives them. Typically human concepts pop up out of nowhere. The brain gets rewired for no particular reason. People suddenly start talking, again for no particular reason (just because “there would be an advantage” to it!), and the menial details of how they started talking, how, with all those concepts whizzing around in their heads, they got to agree on how to label them, get swept under the rug.

  Chomsky’s version of evolution does not intersect anywhere with the realities of the world or the realities of evolution: it’s evolution in a drum, a totally abstract, self-encapsulated procedure. Yet it is implicit in a paper published in America’s flagship science journal, and coauthored by two biologists. Go figure.

  Note that far more is involved here than merely the way in which language evolved. At issue is the whole relationship between language and thought. If I’m to deliver on the second half of the subtitle of this book, I shall have to tackle that. And here again Chomsky and I find ourselves in diametrically opposed positions:

  Chomsky believes that human thinking came first and enabled language.

  I believe that language came first and enabled human thinking.

  One or the other has to be true. It isn’t even possible to take some wishy-washy, middle-of-the-road position and say, Well, it’s a little of both, they coevolved. Many people might like to take that route. “Coevolution” is a fashionable word nowadays. You only have to murmur it, and throw in “meme” and “self-organization” for good measure, and people will nod knowingly and be filled with respect for you. And of course, once the processes I’ve talked about were fully established, language and human thought most certainly did coevolve.

  But at the beginning, it’s a hen-and-egg, horse-and-carriage problem. One had to come first, and there’s a logical truism if ever I saw one. So next we’ll see which one did, and how it did, and why.

  10

  MAKING UP OUR MINDS

  Let’s look on the positive side. Chomsky’s position focuses attention on the brain. Granted, the statement that “the brain got rewired” positively bristles with “what?” and “how?” and “why?” questions, and will hardly serve us as a guide. And so far, we’ve been looking mostly at behavior. But brains and behavior are intimately linked, and we’ve reached a point from which we can’t go much further without taking brains into account.

  Regardless of whether language is primarily biological or primarily cultural, the brain has just one way of putting it all together.

  What do brains, the brains of all other species, and for a lot of the time ours too, actually do? According to Gary Marcus of New York University, the brain “takes information from the senses, analyzes that information, and translates it into commands that get sent back to the muscles.” And that’s all that brains were specifically built to do, because that is sufficient for life on earth. It’s enough in most cases to keep the brains’ owners fed and alive and able to pass on their genes to another generation. Brains weren’t made to think about the nature of the universe or the laws that govern it. Or even about our own personal affairs—unless we’re actually busy with them at the time.

  Brains don’t (normally) do what they don’t have to do, because brains are energetically expensive. Ours use 20 percent of our energy, though looking at some people you mightn’t think so. Purists have got down on folk for comparing brains to computers. All cultures, they sneer, have compared the brain to their own most modern technology—the Greeks to water mills, Victorians to telephone exchanges; it’s just a fad. But in fact the purpose of a brain is exactly that of a special kind of computer, an onboard computer, like you have in boats or cars or planes or space stations.

  The purpose of an onboard computer is to preserve the homeostasis of whatever it’s on board of. It does so by monitoring many conditions, both internal and external; anything from keeping internal temperatures within a narrow range to warning you you’re about to bump into something. But its range of behaviors does not include sending you messages it constructed itself, for its own ends. It doesn’t have its own ends. It has what’s been programmed into it.

  Human engineers program onboard computers, but evolution programmed the brain. It programmed the brain for homeostasis, to ensure, as far as it could, that conditions in and around the organism that housed it remained as favorable as possible to that organism’s well-being.

  As Marcus suggests, the brain does its job in a series of steps, along a one-way trajectory:

  • Receive information from senses.

  • Send it to be analyzed for identification.

  • Choose a course of action based on the analysis.

  • Send an order to execute that action.

  Thus an odor is detected; the odor is compared with other odors and their possible causes; the odor is determined to be that of a predator, but taking its strength plus prevailing wind conditions into account, most likely a predator at some distance; consequently two messages are sent, one to the muscles—“Freeze!”—one to the attention: “Remain on high alert until further notice.”

  Note that the action is unidirectional, and without side trips. True, feedback from the animal’s own actions following the order may influence subsequent developments, but can only do so by reentering the process at the beginning again, making a closed loop.
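
  As a rough illustration of that one-way trajectory (my sketch, with invented odor categories and thresholds, not anything taken from Marcus), the online sequence can be written as a pipeline in which each step feeds the next and feedback can only reenter at the sensing stage:

```python
# A toy sketch of "online thinking": sense -> analyze -> decide -> act,
# each step triggering the next. The categories and thresholds are invented.

def sense():
    """Receive information from the senses."""
    return {"odor": "big-cat", "strength": 0.3, "wind": "toward_me"}

def analyze(signal):
    """Identify what the information means."""
    is_predator = signal["odor"] in {"big-cat", "wolf"}
    nearby = signal["strength"] > 0.7
    return {"predator": is_predator, "nearby": nearby}

def decide(analysis):
    """Choose a course of action based on the analysis."""
    if analysis["predator"] and not analysis["nearby"]:
        return ["freeze", "stay_on_high_alert"]
    if analysis["predator"]:
        return ["flee"]
    return ["do_nothing"]

def act(orders):
    """Send the orders out for execution."""
    for order in orders:
        print("executing:", order)

# One pass through the pipeline; in a real animal the consequences of acting
# would be picked up again by sense(), closing the loop at the beginning.
act(decide(analyze(sense())))
```

  The only point of the sketch is the shape of the process: at no step can the system detach a concept such as “predator” and combine it with another concept for its own sake, which is exactly the contrast with offline thinking drawn below.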

  Now see what happens when you think even the simplest of thoughts, say, “Roses are red.”

  • Think of “roses.”

  • Think of “red.”

  • Connect the two.

  You may, or you may not, have a visual image of a red rose. If you do, you will say, “I think in images.” If you don’t, you will say, “I think in words.” In both cases that’s like the sun crossing the sky—not what’s really happening at all. There are no images in the brain. There are no words in the brain. All that’s there are neurons and their connections and differential rates and strengths of electrochemical impulses. These provide a subjective sense of words and images. The metamorphosis may seem magical but it’s no more magical than the “changing colors” of mountains at sunset, likewise produced by processes in your brain.

  If we define “thinking” as “any kind of mental computation”—surely the most general and theory-neutral way of defining it—then both of the series of operations I’ve just described (processing information from the environment and thinking of something such as a rose) can be legitimately called “thinking.” But beyond the fact that they’re both brain-internal, brain-directed processes, there’s hardly anything in common between the two.

  It seems reasonable to suppose that the first kind of thinking, the brain’s “business as usual,” you might say, may best be characterized as “online thinking”—thinking that takes place as a consequence of ways in which the thinking organism is, right at that moment, interacting with objects and events in the outside world. If that is so, then the best way to characterize the “Roses are red” kind of thinking is “offline thinking”—thinking that has no necessary or direct connection with what’s happening outside, but that is generated and takes place wholly within the brain. It is worth comparing the two in a little more detail.

  The steps in offline thinking are quite different from the steps in online thinking, they do not include any part of those steps, and they work in quite different ways. The online steps are triggered by events outside the organism. The offline steps could be triggered by an outside event but they do not need to be, and most often probably are not. In the online sequence, each step triggers the next. If it didn’t, the owner of the brain wouldn’t last long—if the odor wasn’t sent for analysis, if the analysis didn’t trigger action, or if the order, when issued, was not obeyed, you’d soon be cat meat. In the offline sequence, no step is necessarily linked to the next; you may think of “roses” first and then “red,” or the other way around, or both simultaneously; it makes no difference. In the online sequence, the last step is usually an instruction to the body, even if that instruction is simply “Do nothing.” In the offline sequence, there need not even be a last step. You can think of redness, or roses, or both, without necessarily even putting them together, let alone doing something about it.

 
