The Science of Language


by Noam Chomsky


  Now, there happen to be very simple ways to get arithmetic from Merge. Take the concept Merge, which simply says, take two things, and construct a thing that is the set of the two things; that's its simplest form. Suppose you restrict it, and take only one thing, call it “zero,” and you merge it; you get the set containing zero. You do it again, and you get the set containing the set containing zero; that's the successor function. The details are somewhat more complex, but it is fairly straightforward. In fact, there are a couple of other ways in which you can get it; but it's just a trivial complication of Merge, which restricts it and says, when you put everything in just this way, it does give you arithmetic. When you've got the successor function, the rest comes.[C]
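  [The construction can be made concrete in a few lines of code. The following is only an illustrative sketch: the particular set encoding (zero as the empty set, successor as singleton formation, in the Zermelo style) and all the names in it are the editor's choices, not anything fixed by the text.]

```python
def merge(x, y):
    # Merge in its simplest form: take two things and construct
    # the thing that is the set of the two things.
    return frozenset({x, y})

def successor(n):
    # The restricted, one-thing case: merging n "with itself"
    # just yields {n}, the set containing n.
    return merge(n, n)

zero = frozenset()       # call the empty set "zero"
one = successor(zero)    # { {} }      -- the set containing zero
two = successor(one)     # { { {} } }  -- and so on

def as_int(n):
    # Recover the ordinary numeral by counting nesting depth.
    depth = 0
    while n:             # the empty set is falsy
        (n,) = n         # unwrap the singleton
        depth += 1
    return depth

assert as_int(two) == 2  # once you have the successor function, the rest comes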

  There are arguments against that. Brian Butterworth has a book (2000) about it in which he gives many arguments against thinking that the language and arithmetical capacities are related. It's not very clear what the evidence means. The evidence is, in part, dissociations. You can get neural dysfunction in which you lose one capacity and keep the other. However, that isn't going to tell you anything, because it doesn't distinguish competence from performance. It may be that those neural deficiencies have to do with using the capacity. So to take an analogy, there are dissociations in reading a language. But nobody thinks that there is a special reading part of the brain. It's just that there is a way of using language in reading, and that way can be damaged; but the language is still there. And it could be the same thing for arithmetic. The same is true of the other kinds of dissociations that are talked about. It could be true that there are all sorts of ways of explaining them. As a matter of fact, it could turn out that, whatever language is, it is just distributed in different parts of the brain. So maybe it could be copied, so that you could copy one part, and keep it, and get rid of the rest. There are so many possibilities that the evidence just doesn't show very much. So what we're left with is speculation, but when you don't have enough evidence, you pick the simplest explanation. And the simplest explanation that happens to conform to all the evidence we have is that it's just an offshoot of language derived by imposing a specific restriction on Merge.

  In fact, there are other specific restrictions, which are much more modern. So take what are called “formal languages,” say . . . arithmetic, or programming systems, or whatever. They're kind of like natural language, but they're so recent and so self-conscious that we know that they're not really much like the biological object, human language.

  Notice how they're not. Take Merge [the basic computational principle of all natural languages]. Just as a matter of pure logic, if you take two things, call them X and Y, and you make the set of X and Y ({X, Y}), there are two possibilities. One is that X is distinct from Y; the other is that they're not distinct. If everything is constructed by Merge, the only way for X to be not distinct from Y is for one to be inside the other. So let's say that X is inside Y. Well, if X is inside Y and you merge it, you get the set {X, Y}: if Y = [. . . X . . .], then Internal Merge(X, Y) = {X, Y} = {X, [. . . X . . .]}. That's a transformation. So in fact, the two kinds of Merge that are possible are taking two things and putting them together, or taking one thing and a piece of it and sticking that piece at the edge. That's the displacement [or movement] property of natural language, which is found all over the place. I had always thought [until recently] that displacement was a kind of strange imperfection of language, compared with Merge or concatenate; but that is just a mistake. As internal Merge, it just comes automatically, unless you block it. That's why language uses that device for all sorts of things; it comes ‘for free.’ Assuming so, you can then ask the question, “How are these two kinds of Merge employed?” And here you look at the semantic interface; that's the natural one. There are huge differences. External Merge is used, basically, to give you argument structure. Internal Merge is basically used to give you discourse-related information, like focus, topic, new information, all that kind of stuff that relates to the discourse situation.[C] Well, that's not perfect, but it's close enough so that it's probably true; and if we could figure it out, or understand it well enough, we would find that it is perfect.
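  [A small sketch may make the distinction concrete. The toy representation below is the editor's illustration, not anything from the text: strings stand in for lexical items, frozensets for the objects Merge builds, and the "what saw what" example is only a schematic stand-in for wh-displacement.]

```python
def merge(x, y):
    # External Merge: X and Y are distinct objects.
    return frozenset({x, y})

def is_term_of(x, y):
    # X is "inside" Y if X is a member of Y, or a term of a member of Y.
    if not isinstance(y, frozenset):
        return False
    return x in y or any(is_term_of(x, m) for m in y)

def internal_merge(x, y):
    # Internal Merge: X already occurs inside Y; merging it again
    # places a copy of X at the edge -- displacement, 'for free.'
    assert is_term_of(x, y), "Internal Merge needs X inside Y"
    return frozenset({x, y})

vp = merge("saw", "what")              # external Merge: argument structure
question = internal_merge("what", vp)  # {what, {saw, what}}
# "what" now appears both at the edge and in its original position --
# the displacement property.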

  Suppose [now] that you're inventing a formal language. It has no discourse-related properties. So you just use external Merge. You put a constraint on systems – in effect, not to use internal Merge. And then you get, effectively, just argument structure. Now, it's interesting that if these systems give us scopal properties, they do it in particular ways, which happen to be rather similar to natural language. So if you're teaching, say, quantificational logic to undergraduates, the easiest way to do it is to use standard quantification theory – you put the variables on the outside and use parentheses, and so on and so forth. Well, we know perfectly well that there are other ways of doing it – logic without variables, as has been known since Curry (1930; Curry & Feys 1958).

  And it has all the right properties. But it's extremely hard to teach. You can learn it, after you've learned it in the ordinary notation. I don't think anyone's tried – and I think it would be extremely hard – to do it the other way, to teach the Curry system and then end up showing that you could also do it in this other way. But why? They're logically equivalent, after all. I suspect that the reason is that the standard way has many of the properties of natural language. In natural language, you do use edge properties for scope; and you do it through internal Merge. Formal languages don't have internal Merge; but they have got to have something that is going to be interpreted as scope. So you use the same device you do in natural language: you put it on the outside with the restricted variables, and so on.
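  [The variable-free idea itself is easy to state in code, even if it is hard to teach. Here is a minimal sketch of Curry-style combinators, in Python rather than Curry's own notation; it shows only the core point, that variable binding can be compiled away into S and K, not a full quantification theory.]

```python
# Curry's point: with just the combinators S and K, every bound
# variable can be eliminated.
S = lambda f: lambda g: lambda x: f(x)(g(x))
K = lambda c: lambda x: c

# Identity, derived instead of written with a variable:
#   S K K x  =  K x (K x)  =  x
I = S(K)(K)
assert I(42) == 42

# Bracket abstraction: the variable-ful term  lambda x: f(g(x))
# becomes the variable-free term  S (K f) g.
f = lambda n: n + 1
g = lambda n: n * 2
composed = S(K(f))(g)
assert composed(10) == f(g(10)) == 21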

  These are things that just flow from having a system with Merge inside you; and probably the same is true of music, and lots of other things. We got this capacity that came along and gives us extraordinary options for planning, interpretation and thought, and so on and so forth. And it just starts feeding into everything else. You get this massive cultural revolution, which is quite striking, probably about sixty or seventy thousand years ago. Everywhere where humans are, it's essentially the same. Now, maybe in Australia they don't have arithmetic; Warlpiri, for example, does not. But they have intricate kinship systems which, as Ken Hale pointed out, have a lot of the properties of mathematical systems. Merge just seems to be in the mind, working on interesting formal problems: you don't have arithmetic, so you have complicated kinship systems.

  JM: That suggests that at least the possibility of constructing natural sciences – that that came, too, with Merge.

  NC: It did, it starts right away. Right at this period you start finding it – and here we have fossil evidence and archaeological evidence of recording of natural events, such as the lunar cycles, and things like that. People begin to notice what is going on in the world and trying to interpret what is going on. And then it enters into ceremonies, and the like. It went on that way for a long time.

  What we call science [that is, natural science with explicit, formal theories and the assumption that what they describe should be taken seriously, or thought of as ‘real’] is extremely recent, and very narrow. Galileo had a hell of a time trying to convince his funders – the aristocrats – that there was any point in studying something like a ball rolling down a frictionless inclined plane. “Who cares about that? There is all sorts of interesting stuff going on in the world. What do you have to say about flowers growing? That would be interesting; tell me about that.” Galileo the scientist had nothing to say about flowers growing. Instead, he had to try to convince his funders that there was some point in studying an experiment that he couldn't even carry out – half of the experiments that Galileo described were thought experiments, and he described them as if he had carried them out, but it was later shown that he couldn't . . . The idea of not looking at the world in all its complexity, of trying to narrow it down to some artificial piece of the world that you could actually investigate in depth and maybe even learn some principles about that would help you understand other things [what we might think of as pure science, science that aims at basic structures, without regard to applications] – that's a huge step in the sciences and, in fact, it was only very recently taken. Galileo convinced some people that there were these laws that you just had to memorize. But in his time they were still used as calculating devices; they provided ways of building things, and the like. It really wasn't until the twentieth century that theoretical physics became recognized as a legitimate domain in itself. For example, Boltzmann tried all his life to convince people to take atoms and molecules seriously, not just think of them as calculating devices; and he didn't succeed. Even great scientists, such as, say, Poincaré – one of the twentieth century's greatest scientists – just laughed at it. [Those who laughed] were very much under Machian [Ernst Mach's] influence: if you can't see it, touch it . . . [you can't take it seriously]; so you just have a way of calculating. Boltzmann actually committed suicide – in part, apparently, because of his inability to get anyone to take him seriously. By a horrible irony, he did it in 1906, the year after Einstein's Brownian motion paper came out, just as everyone was beginning to take the idea seriously. And it goes on.

  I've been interested in the history of chemistry. Into the 1920s, when I was born – so it isn't that far back – leading scientists, including Nobel prizewinning chemists, would have just ridiculed the idea of taking any of this seriously. They thought of [atoms and other such ‘devices’] as ways of calculating the results of experiments. Atoms couldn't be taken seriously, the argument went, because they had no physical explanation – which, at the time, they didn't. Well, it turned out that the physics of the time was seriously inadequate; physics had to be radically revised before it could be unified, or merged, with an essentially unchanged chemistry.

  But even well after that, even beyond Pauling, chemistry is still for many mostly a descriptive subject. Take a look at a graduate text in theoretical chemistry. It doesn't really try to present the field as a unified subject; you get different kinds of theoretical models for different kinds of situations. If you look at the articles in the technical journals, such as, say, Science or Nature, most of them are pretty descriptive; they pick around the edges of a topic, or something like that. And if you get outside the hard-core natural sciences, the idea that you should actually construct artificial situations in an effort to understand the world – well, that is considered either exotic or crazy. Take linguistics. If you want to get a grant, what you say is “I want to do corpus linguistics” – collect a huge mass of data and throw a computer at it, and maybe something will happen. That was given up in the hard sciences centuries ago. Galileo had no doubt about the need for focus and idealization when constructing a theory.[C]

  Further, [in] talking about the capacity to do science [in our very recently practiced form, you have to keep in mind that] it's not just very recent, it's very limited. Physicists, for example, don't go commit suicide over the fact that they can't find maybe 90 percent of what they think the universe is composed of [dark matter and dark energy]. In . . . [a recent] issue of Science, they report the failure of the most sophisticated technology yet developed, which they hoped would find [some of] the particles they think constitute dark matter. That's, say, 90 percent of the universe that they failed to find; so we're still in the dark about 90 percent of the matter in the universe. Well, that's regarded as a scientific problem in physics, not as the end of the field. In linguistics, if you were studying Warlpiri or something, and you can't understand 50 percent of the data, it's taken to mean that you don't know what you're talking about.

  How can you understand a very complex object? If you can understand some piece of it, it's amazing. And it's the same pretty much across the board. The one animal communication system that seems to have the kind of complexity or intricacy where you might think you could learn something about it from [what we know about] natural languages is that of bees. They have an extremely intricate communication system and, as you obviously know, there is no evolutionary connection to human beings. But it's interesting to look at bee signs. It's very confusing. It turns out there are hundreds of species of bees – honey bees, stingless bees, etc. The communication systems are scattered among them – some of them have them, some don't; some have different amounts; some use displays, some use flapping . . . But all the species seem to make out about as well. So it's kind of hard to see what the selectional advantage [of the bee communication system] is. And there's almost nothing known about its fundamental nature. The evolution of it is complicated; it's barely studied – there are [only] a few papers. Even the basic neurophysiology of it is extremely obscure. I was reading some of the most recent reviews of bee science. There are very good descriptive studies – all sorts of crazy things are reported. But you can't really work out the basic neurophysiology, and the evolution is almost beyond investigation, even though it's a perfect subject – hundreds of species, short gestation period, you can do any experiment you like, and so on and so forth. On the other hand, if you compare the literature on the evolution of bee communication to the literature on the evolution of human language, it's ridiculous. On the evolution of human language there's a library; on the evolution of bee communication, there are a few scattered textbooks and technical papers. And it's a far easier topic. The evolution of human language has got to be one of the hardest topics to study. Yet somehow we feel that we have got to understand it, or we can't go further. It's a highly irrational approach to inquiry.[C]

  2 On a formal theory of language and its accommodation to biology; the distinctive nature of human concepts

  JM: Let me pursue some of these points you have been making by asking you a different question. You, in your work in the 1950s, effectively made the study of language into a mathematical, formal science – not mathematical, of course, in the way Markov systems are mathematical, but clearly a formal science that has made very considerable progress. Some of the marks of that progress have been – for the last few years, for example – successive elimination of all sorts of artifacts of earlier theories, such as deep structure, surface structure, and the like. Further, recent theories have shown a remarkable ability to solve problems of both descriptive and explanatory adequacy. There has been a considerable increase in simplicity. And there also seems to be some progress toward biology – not necessarily biology as typically understood by philosophers and by many others, as a selectional evolutionary story about the gradual introduction of a complex structure, but biology as understood by people like Stuart Kauffman (1993) and D'Arcy Thompson (1917/1942/1992). I wonder if you would comment on the extent to which that kind of mathematical approach has progressed.[C]

  NC: Ever since this business began in the early fifties – two or three students, Eric Lenneberg, me, Morris Halle, apparently nobody else – the topic we were interested in was, how could you work this into biology? The idea was so exotic, no one else talked about it. Part of the reason was that ethology was just . . .

  JM: Excuse me; was that [putting the theory of language into biology] a motivation from the beginning?

  NC: Absolutely: we were starting to read ethology, Lorenz, Tinbergen, comparative psychology; that stuff was just becoming known in the United States. The US tradition was strictly descriptive behaviorism. The work of German and Dutch comparative zoologists was just becoming available; actually, a lot of it was in German. We were interested, and it looked like this was where linguistics ought to go. The idea was so exotic that practically no one talked about it, except the few of us. But it was the beginning of Eric Lenneberg's work; that's really where all this started.

  The problem was that as soon as you tried to look at language carefully, you'd see that practically nothing was known. You have to remember that it was assumed by most linguists at the time that pretty much everything in the field was known. A common topic when linguistics graduate students talked to one another was: what are we going to do when there's a phonemic analysis for every language? This is obviously a terminating process. You could maybe do a morphological analysis, but that is terminating too. And it was also assumed that languages are so varied that you're never going to find anything general. In fact, one of the few departures from that was found in Prague-style distinctive features: the distinctive features might be universal, so perhaps much more is universal. If language were biologically based, it would have to be. But as soon as we began to try to formulate the universal rules that were presupposed by such a view, it instantly became obvious that we didn't know anything. As soon as we tried to give the first definitions of words – what does a word mean? etc. – it didn't take us more than five minutes of introspection to realize that the Oxford English Dictionary wasn't telling us anything. So it became immediately obvious that we were starting from zero. The first big question was that of finding out something about what was going on. And that sort of put things backwards from the question of how we were going to answer the biological questions.

  Now, the fundamental biological question is: what are the properties of this language system that are specific to it? How is it different from walking, say – what specific properties make a system a linguistic one? But you can't answer that question until you know something about what the system is. Then – with attempts to say what the system is – come the descriptive and explanatory adequacy tensions. The descriptive pressure – the attempt to provide a description of all possible natural languages – made it [the system] look very complex and varied; but the obvious fact about acquisition is that it has all got to be basically the same. So we were caught in that tension.

 
