Cartesian Linguistics
Rationalist–romantics (RRs) and empiricists differ a great deal in their views of the mind and – not surprisingly – in their views of how the mind should be studied. They differ both in how they conceive of the mind's coming to have the ‘shape’ and content it does, and in how they conceive of the role of the world outside the head in shaping it and giving it content. Empiricists claim that we learn much of what we get – at least, when it comes to ‘higher’ concepts and cognitive processes. RRs disagree; for them, these are mostly innate.4 Comparing these views highlights the features of each, and allows us to ask which view, and which research strategy based on that view, has the best prospects of success.
To illustrate their differences, let us look at how each camp conceives of two kinds of mental entities and how they come to be in the mind – how they are acquired or learned. One class consists of ‘atomic’ concepts such as WATER, DRINK, COLD, and thousands of others that we use in various ways to carry out various cognitive tasks, such as describing, speculating, reminiscing, telling stories, etc. The other class of ‘entities’ consists of the rules or principles that govern how the mind puts the elemental concepts that words express together to assemble the complex concepts expressed by phrases and sentences. Phrases include “drink cold water,” and endless others; sentences include “Jane will only drink cold water” and endless others. Humans – or better, human minds – routinely put together complexes such as these.5 The RRs hold that the mind’s concepts and the ways of putting them together in language and thought are largely innately configured; they also hold, then, that the right way to study the mind is to construct theories of the various sorts of inner mental machinery that put concepts in place or ‘activate’ them, that configure them in forms the machinery allows or requires, and that do the same for the rules or principles governing how concepts are put together in the complex forms expressed by sentences. The RR theorist is a nativist (someone who maintains that both concepts and the ways of putting them together to make complexes such as those expressed by sentences are somehow innate, implicit in the mind). And because the RR researcher is a nativist who tries to say what concepts and combinatory mechanisms are and how they develop in a child’s automatic process of maturation – constructing theories of the innate mechanisms and their operations without trying to include any objects outside the head in the subject matters of those theories – RR theorists also adopt an internalist research strategy.
RRs (see in this regard especially CL’s discussions of von Humboldt and Herbert of Cherbury) point to what they see as a strong connection between nativism and the phenomena of everyday linguistic creativity. ‘Ordinary’ linguistic creativity along with its important consequences – the capacity to engage in fantasy, speculation, play, planning, thought unconnected to current circumstance, plus the capacity to construct ‘theories’ of the world, such as speculating who is going to win the next election or the next game of football – is readily available to everyone at an early age, RRs hold, only because hundreds of thousands of richly endowed linguistically expressed concepts and the means of putting them together are innate and thus readily available at an early age. Because they are, children’s minds readily provide innovative sentences, which the child can use in multiple ways. Anyone can observe mental creativity in young children – it is found in their use of often-novel sentences in understanding and acting in various ways. It is exhibited not just in speech, but in turning cardboard boxes into houses, in a child’s fantasies, in wondering about how something works, in children’s estimates of what their parents and other children intend, in their experimentation with various tools and toys, and so on. The issue is how young children can manage to be so creative at a young age – certainly by the time they are four or so, often before then. Since one must assume that with a child, as with anyone else, the conceptual tools one needs to classify and think, and the combinatory mechanisms that allow one to put concepts together in various kinds of arrangements, must be in place before complexes can be assembled, the only way to explain the early appearance of creativity is to assume the innateness of both concepts and combinatory principles. And it is only because these concepts and principles of assembly, and the ways to activate them with minimal experience, are built into children’s minds – presumably lodged in their genome and the ways it develops or grows – that we can quickly understand their creative efforts, and they ours. Innateness provides a basis for understanding one another, even at a young age. For innate concepts can be thought of as the meanings of words (lexical items, in technical terms); they constitute words’ ‘internal content’ (or perhaps ‘intrinsic content’).
As suggested in Part I, RRs also emphasize a connection between creativity and their decision to adopt internalism as a research policy for the scientific study of the mind. Consider what happens if one decides to construct a theory (a science now, not a guess about the outcome of a football match) of an interesting and important aspect of the use of language and concepts – using language to refer to things. At the very least, attempting this requires focusing not just on words and how they are assembled into phrases and sentences in a system in the head, but on relationships between these internal entities and things and classes of things in the outside world. Doing this expands the subject matter of one’s theory to include not just mental objects – concepts and such – but things and classes of things in the world, and perhaps their properties too. It also demands that the relations between what is inside the head and what outside be ‘natural’ and determinate, fixed perhaps by something like biological growth. That is a daunting and – if the creativity observations are taken into account – very likely impossible task. One will find no determinate head-world relations of the sort required to ‘fix’ the uses of sentences.6
Yet many contemporary philosophers – Putnam, Kripke, Burge, Fodor, etc. – believe that in order to make sense of how language is meaningful at all, and for its words to have meaning, one must assume a determinate connection between some nouns, at least, and things in the world – a single thing for a proper name, or a class of things for a general term. The relationship must be determinate, or involve very few specifiable options. Otherwise the tools of theory-construction fail. Proceeding on this assumption, the supposed determinate relationship is often called “reference,” although “denotation” and “signification” are also used. It is often claimed that nouns, or at least some of them, refer “rigidly,” to use Kripke’s colorful terminology. Ordinary linguistic creativity poses a serious problem for any attempt to construct a theory of meaning that requires determinate head-world relationships. If you hold that meaning depends on reference and you want a theory of meaning for a language, you had better hope that for each noun there is a determinate referent. Or if, like Gottlob Frege (1892), you think that the referential relationship to things is more complicated – that a word is first linked to a sense (for him, an abstract object), and a sense in turn fixes a reference – you had better hope that for every noun there is a single sense, and for each such sense, a single referent. Otherwise, your theory will have to allow for all of the complex and highly variable factors that figure in a person’s use of language for various purposes, and in the efforts people make to understand what another person’s linguistic actions mean – what they intend by them, including what they intend/mean to refer to, if anything. You will have to take into account changes in speaker intentions, in the kind of job a word is being asked to do (tell someone how to get to Chicago, criticize a work of art...), in the circumstances of speech, in irony as opposed to flat-footed description, in fiction as opposed to fact, and so on. To specify what the context of discussion is, you will have to say what count as the “subjects which form the immediate focus of interest” (to quote the philosopher Peter Strawson);7 and there is little hope that anyone can say what these are in a way that allows for any kind of population-wide uniformity, unless possibly – the limit case, and hardly relevant for the conception of language, meaning, and reference the philosophers under consideration have in mind – the population consists of the speaker alone, at a time, trying to accomplish a single, well-understood task. More generally, there is no guarantee that anything, even when dealing with flat-footed description and small populations, can be fixed determinately. To fix reference would be to fix language use. Unfortunately for your project of constructing a theory based on hopes like these, as Descartes pointed out long ago and as Chomsky points out in CL and elsewhere (New Horizons in the Study of Language and Mind – Chomsky 2000 – among others), people just do not care about what your theoretical efforts demand – they do not want, and do not produce, fixed uses, even of nouns.8 And yet, to a degree that seems to be adequate for solving everyday practical problems, at least, people still manage to understand theory-resisting free uses of expressions. Resisting the needs of those who would like to have regularity and even determination, people seem to benefit from their capacity to be creative. They enjoy using words in all sorts of ways, all the while being adequately (for the task(s) at hand) understandable and speaking appropriately. Apparently, using a word – noun or other – in the same way all the time is as tedious as putting a widget in a slot on an assembly line over and over. In sum, in no case does anything determine how they or you must use a word or understand it when used by another, for whatever purpose, on whichever occasion. The use of language is a form of human action, and it is, on the face of it, a particularly innovative and uncaused, yet coherent and appropriate, form of free action.
Nevertheless, someone drawn to the kinds of cases Kripke and others focus on to motivate taking seriously the idea that proper names are “rigid designators” might suggest that nothing else explains how people with widely differing views of, say, Dick Cheney can still use “Cheney” and expect others to know whom they intend. Given different understandings of Cheney, one cannot rely on what those others happen to know or assume about Cheney. So – it is argued – there must be some referential relationship that does not rely at all on people’s knowledge or understanding of Cheney, or of any other object or event to which one wants to refer. But this attempt at convincing an RR theorist is bogus. Nothing outside of the context of speech or the author-controlled context of writing9 antecedently fixes a reference – antecedently, that is, to someone’s using a term to refer, and someone else interpreting what the speaker says, using whatever resources s/he has. Of course, the process of determining what another person “has in mind” can fail, although our resources often prove sufficiently reliable that it does not matter for the purposes of discourse. These resources include shared biologies, as well as environments, communities, interests, choices in lexical pairings of sounds and the semantic features of the hearer’s lexicon, and the like. These usually suffice. They must: words do not refer, people do – and those who would understand the speaker must, as best they can, put themselves into the position of the speaker by using whatever resources they have to figure out what the speaker has in mind.
Two difficulties confront those who want to claim that there ‘is’ a referential relationship between natural language terms and things ‘out there’. One is that in few cases – perhaps none – is there reason to think that the world ‘out there’ actually contains any ‘things’ of the sort the fixed referentialists have in mind. London is a set of buildings on a territory, but it (the same ‘thing’) could be moved upstream to avoid inundation; Chomsky wrote Failed States, which weighs half a kilo, and it (the half kilo of wood pulp) is compelling (because it contains an argument); my personal library has Failed States and my university library has it too; Theseus built a ship and replaced all of its planks, which were then reassembled in the same positions, but Theseus’s ship is the rebuilt model, not the reassembled one. The ways we understand things are fixed by our conceptual resources, and our conceptual resources clearly allow things to be abstract and concrete at the same time, to contain wood pulp and information, and to be one yet many; they let ownership and responsibility trump material constitution. These are only a few of innumerable illustrations indicating that we ‘make’ the things of our world to suit our conceptual resources, and that typically these ‘things’ are identified in terms of our interests, not some kind of objective standards. We routinely name persons, but what ‘are’ persons such as Dick Cheney? PERSON is what Locke called a “forensic” concept, one that suits our need to assign responsibility for actions and that turns on psychic continuity. The point is general: the things and classes of things that make up the world as we typically understand it are not the well-defined entities of the sciences. What, however, of a referentialist favorite, WATER? Surely water is H2O? Chomsky (2000, 1995a) offers many examples indicating that we natural language users have nothing like the scientist’s H2O in mind when we speak and think of water. We find no difficulty in saying that water becomes tea when heated and a tea bag is placed in it. Our water washes us and our possessions; it may or may not be clear; it is what is in a river, no matter what it may contain in addition, even if pollutants constitute the majority; water can be calm or disturbed; and so on. Most of the universe’s water is in a glassy state (in asteroids, and the like), yet if a glass were made of this material, it would not be offered for chewing when one asks for a glass of water. These and other examples constitute the background for Chomsky’s otherwise enigmatic remark that “Water is H2O” is not a sentence of English. It is not, because H2O belongs to molecular chemistry, while WATER is what our natural language English “water” expresses. For those still not convinced, Chomsky points to a parallel in phonology. The syllable /ba/ is in the head. It is not ‘out there’. The point is general: linguistic sounds are ‘in the head’. They do not issue from people’s mouths. All that issues from people’s mouths when they speak is a series of compressions and decompressions in the air, not /ba/ or /ta/. Just as there is no /ba/ or /ta/ ‘out there’, so there is no London.10
A second difficulty is that natural languages do not seem to have anything like what philosophers and some others call “proper names” – nouns that ‘directly’ refer to a single entity – or rigidly referring general terms such as “water.” Languages (the languages individuals have in their heads) do have names, of course; that is a syntactic category of expression, one which may or may not be a primitive of a theory of a language. And names tend to have at least some meaning: most people, when hearing words such as Moses and Winchell, will by default assign them something like the conceptual feature PERSON NAME. Their specific lexicons might assign specific names more than this. But whether minimally or more heavily specified, names do have meanings, or ‘express concepts’, meaning by this that they have at least some semantic features – and in having meanings they differ from the proper names and rigidly referring terms postulated in philosophical discussion. Since they do, it is hard to understand why anyone would think that a theory of meaning for a natural language requires going outside the head.
Perhaps, however, there is an explanation for this: an analogy to science and the practices of scientists, one that has often misled studies of natural language. Notice that familiarity with a person and his or her circumstances, reliance upon folk theories and other default strategies, and the like, play no role in understanding technical presentations in mathematics and the natural sciences. Nevertheless, reference for the group of participants (mathematicians and scientists) is virtually determinate, and the terms they use really do seem to ‘refer by themselves’. This is not, however, because the symbols of technical work really do refer ‘by themselves’, but rather because all of the participants can be assumed – as Frege put it – to “grasp the same sense,” and the sense is taken by all to characterize an entity or class of entities drawn from the subject matter of their joint project, whether it be mathematics, elementary particle physics, or formal linguistics. There is room for disagreement over whether a difficult proof succeeds, or a hypothesis is correct, but in doing technical work in a scientific or mathematical domain, it can be assumed that everyone knows what a speaker is talking about, what s/he refers to. One physicist’s chiral anomaly is the same as another’s, and one mathematician’s aleph-null the same as another’s, because they strive to be speaking of ‘the same thing’, whatever that might be. This is because, as Chomsky suggests, in the domains of mathematics and the natural sciences one finds strong ‘normative’ constraints on same-use, constraints not found in the use of natural language, where people employ and enjoy linguistic creativity. Everyday speakers are not engaged on a unified project. And as Chomsky also points out, it is no surprise that Fregean semantic theories – those that suppose a community with shared thoughts, shared uniform symbols for expressing those thoughts, and an assumed constraint to be talking about the same thing whenever a specific symbol is used – work quite well for mathematics and the natural sciences (1996, ch. 2). But they do not work for natural languages – a hard lesson for the many philosophers and semanticians who have tried to adapt Fregean semantics to them.
Strong normative constraints on use – the “conventions” of David Lewis (supposedly needed in order to allow for communication and cooperation at all) and the supposedly determinate “practices” of Sellars and company – do not exist.11 They are just not needed in everyday speech. We have many resources available to deal with interpretation, and speakers and hearers find attempts to constrain them fettering. That does not mean that one cannot have a theory of meaning for a natural language. But it must be internalist.
In sum, there is no reference apart from someone who refers; relations to the outside world (and even in a sense ‘the world outside’ as understood by the concepts expressed in natural languages) are established by and through actual uses. That is true in the sciences and everyday discourse, although in the sciences and math – as indicated – practices are ‘normalized’ and come close enough to the Fregean picture of semantic theory to allow one to idealize and ignore the contributions of a person. All this puts internalists such as Chomsky in what is these days an unusual position. He rejects the very popular (among linguists and some philosophers) Fregean model of semantics (‘theory of meaning’), and along with it what Jerry Fodor calls a “representational theory of mind.” If you hold that natural language reference (that which involves use of the terms of natural languages such as “London” by people in variable circumstances, engaged on different projects and having variable interests) is not an apt subject matter for science, you must also hold that representation of things in the outside world by use of natural language terms is not either. Indeed, you must reject – or perhaps reinterpret – a considerable chunk of contemporary “cognitive science,” at the very least that chunk that purports to offer a semantics for natural languages that assumes a relation between natural language entities and the world. Perhaps, as Fodor (1998) put it, a representational (essentially Fregean) theory of such concepts is the “only game in town.” Yet Chomsky and other contemporary RR theorists (there are a few) seem to have no qualms about doing cognitive science and dealing with fully internally determined concepts/meanings. I suspect that is because they know that there is a non-representational naturalistic science of language and of what it provides the mind (likely in the form of “semantic features”) in place, and they think that this suffices for a theory of natural language meaning and meaning-composition. If so, they can look at the loss of determinate mind–world relationships such as reference and denotation with equanimity.12 Indeed, they might be quite willing to maintain that no science of vision, nor any other theory of the mind, need be committed to a Fodorian representational view.13 Do the “blobs” of David Marr’s Vision denote anything out there? Surely not. His 3-D ‘representations’ do not, either. The points made above about the syllables /ba/ and /ta/ in phonology (an internalist science) are worth considering again in this connection. There is an interesting way in which Chomsky agrees with the philosopher Wittgenstein (whose later works, along with J. L. Austin’s, he was reading when he wrote his massive Logical Structure of Linguistic Theory – a work that takes language as a (natural) tool that can be used in various ways). Wittgenstein (1953) thought that words and sentences are ‘tools’ that we use to carry out various everyday tasks, and he held that their meanings are the jobs they perform. Since that is so, he thought, if you want to know what expression E means for person P, find out how s/he uses it – what function it serves in performing whatever task s/he is carrying out. He then reasoned that since people use words and sentences in all sorts of ways, the best one can do is describe how another uses a word on an occasion. You cannot, he said, construct a theory of meaning if you think that the meanings of words are found in the ways they are used.
In this respect Chomsky agrees with him: there just is not enough uniformity in the ways people use expressions to support a theory. So if you think of meanings in terms of their uses, you get no science. So far, the two agree. But for Chomsky, that just shows that you are looking in the wrong place for a theory of meaning; look instead at what biology provides you in the head. Wittgenstein’s warning was generally ignored; philosophers such as Lewis and Sellars and innumerable others simply assumed (perhaps with the practices of math and science in mind) that there must be a great deal more uniformity of use than appears, and postulated conventions and uniform practices that just do not exist. As we have seen with reference, that is not a good strategy. Internalists such as Chomsky suggest looking at the matter from the other direction: do not think of meaning in terms of use; think instead of internally sourced and theoretically specifiable features of words and sentences. These still provide ‘tools’. Having the natures they do allows them to be used in the ways that they obviously are. In other words, explain not how people use words, but how their creative use of words is possible. It is possible, the RR theorist holds, only because internal language systems provide rich, configured ‘perspectives’ for people to use. These perspectives have the shapes and characters they do because of the contributions of syntax (which puts words together) and the semantic features of the words that compose them, where these semantic features are taken from internal resources. And these shapes and characters help shape and ‘give meaning to’ experience and thought.