Dark Matter of the Mind
To begin our discussion, I want to examine what I take to be the philosopher Ludwig Wittgenstein’s transition from the a priori of Plato to the conventional of Aristotle. Since Cosmides, Tooby, the editors of the three-volume Innate Mind, and most other nativists take Chomsky’s work on language as their inspiration, I offer a discussion of why instinct is in fact of little use even in discussing language, using Wittgenstein as a starting point. Then I argue that it is also of little use in understanding either culture or dark matter more generally. Finally, I circle back in the next chapter to show how the learning of dark matter entails the construction of self and a relationship to culture and language that obviates the need for a construct such as human nature.
I have argued in preceding discussions that dark matter helps us understand the relationship between the individual and his or her culture by linking culturing, languaging, interpreting, and remembering. The case made is that the Aristotelian line of thought, favoring the social over the innate, timeless knowledge of Platonism or rationalism, is the most compatible with the facts and discussion here. I further argue that empiricism is in many ways preferable to rationalism as a foundation for understanding human cognition. What I propose to do in this next part of the discussion is to examine the standard model of the mind and instincts within nativist cognitive science and then to move on to different perspectives, leading to my own views. I want to begin with “Wittgenstein’s shift.”
NO SYNTACTIC INSTINCT
Ludwig Wittgenstein (1889–1951) was one of the greatest figures of twentieth-century philosophy. He was initially inspired to think about language by the work of Bertrand Russell and Gottlob Frege, among others. However, as he began to find his own voice, it became clear to all—Russell in particular, early on—that what he had to say was unique. Wittgenstein is in part revered because he did not allow his past states, whether from his personal life or from his earlier philosophical commitments, to govern his future direction or present ideas.
For example, on a personal level, although he was born into one of the wealthiest families in Vienna, he was determined not to allow his material fortune to adversely affect his intellectual objectives. Therefore he simply gave away his inheritance to his siblings and dedicated himself full time to philosophy. Later he even abandoned his teaching post at Cambridge University in order to undertake a Thoreau-like existence, so as to better focus on his thinking (not his writing, necessarily, because he published so very little relative to many modern philosophers). In his thinking and writing, Wittgenstein was no less prepared to prune or destroy the past to serve the present.
Wittgenstein’s first book was the Tractatus Logico-Philosophicus (translated into English in 1922 from the 1921 German original), in which he develops a Platonic, formal view of language as a set of propositions representing states of affairs. As he says ([1922] 1998, 3.3): “Only the proposition has sense; only in the context of a proposition has a name meaning.” The form and extent of language are given by logic, according to the early Wittgenstein of the Tractatus. Logic (i.e., the a priori) underlies all of language; meaning arises only in the context of referential items (names) linked in a logically well-formed proposition. Anything outside of this—that is, logically ill-formed propositions, or propositions containing names that lack (empirically verifiable) referents (e.g., unicorns, God, Santa Claus, Truth)—is nonsense. This view of language in effect makes language the arbiter of the world (in some exegeses of Wittgenstein, at least). The importance of the notions of truth and the a priori in Wittgenstein’s early writings is what leads me to label this phase his “Platonic era.” This version of his philosophy led to the school of logical positivism, which advocated the construction of a purely logical language for thought—a self-contained language whose use is dictated by its form rather than being orthogonal to it.
Later, as his ideas evolved in a very different direction, Wittgenstein criticized the Tractatus view of language as “dogmatic”—the idea that there is a unique, true interpretation of every proposition.1 His antidogmatic view began to permeate his thought, and he grew more interested in the function of language than in its form (my interpretation), making what I would refer to as his “Aristotelian shift” from form to usage.
The thesis of his posthumously published book, Philosophical Investigations ([1953] 2009), is perhaps best captured by his statement that “for a large class of cases—though not for all—in which we employ the word ‘meaning’ it can be defined thus: the meaning of a word is its use in the language. And the ‘meaning’ of a name is sometimes explained by pointing to its bearer” (43). Thus did the school of “ordinary language philosophy” (Ayer 1940; Austin 1975; Searle 1970—the “Oxford School,” etc.) emerge from the Investigations, much as logical positivism arose from the Tractatus. The views of the Investigations are prefigured in his 1933 Blue Book (Wittgenstein [1958] 1965, 4): “If we had to name anything which is the life of the sign, we should have to say that it was its use” [emphasis in the original]. Wittgenstein transformed the philosophical study of language from deductive philosophy to inductive science. He insisted that to know the meaning of a sentence, we have to look at its usage, not analyze its logical form—a message that some have ignored, many preferring his earlier views to his later conception.
Perhaps the most famous expression from Wittgenstein’s later work on language was “language games.” He argued that there is no core meaning of any word, but rather a “network” of uses. In other words, language is “languaging,” as culture is “culturing.” This is what Pike labeled the “dynamic” and “field” perspectives on language, whereas the earlier Wittgenstein took what Pike would have described as a “static” perspective on meaning. Additionally, Wittgenstein took languaging to be a social activity, thus ruling out a “private language.” Grammar is an activity, not an instinct, thus leading to the strange statement that “I can know what someone else is thinking, but not what I am thinking” ([1953] 2009, 222). This aphorism makes sense only if language is inherently social and meanings emerge from social use.
Wittgenstein’s influence on the study of meaning and language continues robustly into the present century through philosophers such as Quine (1960), Austin (1975), Searle (1970), Rorty (1981), Brandom (1998), Grice (1991), and Sperber and Wilson (1995), among others, all of them agreeing with the perspective that language is a tool, or, in the words of the philosopher Marcelo Dascal (2002), a “cognitive technology.” Some aspects of this work are prefigured in earlier writers, of course, especially the work of the pragmatists (e.g., James 1907; Peirce 1992, 1998). These writings are important for the thesis here that dark matter arises in the individual by means of their interpretations of their experiences, being, and saying in society, as well as through their memory-based construction of a concept of themselves.
This line of reasoning about language is on the one hand orthogonal to the question of whether language is innate. After all, it is possible that the genes provide grammatical and cultural constraints that serve as the parameters within which usage shapes meaning and form. Many philosophers, even those sympathetic to Wittgenstein’s view of meaning, certainly find their work compatible with universal grammar and thus would agree with this rough characterization. But, on the other hand, once the genie is let out of the bottle—once we agree that usage can be responsible for meanings and forms—why not probe the extent of this responsibility? How much is left for innate knowledge such as UG to do?
In D. Everett (2012a) I argue that not much at all, perhaps nothing, would be left. Information flow, word order, sentence size, vocabulary, how to code concepts—for instance, either as word affixes, as in many American Indian languages, or as words, in languages as different as Mandarin and English—help create language forms and can be ascribed economically and perspicuously to cultural history, values, practices, knowledge structures, and so on, as we saw earlier in our discussion of the emergence of grammar. This reminds me of a quote by Becker (2000, 3) on the challenges of translation: “If you take away grammar and lexicon from a language, what is left? . . . Everything!”
This Bastian hypothesis can be tested only if the specific knowledge of grammar being claimed, via statements of UG, is made clear in a testable manner, along with its falsifiability conditions—and, crucially, only if the biology being appealed to can also be explicated (Lenneberg [1967] is only the barest beginning). I (D. Everett 2012a) and various others have pointed out myriad objections to specific “invariant properties,” lack of falsifiability, and so on. Intriguingly, aside from the widespread perspective that proposals of UG lack falsifiability (some defenders argue strenuously that this is false, but I find their arguments unconvincing), there is in fact no analysis claiming to be based on UG that would change if language derived from function or some other non-UG foundation. The analyses lack a causally engaged biology. Generally, biology is not even used in explanations. And so, unless it is causally implicated in specific analyses, UG is an incantation; it lacks any significance whatsoever.
The possibilities for the emergence or origin of current languages, UG notwithstanding, are exhausted by the following:2
1. Language similarities are the result of monogenesis—all languages began in Africa, and so they are all daughter languages of the original mother language; hence there will be other physical, external property or process similarities among languages.
2. Language is a priori: it emerges from the genes, the physics of the brain, or some combination of these.
3. Language forms and meanings develop together symbiotically and have a number of relationships, including iconicity (i.e., the more complex a concept, the more complex its linguistic expression). So the preposition to is shorter than the preposition around because the latter expresses more information.
4. Language is a mathematical system and has no more connection to biology than mathematics. Two plus two equals four whether you are human or vegetable. By this reasoning, phrases are endocentric in all possible worlds.3
5. Language, culture, biology, and so on, all impact one another, and thus teasing them apart is hard.
6. We have no idea and we do not care terribly about the “cause” of language—we just want to know how it works.
7. We will never know the answer to any of this without more field research on the seven thousand or so languages in the world, about which we still know relatively little.
8. Independent principles (such as physics, phonetics, and semantics) guarantee a degree of organization without the need to appeal to either innate or cultural knowledge.
None of these possibilities is implausible. None of them entirely excludes the others. To rule out any of these from consideration without strong empirical motivation would be a bad move. Therefore any researcher—Chomsky, myself, any other—must ask themself: “Does my model exclude some of these or others without warrant?” It was the purpose of an earlier paper of mine, “The Shrinking Chomskyan Corner in Linguistics” (D. Everett 2010a), to underscore this. Proponents of UG have painted themselves into a corner by ruling out other possibilities without evidence, though this is not a necessary implication of research in UG per se (no more than it would be for any other theory).
For my money, the best hypotheses are numbers 5, 7, and 8. No one in my lifetime will likely know much about 1 or 2. Numbers 4 and 6 together have produced important results, such as the work of Katz (1972) and Postal (2009) on Platonic linguistics. Another example comes from formal models such as head-driven phrase structure grammar and several works by Geoffrey Pullum on formal semantics and syntax—both of these possibilities emerge from mathematical linguistics. Number 7 is where we find most fieldworkers as they confront the most complex task in all of linguistics: figuring out how little-studied languages work.4 In none of this does UG offer great enlightenment.
If there are any instincts crucial for language, they will bear little resemblance to what Pinker (1995) refers to as “the language instinct.” From what we currently know about language(s), the only candidate for an “instinct” is what some (Lee et al. 2009; Joaquin and Schumann 2013) refer to as the “interactional instinct,” or what I (D. Everett 2012a and above) referred to as the “social instinct” (see below for a detailed discussion of a purported “phonology instinct”).
To paraphrase Chomsky, it is perfectly safe to attribute knowledge to the genes or instincts, so long as we realize that there is no substance to this assertion.5 Or, as Blumberg (2006, 205) puts it, “Nativists and evolutionary psychologists have draped themselves in the blanket of science, but, when all is said and done, they are merely telling bedtime stories for adults.”
NO PHONOLOGY INSTINCT
It has been recognized at least since Sapir’s 1908 PhD dissertation on Takelma, directed by Franz Boas, that phonology—the study of how speakers organize and perceive their sounds—is an interesting source of insight into human psychology. Sapir’s Southern Paiute teacher, Tony Tillohash, later helped Sapir recognize the psychological reality of the phoneme. This work further shaped Sapir’s sense of the connection between psychology, cognition, and culture, both in those aspects of the relationship that can be seen overtly and in those that are worked into the dark matter of speakers.
Therefore, it is only natural that contemporary phonologists and psychologists also probe native-speaker intuitions and behaviors to discover more of the interesting and profound connections between sound systems and dark matter. One of the best studies I am aware of in recent years is Cutler’s (2012) research on speech recognition, Native Listening: Language Experience and the Recognition of Spoken Words. However, a number of phonologists seem to want to probe even deeper, for the ever-appealing “instincts”—innate content—regarding sound systems that all Homo sapiens are, ex hypothesi, born with. Obviously, since it is a thesis of this book that there is neither need nor convincing evidence for inborn knowledge in Homo sapiens, some of these more detailed and better-argued claims should be addressed here.
Rather than restate the arguments of D. Everett (2012a) against UG, though, let’s briefly explore two other nativist proposals on human language meanings and forms. One is the work of Iris Berent (2013a, 2013b) on knowledge of sound systems. The other—addressed in the next section—is Wierzbicka’s (1996) theory of universal semantic knowledge, her natural semantic metalanguage (NSM). I want to examine these claims for innate knowledge of language before leaving this part of our discussion, because where Chomsky’s work centers on syntactic knowledge, these proposals on phonological and semantic knowledge cover the remainder of language from a nativist perspective.
First, let’s take Berent’s argument. Though the comments that follow are mostly negative, Berent’s research is worth the relatively large space below that is dedicated to rebutting it. This is because it is one of the best-articulated arguments for linguistic nativism, with a large amount of experimentation and a detailed, painstakingly built case. Even if my criticisms are all on the mark, her work is worth reading and a milestone in the history of the nativist program. Arguably there are no arguments for nativist syntax as detailed and carefully laid out as her arguments for nativist phonology. Berent’s claim is that phonotactics (the organization of segments into syllables, seen in the fact, e.g., that [bli] is a possible syllable of English while [lbi] is not) is an “instinct,” a grammaticalization6 of a functional principle of sound organization that has entered universal grammar and in which the original functional principles are no longer directly relevant for the resultant instinct.7
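To make the contrast concrete, here is a minimal sketch—my illustration, not Berent’s or Everett’s—of the alternative, learning-based view of phonotactics: the legality of an onset cluster is checked against a small, deliberately toy inventory of attested English onsets, the kind of language-particular knowledge a learner could acquire from usage, with no appeal to an innate principle.

```python
# A toy, illustrative model of phonotactics as learned, language-specific
# knowledge (an expository assumption, not Berent's account): an onset
# counts as "licit" simply because it is attested in the learner's input.

ENGLISH_ONSETS = {
    "b", "l", "p", "t", "s",
    "bl", "br", "pl", "pr", "tr", "dr", "st",
    "str", "spl",
}  # a small sample; the real inventory is much larger

def licit_onset(cluster: str) -> bool:
    """Return True if this consonant cluster is an attested English onset."""
    return cluster in ENGLISH_ONSETS

print(licit_onset("bl"))  # True  -> [bli] is a possible English syllable
print(licit_onset("lb"))  # False -> [lbi] is not
```

On such a view, the [bli]/[lbi] asymmetry reflects the statistics of the learner’s experience rather than an instinct; Berent’s claim is precisely that speakers’ preferences extend to clusters no such attested list could encode.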
In Berent’s (2013a) The Phonological Mind, we find a sustained argument on behalf of the proposition that there is innate phonological knowledge, centering around preferences for sounds and sound sequences and signs and sign sequences in spoken and signed languages. My criticisms here are limited to a small portion of Berent’s monograph, in particular those claims she focuses on in Berent (2013b). As we will see, there are many serious problems with Berent’s concept of a phonological mind, the most important of which is the “origin problem”: Where did the phonological knowledge come from? Without an account of the evolution of an instinct, proposing such nativist hypotheses is pure speculation. At best, we can take nonevolutionary evidence for an instinct as explananda rather than explanans. But other problems are also serious—in particular, the overinterpretation of observations (as “wonders”); the use of a falsified proposal as the basis for an instinct; a failure to conduct phonetic studies of the sound sequences she studies; and a failure to independently study the bases of phonotactics, simply accepting one “principle” as given and conducting experiments that are not only unconvincing but also somewhat circular. Berent concludes that her experimental results from English, Spanish, French, and Korean support her proposal that there is a universal sonority sequencing generalization (SSG) inborn in all Homo sapiens.
To understand her arguments, we must first understand the terms she uses, beginning with “sonority.” Sonority is just the property of one sound being inherently louder than another sound. For example, when the vowel [a] is produced in any language, the mouth is open wider than for other vowels, and, like other vowels, [a] offers very little impedance to the flow of air out of our lungs and mouths. This makes [a] the loudest sound, relatively speaking, of all the phonemes of English. A sound with less inherent loudness (e.g., [k]) is said to be less sonorous. Several of Berent’s experiments demonstrate that speakers of all the languages she tested—children and adults—prefer words organized according to the SSG. The idea behind the SSG is that the least loud (least sonorous) segments are found at the far edges of syllables, while the loudest segments are found closer to the nucleus of the syllable.
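The SSG can be stated procedurally. The sketch below—again mine, with a deliberately crude sonority scale assumed only for illustration—checks whether sonority rises from a syllable’s left edge to its nucleus and falls toward its right edge.

```python
# A minimal sketch of the sonority sequencing generalization (SSG):
# sonority should rise toward the syllable's most sonorous segment (the
# nucleus) and fall after it. The numeric scale is a toy assumption.

SONORITY = {
    "p": 1, "t": 1, "k": 1, "b": 1, "d": 1, "g": 1,   # stops (least sonorous)
    "f": 2, "s": 2, "v": 2, "z": 2,                    # fricatives
    "m": 3, "n": 3,                                    # nasals
    "l": 4, "r": 4,                                    # liquids
    "i": 5, "u": 5, "e": 5, "o": 5,                    # non-low vowels
    "a": 6,                                            # [a], the loudest
}

def obeys_ssg(syllable: str) -> bool:
    """Check that sonority rises to the syllable's peak and falls after it."""
    values = [SONORITY[seg] for seg in syllable]
    peak = values.index(max(values))
    rising = all(a < b for a, b in zip(values[:peak], values[1:peak + 1]))
    falling = all(a > b for a, b in zip(values[peak:], values[peak + 1:]))
    return rising and falling

print(obeys_ssg("bli"))  # True:  1 -> 4 -> 5 rises to the nucleus
print(obeys_ssg("lbi"))  # False: 4 -> 1 -> 5 dips before the nucleus
print(obeys_ssg("kat"))  # True:  1 -> 6 -> 1 peaks at the vowel
```

On this toy scale, [bli] rises smoothly to the vowel and so conforms, while [lbi] dips before the nucleus and so violates the generalization—the very asymmetry Berent’s experiments probe. Nothing in the sketch, of course, decides whether such a preference is inborn or learned.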