Dark Matter of the Mind


by Daniel L. Everett


  Next, Berent talks about “shared design.” This is just the idea that all known phonological systems derive from similar phonological features. But this is not a “wonder” of any sort. There is nothing inherently instinctual in building new phonological systems from the same vocal apparatus and auditory system, using in particular the more phonetically grounded components of segmental sequencing.

  Another purported “wonder” is what Berent refers to as “scaffolding.” This is nothing more than the idea that our phonologies are reused. They serve double duty—in grammar and as a basis for our reading and writing (and other related skills). This is of course false in some writing systems (e.g., Epi-Olmec hieroglyphics, where speaking and writing are based on nearly non-overlapping principles). In fact, “reuse” is expected in cognitive or biological systems to avoid unnecessary duplication of effort. Not only is it a crucial feature of brain functioning (Anderson 2014), but it is common among humans to reuse technology—the use of cutting instruments for a variety of purposes, from opening cans to carving ivory, for instance. Therefore, reuse is a common strategy of cognition, evolution, resource management, and on and on, and is thus orthogonal to the question of instincts.

  Next, Berent talks about “regenesis,” the appearance of the same (apparently) phonological principles in new languages, in particular when principles of spoken phonology (e.g., the SSG, according to Berent) show up in signed systems. The claim is that the SSG emerges when humans generate a new phonological system de novo. But even here, assuming we can replace the invalid SSG with a valid principle, we must use caution in imputing “principles” to others as innate knowledge. We have just seen, after all, how the particular phonetic preference Berent calls the SSG could occur without instincts.

  But even if we take her claims and results at face value, “regenesis” is still a red herring. In spoken languages, the notion simply obscures the larger generalization or set of generalizations that people always prefer the best-sounding sequences perceptually, even when cultural effects in their native languages override these. Berent again attempts to counter this with research on sequences of signs in signed languages. Yet there is no sound-based principle in common between signed and spoken languages—by definition, since one lacks sounds altogether and the other lacks signs (see chap. 7). Both will of course find it useful to organize word-internal signs or sounds to maximize their perceptibility, but no one has ever successfully demonstrated that signed languages have “phonology” in the same sense as spoken languages. In fact, I have long maintained that, in spite of broadly similar organizational principles, sign organization in visual vs. spoken languages is grounded in entirely different sets of features (for example, where is the correlate of the feature “high tone” or F2 transition in signed languages?), and thus that talking of them both as having “phonologies” is nothing more than misleading metaphor.

  Another “wonder” Berent appeals to, to show that phonology is an instinct, is the common poverty of the stimulus argument or what she refers to as “early onset.” Children show the operation of sophisticated linguistic behaviors early on—so early, in fact, that a particular researcher might not be able to imagine how it might have been learned, jumping to the conclusion that it must not have been learned but emerges from the child’s innate endowment. Yet all Berent shows in discussing early onset is the completely unremarkable fact that children rapidly learn and prefer those sound sequences that their auditory and articulatory apparatuses have together evolved to recognize and produce most easily. This commonality is not linguistic per se. It is physical, like not trying to pick up a ton of bricks with only the strength in one’s arms, or, more appropriately, not using sounds that people cannot hear (e.g., frequencies that only dogs can hear).

  Finally, Berent argues for “core phonological knowledge” based on what she terms “unique design.” This means that phonology has its own unique properties. But this shows nothing about innate endowment. Burrito making has its own unique features, as does mathematics, both eminently learnable (like phonology). And Berent’s discussion fails to explain why these unique features could not have been learned, nor why there would be any evolutionary advantage such that natural selection would favor them.

  Summing up to this point, Berent has established neither that speakers are following sonority organization that is embedded in their “core knowledge,” nor that her account is superior to more intuitively plausible phonetic principles. Nor are any of her “seven wonders of phonology” remotely wondrous.

  And yet, in spite of all of my objections up to this point, there is a far more serious obstacle to accepting the idea of a phonological mind. This is what Blumberg (2006) refers to as the problem of “origins,” which we have mentioned and which is discussed at length in several recent books (Buller 2006; R. Richardson 2007; Blumberg 2006, among others)—an obstacle Berent ignores entirely, and an all too common omission from proponents of behavioral nativism. Put another way, how could this core knowledge have evolved? More seriously, relative to the SSG, how could an instinct based on any related principle have evolved? As we have seen, to answer the origins problem, Berent would need to explain (as Tinbergen [1963], among others, discusses at length) the survival pressures, population pressures, environment, and so on, at the time of the evolution of a valid phonotactic constraint; if the trait appears as a mutation in one mind, what leads to its genetic spread to others in a population—what was its fitness advantage? In fact, the question doesn’t even make sense regarding the SSG, since there is no such principle. But even if a better-justified generalization could be found, coming up with any plausible story of the origin of the principle is a huge challenge, as are definitions of innate and instinct, and the entire line of reasoning—as we saw in chapter 1—based on innate knowledge, inborn dark matter.

  I therefore reject Berent’s proposals for a phonological mind. Moreover, I believe that such proposals underscore the problem of psychology (Geertz 1973) that experiments, however clear their results, are no more useful than the quality of their foundational hypotheses. As a psychologist rather than a linguist, Berent seems to have muddled the questions.

  NO SEMANTIC INSTINCT

  Similar issues face other nativist accounts of language, such as the natural semantic metalanguage (NSM) of Anna Wierzbicka (1996) of the Australian National University and her followers. The idea of NSM is that there is a universal set of concepts that are found in all human languages. These are so-called “irreducible” semantic primitives. The proposed list includes things like:

  Substantives: I, you, someone, etc.

  Relationals: kind, part, etc.

  Determiners: this, the same, etc.

  Evaluators: good, bad, etc.

  Quantifiers: one, two, all, each, every, etc.

  Logical concepts: not, maybe, if, can, etc.

  Wierzbicka believes (John Colapinto, pers. comm., 2007) that such primitives support the psychic unity of man, thus explicitly connecting (however unknowingly) her research program to Bastian’s. But, aside from her Bastian bias, what is the empirical support for NSM? And from where might these primitives derive? Looking into such questions, as we saw with Berent’s theory, Blumberg’s (2006, 33) observation comes to mind: “When we ask questions about origins, we defeat designer-thinking.” In other words, if we asked a proponent of instincts, “Where did the instinct come from?” or “Exactly how did the instinct arise in evolutionary terms?” we would likely be told, “No one knows.” (And how well did an author consider alternatives to nativism before positing an innate principle?)

  Before discussing this issue, however, we should note that the empirical evidence for NSM is weak. For example, in D. Everett (2005a, 2012a, 2012b) it is made clear that Pirahã data are counterexamples for many of these NSM primitives. Pirahã lacks numbers, quantifiers, Boolean operators (conjunctions, disjunctions, conditional markers, etc.), many of the substantives in the NSM list, and others (see also C. Everett and Madora 2012 and M. Frank et al. 2008 for follow-up corroborating studies of number).

  Second, setting aside the empirical weakness of the theory, Wierzbicka, like Berent, offers no account of the sources of the primitives the theory proposes. This is a severe weakness. For example, if one wanted to claim that NSM primitives are innate, one would have to argue why a nativist account is warranted when the primitives would otherwise be of obvious utility in communication. Because of their utility, these “primitives” would appear independently in many, if not most, languages, however they are implemented in particular languages. For example, numbers and quantifiers are so useful that some would argue (erroneously) that without them, there can be no human language (Davidson 1967). But at least the utility of numbers and quantifiers makes it completely unsurprising to learn that these categories are found in many or most of the world’s languages. Therefore the desire to establish them as a priori, universal knowledge is unwarranted, especially since the Pirahã data show that they are not all useful in every culture—hardly a surprising fact to the anthropologist.

  Moreover, while the work of Berent and Wierzbicka is undoubtedly correct in many areas, they both state their problem roughly in the following way: “Here are some (near) universal facts about aspects of language. These also show up with infants and in experiments. We see no way for them to be learned. Thus they are innate dark matter.”

  Before explaining my objections to nativist accounts more generally, however, I want to consider a final example of a nativist “module” of the mind—namely, the proposal that there is a moral instinct or innate basis for morality (apart from emotions and survival or other biological values in the sense defined earlier).

  The idea of innate morality emerges in the work of Paul Bloom (2013), Marc Hauser (2006), and others. Both Bloom and Hauser are outspoken proponents of nativist epistemologies. Bloom even goes so far as to claim that the greatest misconception people have about morality is that “morality is a human invention” (S. Harris 2013). To support such claims, Bloom turns to research with human infants, claiming that “the powerful capacities that we and other researchers find in babies are strong evidence for the contribution of biology.” He argues that “moral capacities are not acquired through learning” (ibid.). Bloom and his colleagues’ research is highlighted in the media and is outlined in his popular book, Just Babies: The Origins of Good and Evil (2013), as well as in numerous articles in peer-reviewed scientific journals. The reasons behind the public appeal of Bloom’s work seem to boil down to three: (i) designer bias; (ii) Ivy League bias; and (iii) simple answers for complex questions.12

  THE CULTURAL APPEAL OF INSTINCTS

  Designer bias we discussed earlier as the strong appeal to the general public and to many scientists of the notion that humans are the way they are for reasons beyond their control—because our genes tightly constrain us. There are hugely popular books on the art instinct, the religious instinct, the moral instinct, the language instinct, and so on. And these works are popular because they strike a chord with the general public. Many people—scientists and laypeople alike—understandably prefer a simpler story over a messier one. Instincts are simple and clear to state and to understand. But the idea that human knowledge, culture, identity, and selves may result from a combination of general properties of the brain interacting with the world in numerous ways is just too complicated to have the same appeal. Instincts thus play a simplifying role. In this sense, they are similar to the idea that pyramids in Mexico and Egypt were both designed by space aliens.

  The “Ivy League” or status bias, on the other hand, is helpful when issues are complex or controversial. This bias provides a simpler path to deciding who is right—just assume that the person at the most prestigious university is correct. This kind of bias is fairly strong in US culture, as well as in many other highly literate societies. This allows us to avoid weighing issues based on reasoning, providing a much quicker “emulate the famous or authority figure” alternative. This is much like what we do when we choose clothes based on what famous people wear, imitate the mannerisms of a well-known personality, or wear sports jerseys with the numbers of our heroes/sports favorites. Vicarious thinking takes less effort than actual thinking. By and large, as Boyd and Richerson (2005) argue, status bias is a rational, low-risk, effort-saving strategy.

  There is nothing wrong with such biases per se. Taking “so-and-so’s word for it” can save time, especially when the individual in question is eminent in his or her field. And, after all, Harvard, Yale, Princeton, and the other Ivy League schools are at the top of the university rankings because their faculty are homogeneously outstanding. This is one reason people of certain cultures come by such biases. The idea that instincts override culture is often (though by no means always) associated with names from Ivy League schools, thus supporting the idea of such instincts among the public, along with the designer bias and the simplicity-vs.-complexity bias.

  The other bias—one that has a long, respectable philosophical tradition behind it—is that when faced with two explanations of the same phenomenon, prefer the simpler one. In actual scientific practice, this doesn’t necessarily mean “choose the conceptually simpler theory,” but rather, choose the shorter—in strings of symbols—answer. Popularly applied, however, for the general public this bias means “choose the easier argument to follow, the argument with fewer variables, etc.” This helps to explain part of the popular appeal of these ideas. But pointing it out is not a criticism of the theory, merely underscoring a widespread cultural value that contributes to the theory’s popularity.

  No Morality Instinct

  There are criticisms to make, however, of innate ideas, especially with regard to morality (Lieberman [2013] and Patricia Churchland [2012] being two of the most incisive). Let’s begin by evaluating what I call the “Monomorality idea”—the notion that morality is innate—by taking a look at this theory’s methodology. My criticisms apply not only to Bloom’s work, but also to Hauser’s (2006) Moral Minds and most other research on the epistemology of babies.

  Working with babies is very difficult. Babies cannot talk. They cannot move well. They are subject to the “Clever Hans” effect—that is, picking up subtle cues from the experimenter about what response is desired. And, most crucially and surprisingly, most studies of baby cognition are based on a single technique with variations. As Blumberg (2006, 167) says, “Amazingly, virtually every claim by nativists regarding the remarkable, unexpected competencies and core knowledge of infants is based on experiments using the eyes as portals to the mind. Yet few people are aware of how the resurgence of nativism rests on little more than a bold decision to play fast and loose with an experimental procedure that has been used since the 1960s to answer legitimate questions about infant perception.”

  Here’s how the method works (and virtually every experimenter on infant cognition employs this method): The idea is that infants shift focus to new things from familiar things, by looking away from the latter toward the former. So an infant looking at x for a period of time will pay less and less attention to it the longer it is exposed to it, whether x is a color, person, event, and so on. If y is then shown, the infant will shift its gaze to y from x. This is the infant’s “novelty preference.” One can show how problematic this is fairly easily. For example, enter a room where a baby is sitting with its mother, fire six very loud blanks at the mother, and have the mother fall to the ground screaming, then fake death. The infant would most likely shift attention and stare! Does this mean there is a “do not murder” preference? (Ethics aside, of course.)

  This methodology does indeed unambiguously show infants’ perceptual ability to distinguish x’s from y’s. So if x is “green” and y is “red,” and if the infants in our opinion have had no chance to learn the difference between green and red, are we not then justified in concluding that these colors are distinguished innately by the child? As Blumberg (2006, 168) puts it, yes, so long as we are unconcerned with either parsimony or falsifiability. Clearfield and Mix (2001) show that such experiments in fact miss numerous variables. One is whether the infant is acting upon information that is different from the information the researcher is thinking about. One principal example that Clearfield and Mix raise is that in numerical studies, there are times when infants focus on the length of what they are seeing as opposed to the specific numbers or amounts. They argue that in fact the children are attending to different stimuli than those contemplated by the experimenters, and thus that these experiments are flawed. When a large part of an entire field rests on the idea that it can perfectly interpret infant gazes to understand infant morality, numerical cognition, and so on, the conclusions so derived are questionable at best.

  Beyond the methodology, there are other problems. I have been arguing that there are various sets of values, ranked in different ways, across different societies and across individuals in the same society. Bloom’s work gets at one or two of these based largely on emotions—what I refer to earlier as “biologically based” values. For example, there simply is not a sufficient number of cross-linguistic studies of morality to support the conclusions. In some societies (D. Richardson 2005), dishonesty is claimed to be valued over honesty in many situations. In other societies, theft is generally not a problem. In other societies, marital infidelity is insignificant. Thus while some taboos, such as incest, may be universal, the variation across societies and within them is sufficient to render any nativist account anemic and of little practical value. Consider, for example, what it means to be “bad.” Say that this were an innate concept. What would it mean in Wittgenstein’s sense of meaning as use? How would one come to understand the nuances of the meanings entailed without understanding the contexts in which the concept is used? Or perhaps what is innate is a small schema that includes food theft as the model, where every extension must be learned. Under what evolutionary scenario could that have evolved? That is, what were the populations like, the availability of food, the nature of theft at the time of evolution that could account for this as a behavior of twenty-first-century infants? And where would this be encoded in the brain?

 
