The David Foster Wallace Reader

by David Foster Wallace


  Wacko name-calling notwithstanding, I have encountered only one serious kind of objection to this Pro-Life + Pro-Choice position. But it’s a powerful objection. It concerns not my position per se but certain facts about me, the person who’s developed and maintained it. If this sounds to you both murky and extremely remote from anything having to do with American usage, I promise that it becomes almost excruciatingly clear and relevant below.

  The Descriptivist revolution takes a little time to unpack, but it’s worth it. The structural linguists’ rejection of conventional usage rules in English depends on two main kinds of argument. The first is academic and methodological. In this age of technology, some Descriptivists contend, it’s the scientific method—clinically objective, value-neutral, based on direct observation and demonstrable hypothesis—that should determine both the content of dictionaries and the standards of “correct” English. Because language is constantly evolving, such standards will always be fluid. Philip Gove’s now-classic introduction to Webster’s Third outlines this type of Descriptivism’s five basic edicts: “1—Language changes constantly; 2—Change is normal; 3—Spoken language is the language; 4—Correctness rests upon usage; 5—All usage is relative.”

  These principles look prima facie OK—simple, commonsensical, and couched in the bland s.-v.-o. prose of dispassionate science—but in fact they’re vague and muddled and it takes about three seconds to think of reasonable replies to each one of them, viz.:

  1—All right, but how much and how fast?

  2—Same thing. Is Heraclitean flux as normal or desirable as gradual change? Do some changes serve the language’s overall pizzazz better than others? And how many people have to deviate from how many conventions before we say the language has actually changed? Fifty percent? Ten percent? Where do you draw the line? Who draws the line?

  3—This is an old claim, at least as old as Plato’s Phaedrus. And it’s specious. If Derrida and the infamous Deconstructionists have done nothing else, they’ve successfully debunked the idea that speech is language’s primary instantiation.27 Plus consider the weird arrogance of Gove’s (3) with respect to correctness. Only the most mullah-like Prescriptivists care all that much about spoken English; most Prescriptive usage guides concern Standard Written English.28

  4—Fine, but whose usage? Gove’s (4) begs the whole question. What he wants to suggest here, I think, is a reversal of the traditional entailment-relation between abstract rules and concrete usage: instead of usage’s ideally corresponding to a rigid set of regulations, the regulations ought to correspond to the way real people are actually using the language. Again, fine, but which people? Urban Latinos? Boston Brahmins? Rural Midwesterners? Appalachian Neogaelics?

  5—Huh? If this means what it seems to mean, then it ends up biting Gove’s whole argument in the ass. Principle (5) appears to imply that the correct answer to the above “which people?” is: All of them. And it’s easy to show why this will not stand up as a lexicographical principle. The most obvious problem with it is that not everything can go in The Dictionary. Why not? Well, because you can’t actually observe and record every last bit of every last native speaker’s “language behavior,” and even if you could, the resultant dictionary would weigh four million pounds and need to be updated hourly.29 The fact is that any real lexicographer is going to have to make choices about what gets in and what doesn’t. And these choices are based on… what? And so we’re right back where we started.

  It is true that, as a SNOOT, I am naturally predisposed to look for flaws in Gove et al.’s methodological argument. But these flaws still seem awfully easy to find. Probably the biggest one is that the Descriptivists’ “scientific lexicography”—under which, keep in mind, the ideal English dictionary is basically number-crunching: you somehow observe every linguistic act by every native/naturalized speaker of English and put the sum of all these acts between two covers and call it The Dictionary—involves an incredibly crude and outdated understanding of what scientific means. It requires a naive belief in scientific Objectivity, for one thing. Even in the physical sciences, everything from quantum mechanics to Information Theory has shown that an act of observation is itself part of the phenomenon observed and is analytically inseparable from it.

  If you remember your old college English classes, there’s an analogy here that points up the trouble scholars get into when they confuse observation with interpretation. It’s the New Critics.30 Recall their belief that literary criticism was best conceived as a “scientific” endeavor: the critic was a neutral, careful, unbiased, highly trained observer whose job was to find and objectively describe meanings that were right there, literally inside pieces of literature. Whether you know what happened to New Criticism’s reputation depends on whether you took college English after c. 1975; suffice it to say that its star has dimmed. The New Critics had the same basic problem as Gove’s Methodological Descriptivists: they believed that there was such a thing as unbiased observation. And that linguistic meanings could exist “Objectively,” separate from any interpretive act.

  The point of the analogy is that claims to Objectivity in language study are now the stuff of jokes and shudders. The positivist assumptions that underlie Methodological Descriptivism have been thoroughly confuted and displaced—in Lit by the rise of post-structuralism, Reader-Response Criticism, and Jaussian Reception Theory, in linguistics by the rise of Pragmatics—and it’s now pretty much universally accepted that (a) meaning is inseparable from some act of interpretation and (b) an act of interpretation is always somewhat biased, i.e., informed by the interpreter’s particular ideology. And the consequence of (a)+(b) is that there’s no way around it—decisions about what to put in The Dictionary and what to exclude are going to be based on a lexicographer’s ideology. And every lexicographer’s got one. To presume that dictionary-making can somehow avoid or transcend ideology is simply to subscribe to a particular ideology, one that might aptly be called Unbelievably Naive Positivism.

  There’s an even more important way Descriptivists are wrong in thinking that the scientific method developed for use in chemistry and physics is equally appropriate to the study of language. This one doesn’t depend on stuff about quantum uncertainty or any kind of postmodern relativism. Even if, as a thought experiment, we assume a kind of 19th-century scientific realism—in which, even though some scientists’ interpretations of natural phenomena might be biased,31 the natural phenomena themselves can be supposed to exist wholly independent of either observation or interpretation—it’s still true that no such realist supposition can be made about “language behavior,” because such behavior is both human and fundamentally normative.

  To understand why this is important, you have only to accept the proposition that language is by its very nature public—i.e., that there is no such thing as a private language32—and then to observe the way Descriptivists seem either ignorant of this fact or oblivious to its consequences, as in for example one Dr. Charles Fries’s introduction to an epigone of Webster’s Third called The American College Dictionary:

  A dictionary can be an “authority” only in the sense in which a book of chemistry or physics or of botany can be an “authority”—by the accuracy and the completeness of its record of the observed facts of the field examined, in accord with the latest principles and techniques of the particular science.

  This is so stupid it practically drools. An “authoritative” physics text presents the results of physicists’ observations and physicists’ theories about those observations. If a physics textbook operated on Descriptivist principles, the fact that some Americans believe electricity flows better downhill (based on the observed fact that power lines tend to run high above the homes they serve) would require the Electricity Flows Better Downhill Hypothesis to be included as a “valid” theory in the textbook—just as, for Dr. Fries, if some Americans use infer for imply or aspect for perspective, these usages become ipso facto “valid” parts of the language. The truth is that structural linguists like Gove and Fries are not scientists at all; they’re pollsters who misconstrue the importance of the “facts” they are recording. It isn’t scientific phenomena they’re observing and tabulating, but rather a set of human behaviors, and a lot of human behaviors are—to be blunt—moronic. Try, for instance, to imagine an “authoritative” ethics textbook whose principles were based on what most people actually do.

  Grammar and usage conventions are, as it happens, a lot more like ethical principles than like scientific theories. The reason the Descriptivists can’t see this is the same reason they choose to regard the English language as the sum of all English utterances: they confuse mere regularities with norms.

  Norms aren’t quite the same as rules, but they’re close. A norm can be defined here simply as something that people have agreed on as the optimal way to do things for certain purposes. Let’s keep in mind that language didn’t come into being because our hairy ancestors were sitting around the veldt with nothing better to do. Language was invented to serve certain very specific purposes—“That mushroom is poisonous”; “Knock these two rocks together and you can start a fire”; “This shelter is mine!” and so on. Clearly, as linguistic communities evolve over time, they discover that some ways of using language are better than others—not better a priori, but better with respect to the community’s purposes. If we assume that one such purpose might be communicating which kinds of food are safe to eat, then we can see how, for example, a misplaced modifier could violate an important norm: “People who eat that kind of mushroom often get sick” confuses the message’s recipient about whether he’ll get sick only if he eats the mushroom frequently or whether he stands a good chance of getting sick the very first time he eats it. In other words, the fungiphagic community has a vested practical interest in excluding this kind of misplaced modifier from acceptable usage; and, given the purposes the community uses language for, the fact that a certain percentage of tribesmen screw up and use misplaced modifiers to talk about food safety does not eo ipso make m.m.’s a good idea.

  Maybe now the analogy between usage and ethics is clearer. Just because people sometimes lie, cheat on their taxes, or scream at their kids, this doesn’t mean that they think those things are “good.”33 The whole point of establishing norms is to help us evaluate our actions (including utterances) according to what we as a community have decided our real interests and purposes are. Granted, this analysis is oversimplified; in practice it’s incredibly hard to arrive at norms and to keep them at least minimally fair or sometimes even to agree on what they are (see e.g. today’s Culture Wars). But the Descriptivists’ assumption that all usage norms are arbitrary and dispensable leads to—well, have a mushroom.

  The different connotations of arbitrary here are tricky, though—and this sort of segues into the second main kind of Descriptivist argument. There is a sense in which specific linguistic conventions really are arbitrary. For instance, there’s no particular metaphysical reason why our word for a four-legged mammal that gives milk and goes moo is cow and not, say, prtlmpf. The uptown term for this is “the arbitrariness of the linguistic sign,”34 and it’s used, along with certain principles of cognitive science and generative grammar, in a more philosophically sophisticated version of Descriptivism that holds the conventions of SWE to be more like the niceties of fashion than like actual norms. This “Philosophical Descriptivism” doesn’t care much about dictionaries or method; its target is the standard SNOOT claim that prescriptive rules have their ultimate justification in the community’s need to make its language meaningful and clear.

  Steven Pinker’s 1994 The Language Instinct is a good and fairly literate example of this second kind of Descriptivist argument, which, like the Gove-et-al. version, tends to deploy a jr.-high-filmstrip SCIENCE: POINTING THE WAY TO A BRIGHTER TOMORROW–type tone:

  [T]he words “rule” and “grammar” have very different meanings to a scientist and a layperson. The rules people learn (or, more likely, fail to learn) in school are called “prescriptive” rules, prescribing how one ought to talk. Scientists studying language propose “descriptive” rules, describing how people do talk. Prescriptive and descriptive grammar are simply different things.[35]

  The point of this version of Descriptivism is to show that the descriptive rules are more fundamental and way more important than the prescriptive rules. The argument goes like this. An English sentence’s being meaningful is not the same as its being grammatical. That is, such clearly ill-formed constructions as “Did you seen the car keys of me?” or “The show was looked by many people” are nevertheless comprehensible; the sentences do, more or less, communicate the information they’re trying to get across. Add to this the fact that nobody who isn’t damaged in some profound Oliver Sacksish way actually ever makes these sorts of very deep syntactic errors36 and you get the basic proposition of N. Chomsky’s generative linguistics, which is that there exists a Universal Grammar beneath and common to all languages, plus that there is probably an actual part of the human brain that’s imprinted with this Universal Grammar the same way birds’ brains are imprinted with Fly South and dogs’ with Sniff Genitals. There’s all kinds of compelling evidence and support for these ideas, not least of which are the advances that linguists and cognitive scientists and AI researchers have been able to make with them, and the theories have a lot of credibility, and they are adduced by the Philosophical Descriptivists to show that since the really important rules of language are at birth already hardwired into people’s neocortex, SWE prescriptions against dangling participles or mixed metaphors are basically the linguistic equivalent of whalebone corsets and short forks for salad. As Steven Pinker puts it, “When a scientist considers all the high-tech mental machinery needed to order words into everyday sentences, prescriptive rules are, at best, inconsequential decorations.”

  This argument is not the barrel of drugged trout that Methodological Descriptivism was, but it’s still vulnerable to objections. The first one is easy. Even if it’s true that we’re all wired with a Universal Grammar, it doesn’t follow that all prescriptive rules are superfluous. Some of these rules really do seem to serve clarity and precision. The injunction against two-way adverbs (“People who eat this often get sick”) is an obvious example, as are rules about other kinds of misplaced modifiers (“There are many reasons why lawyers lie, some better than others”) and about relative pronouns’ proximity to the nouns they modify (“She’s the mother of an infant daughter who works twelve hours a day”).

  Granted, the Philosophical Descriptivist can question just how absolutely necessary these rules are: it’s quite likely that a recipient of clauses like the above could figure out what they mean from the sentences on either side or from the overall context or whatever.37 A listener can usually figure out what I really mean when I misuse infer for imply or say indicate for say, too. But many of these solecisms—or even just clunky redundancies like “The door was rectangular in shape”—require at least a couple extra nanoseconds of cognitive effort, a kind of rapid sift-and-discard process, before the recipient gets it. Extra work. It’s debatable just how much extra work, but it seems indisputable that we put some extra interpretive burden on the recipient when we fail to honor certain conventions. W/r/t confusing clauses like the above, it simply seems more “considerate” to follow the rules of correct English… just as it’s more “considerate” to de-slob your home before entertaining guests or to brush your teeth before picking up a date. Not just more considerate but more respectful somehow—both of your listener/reader and of what you’re trying to get across. As we sometimes also say about elements of fashion and etiquette, the way you use English “makes a statement” or “sends a message”—even though these statements/messages often have nothing to do with the actual information you’re trying to communicate.

  We’ve now sort of bled into a more serious rejoinder to Philosophical Descriptivism: from the fact that linguistic communication is not strictly dependent on usage and grammar it does not necessarily follow that the traditional rules of usage and grammar are nothing but “inconsequential decorations.” Another way to state this objection is that something’s being “decorative” does not necessarily make it “inconsequential.” Rhetoric-wise, Pinker’s flip dismissal is very bad tactics, for it invites precisely the question it’s begging: inconsequential to whom?

  A key point here is that the resemblance between usage rules and certain conventions of etiquette or fashion is closer than the Philosophical Descriptivists know and far more important than they understand. Take, for example, the Descriptivist claim that so-called correct English usages like brought rather than brung and felt rather than feeled are arbitrary and restrictive and unfair and are supported only by custom and are (like irregular verbs in general) archaic and incommodious and an all-around pain in the ass. Let us concede for the moment that these claims are 100 percent reasonable. Then let’s talk about pants. Trousers, slacks. I suggest to you that having the so-called correct subthoracic clothing for US males be pants instead of skirts is arbitrary (lots of other cultures let men wear skirts), restrictive and unfair (US females get to wear either skirts or pants), based solely on archaic custom (I think it’s got to do with certain traditions about gender and leg-position, the same reasons women were supposed to ride sidesaddle and girls’ bikes don’t have a crossbar), and in certain ways not only incommodious but illogical (skirts are more comfortable than pants;38 pants ride up; pants are hot; pants can squish the ’nads and reduce fertility; over time pants chafe and erode irregular sections of men’s leg-hair and give older men hideous half-denuded legs; etc. etc.). Let us grant—as a thought experiment if nothing else—that these are all sensible and compelling objections to pants as an androsartorial norm. Let us, in fact, in our minds and hearts say yes—shout yes—to the skirt, the kilt, the toga, the sarong, the jupe. Let us dream of or even in our spare time work toward an America where nobody lays any arbitrary sumptuary prescriptions on anyone else and we can all go around as comfortable and aerated and unchafed and motile as we want.
