by Arika Okrent
But Whorf (though perhaps naive in other ways) was not linguistically naive. His ideas about language and thought were informed by a highly technical and sophisticated understanding of the grammatical structure of languages that were very different from any European language. He saw what he called “the new, and for the most part probably misguided interest in semantics” as marred by the “parochial viewpoint to which ‘language’ means simply ‘English,’” and he tried to dissociate himself from the “various popular bromides about the misleading nature of words.”
He began to formulate his ideas about the relationship of thought to language when, after finally piecing together a grammatical description of Hopi (an Uto-Aztecan language spoken in Arizona), he realized that he knew how to form plurals but not how to use them. It was like knowing that the English plural is formed by adding “-es” when a word ends in an “s” sound, but not knowing that it's inappropriate to refer to a pile of rice as “rices.” He realized that “the category of plural in Hopi was not the same thing as in English, French or German. Certain things that were plural in these languages were singular in Hopi.” For example, something like “day” could not be pluralized in Hopi, because days were experienced one at a time; they could not be assembled into an objective group that could be observed all at once—a Hopi criterion for pluralness. Whorf connected this observation to other features of the language that suggested the Hopi experience of time was not the same as it was for a speaker of an SAE (Standard Average European) language. Could it be that a different way of categorizing things in language reflected a different way of categorizing things in the world?
He never got a chance to fully explore the question. He died of cancer in 1941, at the age of forty-four. He left behind a number of papers on the topic—some published, some unpublished, some written for experts and some for lay audiences—that served as the basis for what came to be called the Whorfian hypothesis (or Sapir-Whorf hypothesis). The science-minded scholars of the 1950s reinterpreted Whorf's incomplete and complicated exploration of various issues having to do with language, thought, and culture as an empirically testable claim, hence the Whorfian “hypothesis.”
The closest Whorf himself ever got to a hypothesis-style statement was a description of his “linguistic relativity principle,” which held that “users of markedly different grammars are pointed by their grammars toward different types of observations and different evaluations of externally similar acts of observation, and hence are not equivalent as observers but must arrive at somewhat different views of the world.” As to what exactly he meant by that, well, there are about as many interpretations as there are mentions of Whorf's name in print. In any case, this statement is a long way from saying that abstract nouns or passive verbs or words like “is” are conspiring to gang up on us and steal our good sense. Whorf's ideas were definitely fertilized by the language-fearing times in which he lived, but his formulation of the language/thought question was the most sensitive to the complicated way in which language actually works and the most attractive to the social scientists who decided to take up the question in the 1950s.
It proved very difficult to come up with a scientific test of the Whorfian hypothesis. Suppose you compared two groups of people who spoke different languages, and you found some kind of difference between the groups. How would you know it was the language that caused that difference? Perhaps it stemmed from a difference in culture. Perhaps it was the culture that shaped the language, and not the other way around. Every language was attached to a culture, and there was no way to separate one thing from the other. This was one of the problematic issues that came up when a group of top linguists, psychologists, and philosophers got together for a conference on the Whorfian hypothesis in 1953. The papers from the conference were published in a book called Language in Culture, and soon every field that touched on language and human behavior was buzzing about it.
A sociologist named James Cooke Brown, who had just taken a position at the University of Florida in Gainesville, was paying close attention. In the winter of 1955, when classes let out for the holidays, he “sat down before a bright fire to commence what I hoped would be a short paper on the possibility of testing the social psychological implications of the Sapir-Whorf hypothesis.” He wanted to show that “the construction of a tiny model language, with a grammar borrowed from the rules of modern logic, taught to subjects of different nationalities in a laboratory setting under conditions of control, would permit a decisive test.” If the problem with the Whorfian hypothesis experiment was that natural languages couldn't be disentangled from the cultures in which they were spoken, then why not avoid the problem by using an artificial language? This “tiny model language” became Loglan (from logical language), a project that would occupy the rest of Brown's life. It would grow large enough to be used for original poetry, translations of works like Alice's Adventures in Wonderland, and, in one case, a proposal of marriage. It would bring Brown fame and disappointment, admirers and enemies, and a trip to federal court over the question of who rightfully owns a language—the man who invented it or the people who use it?
A Formula for Success
In 1960, Brown published a sketch of Loglan in Scientific American. This was an amazing coup for a language inventor. In a post-utopian, postwar world, where no one even deigned to laugh at new language projects anymore, it was incredible that a major periodical would treat an invented language seriously enough to devote ten pages to it.
Brown had found a way to make language invention respectable by treating his creation with scientific detachment. He didn't say his language would stop war and heal the world; he presented it merely as an instrument for testing a specific hypothesis. He didn't crow about how easy it was to learn; he computed a “learnability score” for each word (based on how many sounds in the word overlapped with the sounds for that word in different natural languages) and proposed that the correlation between learnability scores and actual learnability could be tested in the lab. He didn't make wild claims about the profound and life-altering effects his language would have on thought; he demurred that he was “by no means certain yet that Loglan is a thinkable language, let alone a thought-facilitating one.” His approach, humble, rational, and unemotional, was nothing like the idealistic flights of foolishness that people had come to expect from language inventors. If you wanted to get any attention for your invented-language project in 1960, scientific detachment was definitely the way to go.
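To make the idea of a “learnability score” concrete, here is a toy sketch in Python; it is my illustration, not Brown's actual procedure. It simply counts how many of a candidate word's sounds (with letters standing in for sounds) also appear in the corresponding word of each natural language, then averages the overlap. The word lists and the averaging are assumptions made for illustration only.

    # Toy illustration of an overlap-based "learnability score" (my sketch,
    # not Brown's actual formula). Letters stand in for sounds.

    def overlap(candidate, natural_word):
        # Fraction of the candidate's sounds that also occur in the natural word.
        shared = sum(1 for sound in candidate if sound in natural_word)
        return shared / len(candidate)

    def learnability(candidate, translations):
        # Average overlap across the natural-language translations.
        scores = [overlap(candidate, word) for word in translations.values()]
        return sum(scores) / len(scores)

    # Scoring "blanu" (Loglan's word for "blue") against a few natural languages.
    print(learnability("blanu", {"english": "blue", "spanish": "azul",
                                 "german": "blau", "chinese": "lan"}))  # 0.65

Whether a higher number like this actually predicts how easily people learn the word is exactly the sort of claim Brown proposed to test in the lab rather than assert.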
Another language called Interlingua tried to adopt a similar approach in the 1950s and 1960s, and got a little bit of success in return. Interlingua was created by a committee called the International Auxiliary Language Association (IALA), which had been founded by Alice Vanderbilt Morris in 1924. The original goal of the association was to promote intelligent and objective discussion of competing invented languages and to encourage scholarly research into the matter of determining both the best form for an auxiliary language and the best uses for it. It was a meeting ground for the high-prestige language inventors and other professionals who were interested in the international language idea (linguists such as Edward Sapir, Morris Swadesh, Roman Jakobson, and André Martinet). Activity fell off in the 1930s and was further disrupted by the war, but the organization survived and ultimately published its own committee-designed Interlingua in 1951.
The first Interlingua periodical was Spectroscopia Molecular, a monthly overview of international work in … molecular spectroscopy. (It involves shooting energy at something in order to see what does or doesn't bounce back—physicists, chemists, and astronomers do it.) Next came a newsletter, Scientia International, a digest of the latest goings-on in the world of science. Interlingua positioned itself as a way for scientists of different language backgrounds to keep up with their fields. They wouldn't even necessarily have to speak the language. As long as they understood it, it would fulfill its businesslike function. By attaching itself to science, and refraining from grand claims, Interlingua spread a little further than it otherwise might have. Some major medical congresses and journals published abstracts in Interlingua throughout the 1950s and 1960s. But it failed to sustain interest. Interlingua was another one of those Greco-Latin least-common-denominator languages, and if you were interested in those kinds of things, you were probably already doing Esperanto. Everyone else just wasn't interested in those kinds of things, science oriented or not.
Loglan, however, was doing a different kind of thing. Scientific detachment was only one part of the appeal of Loglan (the part that convinced people not to dismiss it immediately). What really got people interested was its new kind of design principle—the calibrated alignment of language with logic. Actually, the principle wasn't new at all. It stretched all the way back to Leibniz and Wilkins and the seventeenth-century idea that we could somehow speak in pure logic. It was new in that great strides had been made in the field of logic since then, so the idea of “speaking logic” now meant something a bit different.
In the early twentieth century, philosophers such as Gottlob Frege, Bertrand Russell, and Rudolf Carnap had developed a preliminary mathematics of language, but it was not a mathematics of concepts—no breaking down the concept dog into the basic elements that defined its dogness. It was instead a mathematics of statements. It was a method of breaking down propositions like “The dog bit the man” or “All dogs are blue” into logical formulas. These formulas were not expressed in terms of nouns, verbs, and adjectives. Instead, like mathematical formulas, they were expressed in terms of functions and arguments. Much like x(x + 5) is a function waiting for you to tell it what the argument x is, dog(x) is a function, “is a dog,” waiting for you to tell it what particular x is a dog. Blue(x) is a function, “is blue,” waiting to find out “what” is blue. Bite(x, y) is a function waiting for two arguments, the biter and the bitten. Give(x, y, z) is a function waiting for three arguments—x gives y to z.
The power of such a notation, both the mathematical and the logical, is that you can do a whole lot without ever knowing what x is. The formula x(x + 5) can itself become an argument in a larger formula; it can participate in the solving of equations and proofs. It may never return a specific number, but it can help you assess the general validity of the statements in which it plays a part. Logical formulas can do the same. “All dogs are blue” is represented by the logical statement ∀x (dog(x) → blue(x)). Translated back into English, this means, “For every x, if x is a dog, then x is blue.” This logical breakdown can't tell you whether or not the statement is true out there in the real world (we know it's not true, but the logic doesn't), but it can tell you, more precisely than the original English can, what conditions need to be met in order for it to be true. This type of logical notation is even more abstract, and more powerful, than the most complex formulas of arithmetic. Not only do you not need to know what specific x's are dogs or are blue; you don't need to know exactly what “dog” and “blue” are, only that they are functions that take one argument (in logical terms, they are “one-place predicates”). This is very useful. It made whole new branches of theoretical mathematics possible, and it also gave rise to computer programming languages.
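The connection to programming is easy to see in a minimal Python sketch (my example, with made-up predicates, not anything of Brown's): one- and two-place predicates become ordinary functions, and the quantified statement “All dogs are blue” becomes a mechanical check over whatever domain of individuals you supply.

    # Predicates as functions waiting for arguments (illustrative sketch).
    def dog(x):
        return x in {"rex", "fido"}          # one-place predicate: "x is a dog"

    def blue(x):
        return x in {"fido", "sky"}          # one-place predicate: "x is blue"

    def bite(x, y):
        return (x, y) == ("rex", "mailman")  # two-place predicate: "x bites y"

    # "All dogs are blue": for every x in the domain, if dog(x) then blue(x).
    def all_dogs_are_blue(domain):
        return all(blue(x) for x in domain if dog(x))

    domain = ["rex", "fido", "sky", "mailman"]
    print(all_dogs_are_blue(domain))   # False: rex is a dog but not blue

The check never needs to know what “dog” and “blue” mean beyond the fact that each takes a single argument, which is exactly the abstraction the logicians were after.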
Brown's idea was to make logical forms speakable. Then he could test whether this had a Whorfian effect on people who learned it. Would speaking in logic make people more logical? Would it facilitate thought? Of course logical forms already were speakable in the sense that you could give a long-winded paraphrase like “For every x, if x is a dog, then x is blue.” But in Loglan the translation of a sentence like “All dogs are blue” would be compact and independent of the grammar of English (or any other language).
Brown's article generated a great deal of excitement in the Scientific American audience. He received hundreds of letters asking for more information.
Brown was not the only one in the late 1950s working with the idea that the apparatus of formal logic could serve as a language. A Dutch mathematician named Hans Freudenthal sought to apply the idea to the problem of finding an adequate means for communicating with beings in outer space. In his 1960 book, Lincos: Design of a Language for Cosmic Intercourse, he proposed sending out, by means of varying radio wavelengths, messages that would begin with very simple statements of arithmetic and slowly introduce more and more complex types of statements in a way that would lead the space beings, one logical step at a time, to figure out how the Lincos symbols were related to meaning. They would start by deducing from examples that “>” represented “greater than” and progress to recognizing that an elaborate string of Lincos symbols represented “whistling for one's dog.”
Lincos was published by a highly respected international science publisher, and many academics found Freudenthal's idea interesting, but it never went anywhere. Freudenthal's dense, technical approach failed to attract a more general audience. A second planned volume was never completed.
The year after Brown published his Scientific American article, he expected to get a raise from the university, but the administration declined to give him one. He was insulted. He had brought scholarly recognition and money (in the form of a small government grant) to his department with his Loglan project, and he expected better treatment. Already bristling under the tension between his progressive politics and the conservative leadership of the university (at that time a Deep South university bracing itself against the growing civil rights movement), he wrote out a list of grievances and submitted it as his resignation. He didn't need the job anyway. He was making a fortune from the success of a board game he had invented that had been published by Parker Brothers a few years earlier.
The board game was called Careers. Brown, a lifelong socialist, objected to the single-minded focus on money in the game Monopoly. So he developed a game where success is defined not by money alone but by a combination of money, fame, and happiness. The players accumulate points in these three areas by moving around the board, entering different career tracks. They decide before the game what proportion of money, fame, and happiness makes up their personal “success formula.” It does you no good to keep winning money if it is fame or happiness you are after. (Although if you land on the right square, you can buy a yacht to gain happiness points, or a statue of yourself to get fame points.) You win when you have fulfilled your own success formula.
In real life, Brown had spent his first forty years searching for the right track. He was born in 1921, in the Philippines, to Midwestern parents who had moved there to teach. When he was eight, his parents split up, and his mother took him back to the States. He was a bright student with a high IQ, and after serving as a combat navigator in England during the war, he majored and minored in various subjects at the University of Minnesota, including philosophy, mathematics, statistics, and sociology. He wrote a dissertation on “cooperative group formation,” and formed a cooperative community of his own in Indiana, before moving to Mexico to write science fiction. But needing a better way to support his wife and two young children, he moved back to Minneapolis to work at an ad agency, a job he hated, and while he was there, he began working on Careers. His marriage broke up, and after some time in New York (working at the Institute for Motivational Research), and another short, troubled marriage, he found himself in Gainesville with a new wife, an exciting intellectual project, and a steadily growing income that meant he didn't have to work for anyone anymore. The money Brown earned from Careers allowed him to set up his own Loglan Institute in Gainesville and build a Frank Lloyd Wright–inspired modern home with a separate addition for institute activities. He was free now to devote himself to Loglan and to indulge in his passion for sailing and travel.
In 1962, Brown looked poised to fulfill his success formula and then some.
Suitable Apologies
On a bright October day in 1987, Nora Tansky and Bob LeChevalier married in a small backyard ceremony at their home in Fairfax, Virginia. The soft rustling that accompanied the reading of their vows was not the sound of autumn leaves but the fluttering of sheets of white paper, each one printed with a copy of the vows and distributed among the guests so they would be able to follow along. Bob, a large, heavyset man with a wide smile, went first:
mi prami tu
“I love you”
.i mi djica lepo mi kansa tu
“I desire the state of being with you”
.i mi cuxna lepo mi speni tu
“I choose the state of being married to you”
Nora, petite and shy and unaccustomed to public speaking, was so nervous when it was her turn that she skipped a line without realizing it. After the ceremony, one of the guests, a student who had been taking the Loglan class that Bob and Nora held at their home, pointed out her mistake, and someone caught a photo of her reacting to the news with an embarrassed, happy laugh.