Power Systems


by Noam Chomsky


  On top of that, you have a huge propaganda offensive from the business sector, saying, “Don’t believe any of it. None of it is real.” A little to my surprise, this has even affected the more serious and responsible parts of the business press, like the Financial Times, maybe the best newspaper in the world. Just at the time that these emissions reports were coming out, the Financial Times euphorically suggested that the United States was entering a new age of plenty and might have a century of energy independence, even global hegemony, ahead of it thanks to the new techniques of extracting fossil fuels from shale rock and tar sands.32 Leaving aside the debates about whether these predictions are right or wrong, celebrating this prospect is like saying, “Fine, let’s commit suicide.” I’m sure the people who write such articles have read the same climate change reports I have and take them seriously. But their institutional role makes such positions a social or cultural necessity. They could make different decisions, but that would require real rethinking of the nature of our institutions.

  The propaganda barrage has been effective. As Naomi Klein writes in the Nation, “A 2007 Harris poll found that 71 percent of Americans believed that the continued burning of fossil fuels would cause the climate to change. By 2009 the figure had dropped to 51 percent. In June 2011 the number of Americans who agreed was down to 44 percent—well under half the population. According to Scott Keeter, director of survey research at the Pew Research Center for People and the Press, this is ‘among the largest shifts over a short period of time seen in recent public opinion history.’”33

  A significant majority of Americans still think climate change is a serious problem, but it’s true that concern has declined. The Pew polls are quite interesting in that they’re international polls, and they show that internationally there’s very strong concern. The United States is not totally off the spectrum, but it’s close to the edge. Concern in the United States is notably less than in comparable countries. And the drop that Klein is describing is exactly what they report. It’s very hard to doubt that it’s connected with the propaganda campaign that has been quite openly conducted.

  In fact, a couple of years ago, right after the insurance company victories on the health reform bill, so-called Obamacare, there was a report in the New York Times about leaders of the American Petroleum Institute and other business groups looking to the victory in the health care campaign as a model to undermine concern about global warming.34 In the Republican presidential debates, for example, even to mention global warming would be to commit political suicide.

  Some of the candidates have remarkable positions on climate change. Take Ron Paul. He appeals to a lot of progressives. He said on Fox, “The greatest hoax I think that has been around for many, many years if not hundreds of years has been this hoax on the environment and global warming.”35 He doesn’t provide any argument or evidence as to why he disregards the scientific consensus—just, I say so, period. With that attitude, you really are approaching the edge.

  And, in fact, actions are being taken to implement those views. A sign of the shift in the nature of elite discourse in recent years is that the Republicans in Congress are now trying to dismantle the few environmental regulations and controls that do exist, which were instituted under Nixon. Nixon would look like a radical today, Dwight Eisenhower like a super radical.

  7

  Learning How to Discover

  CAMBRIDGE, MASSACHUSETTS (MAY 15, 2012)

  It’s been more than five decades since you first wrote about universal grammar, the idea of an inborn capacity in every human brain that allows a child to learn language. What are some of the more recent developments in the field?

  Well, that gets technical, but there’s very exciting work going on refining the proposed principles of universal grammar. The concept is widely misunderstood in the media and in public discussions. Universal grammar is something different: it is not a set of universal observations about language. In fact, there are interesting generalizations about language that are worth studying, but universal grammar is the study of the genetic basis for language, the genetic basis of the language faculty. There can’t be any serious doubt that something like that exists. Otherwise an infant couldn’t reflexively acquire language from whatever complex data is around. So that’s not controversial. The only question is what the genetic basis of the language faculty is.

  Here there are some things that we can be pretty confident about. For one thing, it doesn’t appear that there’s any detectable variation among humans. They all seem to have the same capacity. There are individual differences, as there are with everything, but no real group differences—except maybe way at the margins. So that means, for example, if an infant from a Papua New Guinea tribe that hasn’t had contact with other humans for thirty thousand years comes to Boulder, Colorado, it will speak like any kid in Colorado, because all children have the same language capacity. And the converse is true. This is distinctly human. There is nothing remotely like it among other organisms. What explains this?

  Well, if you go back fifty years, the proposals that were made when this topic came on the agenda were quite complex. In order just to account for the descriptive facts that you saw in many different languages, it seemed necessary to assume that universal grammar permitted highly intricate mechanisms, varying a lot from language to language, because languages looked very different from one another.

  Over the past fifty to sixty years, one of the most significant developments, I think, is a steady move, continuing today, toward trying to reduce and refine the assumptions so that they maintain or even expand their explanatory power for particular languages but become more feasible with regard to other conditions that the answer must meet.

  Whatever it is in our brain that generates language developed quite recently in evolutionary time, presumably within the last one hundred thousand years. Something very significant happened, which is presumably the source of human creative endeavor in a wide range of fields: creative arts, tool making, complex social structures. Paleoanthropologists sometimes call it “the great leap forward.” It’s generally assumed, plausibly, that this change had to do with the emergence of language, for which there’s no real evidence before in human history or in any other species. Whatever happened had to be pretty simple, because that’s a very short time span for evolutionary changes to take place.

  The goal of the study of universal grammar is to try to show that there is indeed something quite simple that can meet these various conditions. A plausible theory has to account for the variety of languages and the detail that you see in the surface study of languages—and, at the same time, be simple enough to explain how language could have emerged very quickly, through some small mutation of the brain, or something like that. There has been a lot of progress toward that goal and, in a parallel effort, to try to account for the apparent variability of languages by showing that, in fact, the perceived differences are superficial. The seeming variability has to do with minor changes in a few of the structural principles that are fixed.

  Discoveries in biology have encouraged this line of thinking. If you go back to the late 1970s, François Jacob argued that it could well turn out—and probably is true—that the differences between species, let’s say an elephant and a fly, could be traceable to minor changes in the regulatory circuits of the genetic system, the genes that determine what other genes do in particular places. He shared the Nobel Prize for early work on this topic.

  It looks like something similar may be true of language. There’s now work on an extraordinarily broad range of typologically different languages—and, more and more, it looks like that. There’s plenty of work to do, but a lot of this research falls into place in ways that were unimaginable thirty or forty years ago.

  In biology it was plausible quite recently to claim that organisms can vary virtually without limit and that each one has to be studied on its own. Nowadays that has changed so radically that serious biologists propose that there’s basically one multicellular animal—the “universal genome”—and that the genomes of all the multicellular animals that have developed since the Cambrian explosion half a billion years ago are just modifications of a single pattern. This thesis hasn’t been proven, but it is taken seriously.

  Something similar is going on, I think, in the study of language. Actually, I should make it clear that this is a minority view, if you count noses. Most of the work on language doesn’t even comprehend these developments or take them seriously.

  Is the acquisition of language biological?

  I don’t see how anyone could doubt that. Just consider a newborn infant. The newborn is barraged by all kinds of stimuli, what William James famously called “one great blooming, buzzing confusion.”1 If you put, say, a chimpanzee or a kitten or a songbird in that environment, it can only pick out what’s related to its own genetic capacities. A songbird will pick out a melody of its species or something from all this mass because it’s designed to do that, but it can’t pick out anything that’s relevant to human language. On the other hand, an infant does. The infant instantly picks language-related data out of this mass of confusion. In fact, we now know that this goes on even in the uterus. Newborn infants can detect properties of their mother’s language as distinct from certain—not all, but certain—other languages.

  And then comes a very steady progression of acquisition of complex knowledge, most of it completely reflexive. Teaching doesn’t make any difference. An infant is just picking it out of the environment. And it happens very fast, in a very regular fashion. A lot is known about this process. By about six months, the infant has already analyzed what’s called the prosodic structure of the language, stress, pitch—languages differ that way—and has sort of picked out the language of its mother or whatever it hears, its mother and its peers. By about nine months, roughly, the child has picked out the relevant sound structure of the language. So when we listen to Japanese speakers speaking English, we notice that, from our point of view, they confuse “r” and “l,” meaning they don’t know the distinction. That’s already fixed in an infant’s mind by less than a year old.

  Words are learned very early, and, if you look at the meaning of a word with any care, it’s extremely intricate. But children pick up words often after only one exposure, which means the structure has got to be in the mind already. Something is being tagged with a particular sound. By, say, two years, there’s pretty good evidence that the children have mastered the rudiments of the language. They may just produce one-word or two-word sentences, but there’s now experimental and other evidence that a lot more is in there. By three or four, a normal child will have extensive language capacity.

  Either this is a miracle or it’s biologically driven. There are just no other choices. There are attempts to claim that language acquisition is a matter of pattern recognition or memorization, but even a superficial look at those proposals shows that they collapse very quickly. It doesn’t mean that they’re not being pursued. In fact, those lines of inquiry are very popular. In my view, though, they’re just an utter waste of time.

  There are some very strange ideas out there. For instance, a lot of quite fashionable work claims that children acquire language because humans have the capacity to understand the perspective of another person, according to what’s called theory of mind. The capacity to tell that another person is intending to do something develops in normal children at roughly age three or four. But, in fact, if you look at the autism spectrum, one of the classic syndromes is failure to develop theory of mind. That’s why autistic kids, or adults for that matter, don’t seem to understand what other people’s intentions are. Nevertheless, their language can be absolutely perfect. Furthermore, this capacity to understand the intention of others develops long after the child has mastered almost all the basic character of the language, maybe all of it. So that can’t be the explanation.

  There are other proposals which also just can’t be true, but are still pursued very actively. You read about them in the press, just as you read things about other organisms having language capacity. There’s a lot of mythology about language, which is very popular. I really don’t want to sound too dismissive, but I feel dismissive. I think these ideas can’t be considered seriously.

  Whatever our language faculty is, humans develop it very quickly, on very little data. In some domains, like the meaning of expressions, there’s virtually no data. Nevertheless it’s picked up very quickly and very precisely, in complex ways. Even with sound structure, where there’s a lot of data—there are sounds around, you hear them—it’s still a regular process and it’s distinctively human. Which is striking, because it’s now known that the auditory systems of higher apes, say chimpanzees, appear to be very similar to the human auditory system, even picking out the kinds of sounds that play a distinctive role in human language. Nevertheless, it’s just noise for the ape—they can’t do anything with it. They don’t have the analytical capacities, whatever they are.

  What’s the biological basis for these human capacities? That’s a very difficult problem. We know a lot, for example, about the human visual system, partly through experimentation. At the neural level, we know about it primarily from invasive experiments with other species. If you conduct invasive experiments on other mammals, cats or monkeys, you can find the actual neurons in the visual system that are responding to a light moving in a certain direction. But you can’t do that with language. There is no comparative evidence, because other species don’t have the capacity and you can’t do invasive experiments with humans. Therefore, you have to find much more complex and sophisticated ways to try to tease out some evidence about how the brain is handling all this. There’s been some progress in this extremely difficult problem, but it’s very far from yielding the kind of information you could get from experimentation.

  If you could experiment with humans, say, isolating a child and controlling carefully the data presented to it, you could learn quite a lot about language. But obviously you can’t do that. The closest we’ve come is looking at children with sensory deprivation, blind children, for example. What you find is pretty amazing. For example, a very careful study of the language of the blind found that the blind understand the visual words look, see, glare, gaze, and so on quite precisely, even though they have zero visual experience. That’s astonishing. The most extreme case is actually material that my wife, Carol, worked on, adults who were both deaf and blind. There are techniques for teaching language to the deaf-blind. Actually, Helen Keller, who is the most famous case, invented them for herself. It involves putting your hand on somebody’s face, with your fingers on the cheeks and thumb on the vocal cords. You get some data from that, which is extremely limited. But that’s the data available to the deaf-blind, and they have pretty remarkable language capacity. Helen Keller was incredible, a great writer, very lucid. She’s an extreme case.

  Carol did a study here at MIT. She found in working with people with sensory deprivation that they achieved pretty remarkable language capacity. You have to do quite subtle experiments to find things they don’t know. In fact, they managed to get along by themselves. The primary subject, the one most advanced, was a man who was a tool and die maker, I think. He worked in a factory somewhere in the Midwest. He lived with his wife, who was also deaf-blind, but they found ways to communicate with buzzers in the house and things that you could touch that vibrated. He was able to get from his house to Boston for the experiments by himself. He carried a little card which said on it, “I am deaf-blind. May I put my hand on your face?” so, if he got lost, if somebody would let him do that, he could communicate with them. And he lived a pretty normal life.

  One very striking fact was that all of the cases that succeeded were people who had lost their sight and hearing at about eighteen months old or older—it was primarily through spinal meningitis in those days. People who were younger than that when they became deaf-blind never learned language. There weren’t enough cases to actually prove anything, so the results of the study were never published, but this was a pretty general result. Helen Keller fits. She was twenty months old when she lost her sight and hearing. It suggests, at least, that by eighteen or twenty months, a tremendous amount of language is already known. It can’t be exhibited but it’s in there somewhere, and can possibly be teased out later.

  It’s known that the ability to acquire language starts decreasing rather sharply by about the mid-teens.

  That’s descriptively correct, although, again, it’s not 100 percent correct. There is individual variation. There are individuals who can pick up a language virtually natively at a much later age. Actually, one of them was in our department. Kenneth Hale, one of the great modern linguists, could learn a language like a baby. We used to tease him that he just never matured.

  That’s an exception?

  Yes. By and large, what you said is true. The basis is not really known, but there are some thoughts about it. One thing we know is that, from the very beginning, brain development entails losing capacities. Your brain is originally set up so that it can acquire anything that a human can acquire. In the case of language, say, it’s set up so that you can acquire Japanese, Bantu, Mohawk, English, whatever. Over time that declines. In some cases, it declines even after a few months of age. What’s happening across all cognitive capacities, not only in the case of language, is that synaptic connections, connections inside the brain, are being lost. The brain is being simplified, it’s being refined. Certain things are becoming more effective, other things are just gone. There’s apparently a lot of synaptic loss around the period of puberty or shortly beforehand, and that could be relevant.

 
