The Mismeasure of Man

by Stephen Jay Gould


  I believe that human sociobiologists have made a fundamental mistake in categories. They are seeking the genetic basis of human behavior at the wrong level. They are searching among the specific products of generating rules—Joe’s homosexuality, Martha’s fear of strangers—while the rules themselves are the genetic deep structures of human behavior. For example, E. O. Wilson (1978, p. 99) writes: “Are human beings innately aggressive? This is a favorite question of college seminars and cocktail party conversations, and one that raises emotion in political ideologues of all stripes. The answer to it is yes.” As evidence, Wilson cites the prevalence of warfare in history and then discounts any current disinclination to fight: “The most peaceable tribes of today were often the ravagers of yesteryear and will probably again produce soldiers and murderers in the future.” But if some peoples are peaceable now, then aggression itself cannot be coded in our genes, only the potential for it. If innate only means possible, or even likely in certain environments, then everything we do is innate and the word has no meaning. Aggression is one expression of a generating rule that anticipates peacefulness in other common environments. The range of specific behaviors engendered by the rule is impressive and a fine testimony to flexibility as the hallmark of human behavior. This flexibility should not be obscured by the linguistic error of branding some common expressions of the rule as “innate” because we can predict their occurrence in certain environments.

  Sociobiologists work as if Galileo had really mounted the Leaning Tower (apparently he did not), dropped a set of diverse objects over the side, and sought a separate explanation for each behavior—the plunge of the cannonball as a result of something in the nature of cannonballness; the gentle descent of the feather as intrinsic to featherness. We know, instead, that the wide range of different falling behaviors arises from an interaction between two physical rules—gravity and frictional resistance. This interaction can generate a thousand different styles of descent. If we focus on the objects and seek an explanation for the behavior of each in its own terms, we are lost. The search among specific behaviors for the genetic basis of human nature is an example of biological determinism. The quest for underlying generating rules expresses a concept of biological potentiality. The question is not biological nature vs. nonbiological nurture. Determinism and potentiality are both biological theories—but they seek the genetic basis of human nature at fundamentally different levels.
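
  The logic of this analogy can be made concrete in a few lines of code. The sketch below is not Gould's; it is a minimal, hypothetical Python illustration, with invented masses and drag coefficients, of how one pair of rules (gravity plus air resistance) generates very different descents in air, while removing the environmental factor (a vacuum) makes every object fall alike.

```python
# Illustrative sketch only (not from Gould's text): the same two rules --
# gravity plus air resistance -- generate very different "falling behaviors"
# for a cannonball and a feather, yet in a vacuum (drag removed) they
# behave identically. Masses and drag coefficients below are invented.

G = 9.81  # gravitational acceleration, m/s^2

def fall_time(mass, drag_coeff, height, dt=0.001):
    """Time to fall `height` metres with linear drag, by simple Euler steps."""
    v, y, t = 0.0, 0.0, 0.0
    while y < height:
        a = G - (drag_coeff / mass) * v   # one rule set applied to every object
        v += a * dt
        y += v * dt
        t += dt
    return t

objects = {
    "cannonball": {"mass": 5.0,   "drag": 0.05},  # heavy, little drag
    "feather":    {"mass": 0.005, "drag": 0.10},  # light, much drag
}

for name, p in objects.items():
    in_air    = fall_time(p["mass"], p["drag"], height=10.0)
    in_vacuum = fall_time(p["mass"], 0.0,       height=10.0)
    print(f"{name:10s}  air: {in_air:6.2f} s   vacuum: {in_vacuum:5.2f} s")

# In air the descents differ wildly; in a vacuum both print about 1.43 s.
# The differences lived in the interaction with the environment, not in
# "cannonballness" or "featherness".
```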

  Pursuing the Galilean analogy, if cannonballs act by cannonballness, feathers by featherness, then we can do little beyond concocting a story for the adaptive significance of each. We would never think of doing the great historical experiment—equalizing the effective environment by placing both in a vacuum and observing an identical behavior in descent. This hypothetical example illustrates the social role of biological determinism. It is fundamentally a theory about limits. It takes current ranges in modern environments as an expression of direct genetic programing, rather than a limited display of much broader potential. If a feather acts by featherness, we cannot change its behavior while it remains a feather. If its behavior is an expression of broad rules tied to specific circumstances, we anticipate a wide range of behaviors in different environments.

  Why should human behavioral ranges be so broad, when anatomical ranges are generally narrower? Is this claim for behavioral flexibility merely a social hope, or is it good biology as well? Two different arguments lead me to conclude that wide behavioral ranges should arise as consequences of the evolution and structural organization of our brain. Consider, first of all, the probable adaptive reasons for evolving such a large brain. Human uniqueness lies in the flexibility of what our brain can do. What is intelligence, if not the ability to face problems in an unprogramed (or, as we often say, creative) manner? If intelligence sets us apart among organisms, then I think it probable that natural selection acted to maximize the flexibility of our behavior. What would be more adaptive for a learning and thinking animal: genes selected for aggression, spite, and xenophobia; or selection for learning rules that can generate aggression in appropriate circumstances and peacefulness in others?

  Secondly, we must be wary of granting too much power to natural selection by viewing all basic capacities of our brain as direct adaptations. I do not doubt that natural selection acted in building our oversized brains—and I am equally confident that our brains became large as an adaptation for definite roles (probably a complex set of interacting functions). But these assumptions do not lead to the notion, often uncritically embraced by strict Darwinians, that all major capacities of the brain must arise as direct products of natural selection. Our brains are enormously complex computers. If I install a much simpler computer to keep accounts in a factory, it can also perform many other, more complex tasks unrelated to its appointed role. These additional capacities are ineluctable consequences of structural design, not direct adaptations. Our vastly more complex organic computers were also built for reasons, but possess an almost terrifying array of additional capacities—including, I suspect, most of what makes us human. Our ancestors did not read, write, or wonder why most stars do not change their relative positions while five wandering points of light and two larger disks move through a path now called the zodiac. We need not view Bach as a happy spinoff from the value of music in cementing tribal cohesion, or Shakespeare as a fortunate consequence of the role of myth and epic narrative in maintaining hunting bands. Most of the behavioral “traits” that sociobiologists try to explain may never have been subject to direct natural selection at all—and may therefore exhibit a flexibility that features crucial to survival can never display. Should these complex consequences of structural design even be called “traits”? Is this tendency to atomize a behavioral repertory into a set of “things” not another example of the same fallacy of reification that has plagued studies of intelligence throughout our century?

  Fig. 7.1. A juvenile and adult chimpanzee, showing the greater resemblance of humans to the baby and illustrating the principle of neoteny in human evolution.

  Flexibility is the hallmark of human evolution. If humans evolved, as I believe, by neoteny (see Chapter 4 and Gould, 1977, pp. 352–404), then we are, in a more than metaphorical sense, permanent children. (In neoteny, rates of development slow down and juvenile stages of ancestors become the adult features of descendants.) Many central features of our anatomy link us with fetal and juvenile stages of primates: small face, vaulted cranium and large brain in relation to body size, unrotated big toe, foramen magnum under the skull for correct orientation of the head in upright posture, primary distribution of hair on head, armpits, and pubic areas. If one picture is worth a thousand words, consider Fig. 7.1. In other mammals, exploration, play, and flexibility of behavior are qualities of juveniles, only rarely of adults. We retain not only the anatomical stamp of childhood, but its mental flexibility as well. The idea that natural selection should have worked for flexibility in human evolution is not an ad hoc notion born in hope, but an implication of neoteny as a fundamental process in our evolution. Humans are learning animals.

  In T. H. White’s novel The Once and Future King, a badger relates a parable about the origin of animals. God, he recounts, created all animals as embryos and called each before his throne, offering them whatever additions to their anatomy they desired. All opted for specialized adult features—the lion for claws and sharp teeth, the deer for antlers and hoofs. The human embryo stepped forth last and said:

  “Please God, I think that you made me in the shape which I now have for reasons best known to Yourselves and that it would be rude to change. If I am to have my choice, I will stay as I am. I will not alter any of the parts which you gave me.… I will stay a defenceless embryo all my life, doing my best to make myself a few feeble implements out of the wood, iron, and the other materials which You have seen fit to put before me.…” “Well done,” exclaimed the Creator in delighted tone. “Here, all you embryos, come here with your beaks and whatnots to look upon Our first Man. He is the only one who has guessed Our riddle.… As for you, Man.… You will look like an embryo till they bury you, but all the others will be embryos before your might. Eternally undeveloped, you will always remain potential in Our image, able to see some of Our sorrows and to feel some of Our joys. We are partly sorry for you, Man, but partly hopeful. Run along then, and do your best.”

  Epilogue

  IN 1927 OLIVER WENDELL HOLMES, Jr., delivered the Supreme Court’s decision upholding the Virginia sterilization law in Buck v. Bell. Carrie Buck, a young mother with a child of allegedly feeble mind, had scored a mental age of nine on the Stanford-Binet. Carrie Buck’s mother, then fifty-two, had tested at mental age seven. Holmes wrote, in one of the most famous and chilling statements of our century:

  We have seen more than once that the public welfare may call upon the best citizens for their lives. It would be strange if it could not call upon those who already sap the strength of the state for these lesser sacrifices.… Three generations of imbeciles are enough.

  (The line is often miscited as “three generations of idiots.…” But Holmes knew the technical jargon of his time, and the Bucks, though not “normal” by the Stanford-Binet, were one grade above idiots.)

  Buck v. Bell is a signpost of history, an event linked with the distant past in my mind. The Babe hit his sixty homers in 1927, and legends are all the more wonderful because they seem so distant. I was therefore shocked by an item in the Washington Post on 23 February 1980—for few things can be more disconcerting than a juxtaposition of neatly ordered and separated temporal events. “Over 7,500 sterilized in Virginia,” the headline read. The law that Holmes upheld had been implemented for forty-eight years, from 1924 to 1972. The operations had been performed in mental-health facilities, primarily upon white men and women considered feeble-minded and antisocial—including “unwed mothers, prostitutes, petty criminals and children with disciplinary problems.”

  Carrie Buck, then in her seventies, was still living near Charlottesville. Several journalists and scientists visited Carrie Buck and her sister, Doris, during the last years of their lives. Both women, though lacking much formal education, were clearly able and intelligent. Nonetheless, Doris Buck had been sterilized under the same law in 1928. She later married Matthew Figgins, a plumber. But Doris Buck was never informed. “They told me,” she recalled, “that the operation was for an appendix and rupture.” So she and Matthew Figgins tried to conceive a child. They consulted physicians at three hospitals throughout her child-bearing years; no one recognized that her Fallopian tubes had been severed. Last year, Doris Buck Figgins finally discovered the cause of her lifelong sadness.

  One might invoke an unfeeling calculus and say that Doris Buck’s disappointment ranks as nothing compared with millions dead in wars to support the designs of madmen or the conceits of rulers. But can one measure the pain of a single dream unfulfilled, the hope of a defenseless woman snatched by public power in the name of an ideology advanced to purify a race? May Doris Buck’s simple and eloquent testimony stand for millions of deaths and disappointments and help us to remember that the Sabbath was made for man, not man for the Sabbath: “I broke down and cried. My husband and me wanted children desperately. We were crazy about them. I never knew what they’d done to me.”

  Critique of The Bell Curve

  The Bell Curve

  The Bell Curve by Richard J. Herrnstein and Charles Murray provides a superb and unusual opportunity for insight into the meaning of experiment as a method in science. Reduction of confusing variables is the primary desideratum in all experiments. We bring all the buzzing and blooming confusion of the external world into our laboratories and, holding all else constant in our artificial simplicity, try to vary just one potential factor at a time. Often, however, we cannot use such an experimental method, particularly for most social phenomena when importation into the laboratory destroys the subject of our investigation—and then we can only yearn for simplifying guides in nature. If the external world therefore obliges and holds some crucial factors constant for us, then we can only offer thanks for such a natural boost to understanding.

  When a book garners as much attention as The Bell Curve has received, we wish to know the causes. One might suspect content itself—a startling new idea, or an old suspicion now verified by persuasive data—but the reason might well be social acceptability, or just plain hype. The Bell Curve contains no new arguments and presents no compelling data to support its anachronistic social Darwinism. I must therefore conclude that its initial success in winning such attention must reflect the depressing temper of our time—a historical moment of unprecedented ungenerosity, when a mood for slashing social programs can be so abetted by an argument that beneficiaries cannot be aided due to inborn cognitive limits expressed as low IQ scores.

  The Bell Curve rests upon two distinctly different but sequential arguments, which together encompass the classical corpus of biological determinism as a social philosophy. The first claim (Chapters 1–12) rehashes the tenets of social Darwinism as originally constituted. (“Social Darwinism” has often been used as a general term for any evolutionary argument about the biological basis of human differences, but the initial meaning referred to a specific theory of class stratification within industrial societies, particularly to the idea that a permanently poor underclass consisting of genetically inferior people had precipitated down into their inevitable fate.)

  This social Darwinian half of The Bell Curve arises from a paradox of egalitarianism. So long as people remain on top of the social heap by accident of a noble name or parental wealth, and so long as members of despised castes cannot rise whatever their talents, social stratification will not reflect intellectual merit, and brilliance will be distributed across all classes. But if true equality of opportunity can be attained, then smart people rise and the lower classes rigidify by retaining only the intellectually incompetent.

  This nineteenth-century argument has attracted a variety of twentieth-century champions, including Stanford psychologist Lewis M. Terman, who imported Binet’s original test from France, developed the Stanford-Binet IQ test, and gave a hereditarian interpretation to the results (one that Binet had vigorously rejected in developing this style of test); Prime Minister Lee Kuan Yew of Singapore, who tried to institute a eugenics program of rewarding well-educated women for higher birthrates; and Richard Herrnstein, coauthor of The Bell Curve and author of a 1971 Atlantic Monthly article that presented the same argument without documentation. The general claim is neither uninteresting nor illogical, but does require the validity of four shaky premises, all asserted (but hardly discussed or defended) by Herrnstein and Murray. Intelligence, in their formulation, must be depictable as a single number, capable of ranking people in linear order, genetically based, and effectively immutable. If any of these premises are false, the entire argument collapses. For example, if all are true except immutability, then programs for early intervention in education might work to boost IQ permanently, just as a pair of eyeglasses may correct a genetic defect in vision. The central argument of The Bell Curve fails because most of the premises are false.

  The second claim (Chapters 13–22), the lightning rod for most commentary, extends the argument for innate cognitive stratification by social class to a claim for inherited racial differences in IQ—small for Asian superiority over Caucasian, but large for Caucasians over people of African descent. This argument is as old as the study of race. The last generation’s discussion centered upon the sophisticated work of Arthur Jensen (far more elaborate and varied than anything presented in The Bell Curve, and therefore still a better source for grasping the argument and its fallacies) and the cranky advocacy of William Shockley.

  The central fallacy in using the substantial heritability of within-group IQ (among whites, for example) as an explanation for average differences between groups (whites vs. blacks, for example) is now well known and acknowledged by all, including Herrnstein and Murray, but deserves a restatement by example. Take a trait far more heritable than anyone has ever claimed for IQ, but politically uncontroversial—body height. Suppose that I measure adult male height in a poor Indian village beset with pervasive nutritional deprivation. Suppose the average height of adult males is 5 feet 6 inches, well below the current American mean of about 5 feet 9 inches. Heritability within the village will be high—meaning that tall fathers (they may average 5 feet 8 inches) tend to have tall sons, while short fathers (5 feet 4 inches on average) tend to have short sons. But high heritability within the village does not mean that better nutrition might not raise average height to 5 feet 10 inches (above the American mean) in a few generations. Similarly the well-documented 15-point average difference in IQ between blacks and whites in America, with substantial heritability of IQ in family lines within each group, permits no conclusion that truly equal opportunity might not raise the black average to equal or surpass the white mean.
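
  The arithmetic behind this example can be made explicit. The following sketch is not Gould's; it is a hypothetical Python simulation with invented numbers, in which two villages draw on the same gene pool, father-son resemblance (a crude stand-in for heritability) is high within each village, and yet the entire gap between the village means comes from a shared environmental deficit.

```python
# Hypothetical, illustrative simulation (not from Gould's text): the figures
# are invented to mirror his height example. Two villages share one gene
# pool; the "deprived" village suffers a uniform nutritional deficit.
import random

random.seed(0)

def make_village(n, env_deficit):
    """Return (father_height, son_height) pairs in inches."""
    pairs = []
    for _ in range(n):
        father_gene = random.gauss(69.0, 2.5)        # same gene pool everywhere
        son_gene = 69.0 + 0.9 * (father_gene - 69.0) + random.gauss(0.0, 1.0)
        father = father_gene - env_deficit + random.gauss(0.0, 0.5)
        son = son_gene - env_deficit + random.gauss(0.0, 0.5)
        pairs.append((father, son))
    return pairs

def correlation(pairs):
    """Pearson correlation between the two columns of (x, y) pairs."""
    xs, ys = zip(*pairs)
    n = len(pairs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in pairs) / n
    vx = sum((x - mx) ** 2 for x in xs) / n
    vy = sum((y - my) ** 2 for y in ys) / n
    return cov / (vx * vy) ** 0.5

well_fed = make_village(5000, env_deficit=0.0)   # sons average ~69 in (5 ft 9 in)
deprived = make_village(5000, env_deficit=3.0)   # sons average ~66 in (5 ft 6 in)

for name, village in [("well fed", well_fed), ("deprived", deprived)]:
    mean_son = sum(son for _, son in village) / len(village)
    print(f"{name:8s}  mean son height {mean_son:.1f} in,"
          f"  father-son correlation {correlation(village):.2f}")

# Both villages print a high within-group father-son correlation, but the
# three-inch difference between their means comes only from the nutritional
# deficit and would vanish if the environments were equalized.
```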

  Since Herrnstein and Murray know and acknowledge this critique, they must construct an admittedly circumstantial case for attributing most of the black-white mean difference to irrevocable genetics—while properly stressing that the average difference doesn’t help at all in judging any particular person because so many individual blacks score above the white mean in IQ. Quite apart from the rhetorical dubiety of this old ploy in a shopworn genre—“some-of-my-best-friends-are-group-x”—Herrnstein and Murray violate fairness by converting a complex case that can only yield agnosticism into a biased brief for permanent and heritable difference. They impose this spin by turning every straw on their side into an oak, while mentioning but downplaying the strong circumstantial case for substantial malleability and little average genetic difference (impressive IQ gains for poor black children adopted into affluent and intellectual homes; average IQ increases in some nations since World War II equal to the entire 15-point difference now separating blacks and whites in America; failure to find any cognitive differences between two cohorts of children born out of wedlock to German women, and raised in Germany as Germans, but fathered by black and white American soldiers).

 
