Why People Believe Weird Things: Pseudoscience, Superstition, and Other Confusions of Our Time

by Michael Shermer


  Kinsey saw the implications of this variation for moral and ethical systems. If variation and uniqueness are the norm, then what form of morality can possibly envelop all human actions? For human sexuality alone, Kinsey measured 250 different items for each of over ten thousand people. That is 2.5 million data points. Regarding the variety of human behavior, Kinsey concluded, "Endless recombinations of these characters in different individuals swell the possibilities to something which is, for all essential purposes, infinity" (in Christenson 1971, p. 5). Since moral systems claim to be absolute, yet the variation among these systems is staggeringly broad, all absolute moral systems are actually relative to the group conferring (usually imposing) them upon others. At the end of the volume on males, Kinsey concluded that there is virtually no evidence for "the existence of such a thing as innate perversity, even among those individuals whose sexual activities society has been least inclined to accept." On the contrary, as he demonstrated with his vast statistical tables and in-depth analyses, the evidence leads to the conclusion "that most human sexual activities would become comprehensible to most individuals, if they could know the background of each other individual's behavior" (Kinsey, Pomeroy, and Martin 1948, p. 678).

  Variation is what Kinsey called "the most nearly universal of all biologic principles," but it is one that most seem to forget when they "expect their fellows to think and behave according to patterns which may fit the lawmaker, or the imaginary ideals for which the legislation was fashioned, but which are ill-shaped for all real individuals who try to live under them." Kinsey demonstrated that while "social forms, legal restrictions, and moral codes may be, as the social scientist would contend, the codification of human experience," they are, like all statistical and population generalizations, "of little significance when applied to particular individuals" (in Christenson 1971, p. 6). These laws tell us more about the lawmakers than they do about the laws of human nature:

  Prescriptions are merely public confessions of prescriptionists. What is right for one individual may be wrong for the next; and what is sin and abomination to one may be a worthwhile part of the next individual's life. The range of individual variation, in any particular case, is usually much greater than is generally understood. Some of the structural characters in my insects vary as much as twelve hundred percent. In some of the morphologic and physiologic characteristics which are basic to the human behavior which I am studying, the variation is a good twelve thousand percent. And yet social forms and moral codes are prescribed as though all individuals were identical; and we pass judgments, make awards, and heap penalties without regard to the diverse difficulties involved when such different people face uniform demands. (in Christenson 1971, p. 7)

  Kinsey's conclusions may be applied to race. How can we pigeonhole "blacks" as "permissive" or "whites" as "intelligent" when such categories as black and white, permissive and intelligent, are actually best described as a continuum, not a pigeonhole? "Dichotomous variation is the exception and continuous variation is the rule, among men as well as among insects," Kinsey concluded. Likewise, for behavior we identify right and wrong "without allowance for the endlessly varied types of behavior that are possible between the extreme right and the extreme wrong." That being the case, the hope for cultural evolution, like that of biological evolution, depends on the recognition of variation and individualism: "These individual differences are the materials out of which nature achieves progress, evolution in the organic world. In the differences between men lie the hopes of a changing society" (in Christenson 1971, pp. 8-9).

  In America, we tend to confound race and culture. For instance, "white or Caucasian" is not parallel to "Korean-American" but to "Swedish-American." The former roughly indicates a supposed racial or genetic make-up, while the latter roughly acknowledges cultural heritage. In 1995, the Occidental College school newspaper announced that almost half (48.6 percent) of the Frosh class were "people of color." For the life of me, however, I have a difficult time identifying most students by the traditional external signs of race because there has been so much blending over the years and centuries. I suspect most of them would be hyphenated races, a concept even more absurd than "pure" races. Checking a box on a form for race—"Caucasian," "Hispanic," "African-American," "Native American," or "Asian-American"—is untenable and ridiculous. For one thing, "American" is not a race, so labels such as "Asian-American" and "African-American" are still exhibits of our confusion of culture and race. For another thing, how far back does one go in history? Native Americans are really Asians, if you go back more than twenty or thirty thousand years to before they crossed the Bering land bridge between Asia and America. And Asians, several hundred thousand years ago, probably came out of Africa, so we should really replace "Native American" with "African-Asian-Native American." Finally, if the Out of Africa (single racial origin) theory holds true, then all modern humans are from Africa. (Cavalli-Sforza now thinks this may have been as recently as seventy thousand years ago.) Even if that theory gives way to the Candelabra (multiple racial origins) theory, ultimately all hominids came from Africa, and therefore everyone in America should simply check the box next to "African-American." My maternal grandmother was German and my maternal grandfather was Greek. The next time I fill out one of those forms I am going to check "Other" and write in the truth about my racial and cultural heritage: "African-Greek-German-American."

  And proud of it.

  PART 5

  HOPE

  SPRINGS

  ETERNAL

  Hope springs eternal in the human breast;

  Man never Is, but always To be blest.

  The soul, uneasy, and confin'd from home,

  Rests and expatiates in a life to come.

  Lo, the poor Indian! whose untutor'd mind

  Sees God in clouds, or hears him in the wind;

  His soul proud Science never taught to stray

  Far as the solar walk or milky way;

  Yet simple Nature to his hope has giv'n,

  Behind the cloud-topp'd hill, an humbler heav'n.

  —Alexander Pope, An Essay on Man, 1733

  16

  Dr. Tipler Meets Dr. Pangloss

  Can Science Find the Best of All Possible Worlds?

  Alfred Russel Wallace, the nineteenth-century British naturalist whose name is permanently tethered to Charles Darwin's for his co-discovery of natural selection, got himself into trouble in his quest to find a purpose for every structure and every behavior he observed. For Wallace, natural selection shaped every organism to be well adapted to its environment. This overemphasis on natural selection led him into hyper-adaptationism. He argued in the April 1869 issue of the Quarterly Review, much to Darwin's dismay, that the human brain could not entirely have been the product of evolution because in nature there is no reason to have a human-size brain, capable of such unnatural abilities as higher math and aesthetic appreciation. No purpose, no evolution. His answer? "An Overruling Intelligence has watched over the action of those laws, so directing variations and so determining their accumulation, as finally to produce an organization sufficiently perfect to admit of, and even to aid in, the indefinite advancement of our mental and moral nature" (p. 394). The theory of evolution, in other words, proves the existence of God.

  Wallace fell into hyper-adaptationism because he believed evolution should have created the best possible organisms in this best of all possible worlds. Since it had not, there had to be another active agent—a higher intelligence. Ironically, the natural theologians whose beliefs Wallace's evolutionary theories helped to overturn made similar arguments, the most famous of which is William Paley's Natural Theology (1802), which opens with this passage:

  In crossing a heath, suppose I pitched my foot against a stone, and were asked how the stone came to be there; I might possibly answer, that, for any thing I knew to the contrary, it had lain there for ever.... But suppose I had found a watch upon the ground, and it should be inquired how the watch happened to be in that place; I should hardly think of the answer which I had before given—that, for any thing I knew, the watch might have always been there. Yet why should not this answer serve for the watch as well as for the stone? For this reason, and for no other, viz. that, when we come to inspect the watch, we perceive that its several parts are framed and put together for a purpose.

  For Paley, a watch is purposeful and thus must have been created by a being with a purpose. A watch needs a watchmaker, just as a world needs a world-maker—God. Yet both Wallace and Paley might have heeded the lesson from Voltaire's Candide (1759), in which Dr. Pangloss, a professor of "metaphysico-theologo-cosmolonigology," through reason, logic, and analogy "proved" that this is the best of all possible worlds: "'Tis demonstrated that things cannot be otherwise; for, since everything is made for an end, everything is necessarily for the best end. Observe that noses were made to wear spectacles; and so we have spectacles. Legs were visibly instituted to be breeched, and we have breeches" (1985, p. 238). The absurdity of this argument was intentional, for Voltaire firmly rejected the Panglossian paradigm that all is best in the best of all possible worlds. Nature is not perfectly designed, nor is this the best of all possible worlds. It is simply the world we have, quirky, contingent, and flawed as it may be.

  For most people, hope springs eternal that if this is not the best of all possible worlds, it soon will be. That hope is the wellspring of religions, myths, superstitions, and New Age beliefs. We are not surprised to find such hopes at large in the world, of course, but we expect science to rise above wish fulfillment. But should we? After all, science is done by human scientists, complete with their own hopes, beliefs, and wishes. As much as I admire Alfred Russel Wallace, with hindsight it is easy to see where his hopes for a better world biased his science. But surely science has progressed since then? Nope. A plethora of books, mostly by physicists and cosmologists, testifies that hope continues to spring eternal in science as well as religion. Fritjof Capra's The Tao of Physics (1975) and especially The Turning Point (1982) unabashedly root for the blending of science and spirituality and hope for a better world. The Faith of a Physicist (1994) by John Polkinghorne, a Cambridge University theoretical physicist turned Anglican priest, argues that physics proves the Nicene Creed, which is based on a fourth-century formula of Christian faith. In 1995, physicist Paul Davies won the $1 million Templeton Prize for the advancement of religion, in part for his 1991 book, The Mind of God. The nod for the most serious attempts, however, has to go to John Barrow and Frank Tipler's The Anthropic Cosmological Principle (1986) and Frank Tipler's The Physics of Immortality: Modern Cosmology, God and the Resurrection of the Dead (1994). In the first book, the authors claim to prove that the universe was intelligently designed and thus there is an intelligent designer (God); in the second, Tipler hopes to convince readers that they and everyone else will be resurrected in the future by a supercomputer. These attempts provide a case study in how hope shapes belief, even in the most sophisticated science.

  As I read The Physics of Immortality and talked with its author, I was struck by the parallels between Tipler, Wallace, and Paley. Tipler, I came to realize, is Dr. Pangloss in disguise. He is a modern hyper-adaptationist, a twentieth-century natural theologian. (Upon hearing this analogy, Tipler admitted to being a "progressive" Panglossian.) Tipler's highly tutored mind has brought him full circle to Alexander Pope's Indian in his Essay on Man (see the epigraph on the opening page of Part 5), although Tipler finds God not only in the clouds and wind but also on his own solar walk through the cosmos in pursuit of not a humbler heaven but a vainglorious one.

  What in Tipler's background might explain his Panglossian tendencies—his need to make this the best of all possible worlds? From his youth, Tipler was sold on the DuPont motto, "Better living through chemistry," and all that it stood for—unalloyed progress through science. Fascinated, for instance, by the Redstone rocket program and the possibility of sending a man to the moon, Tipler at age eight wrote a letter to the great German rocket scientist Wernher von Braun. "The attitude of unlimited technological progress is what drove Wernher von Braun and it is what has motivated me all my life" (1995).

  Raised in the small rural town of Andalusia, Alabama, where he graduated from high school in 1965 as class valedictorian, Tipler intended to speak out in his graduation speech against segregation—not a popular position to take in the Deep South of the mid-1960s, especially for a youth of seventeen. Tipler's father, an attorney who routinely represented individuals against large corporations and who also opposed segregation, insisted that Frank not go public with such a controversial position since the family had to continue living in the town after Frank went away to college. Despite (or perhaps because of) the fact that he was raised a Southern Baptist with a strong fundamentalist influence, Tipler says he was an agnostic by the age of sixteen. Brought up in an upper-middle-class environment by a politically liberal father and apolitical mother, Tipler is a firstborn with one brother four years his junior.

  What difference does birth order make? Frank Sulloway (1996) has conducted a multivariate correlational study, examining the tendency toward rejection of or receptivity to heretical theories based on such variables as "date of conversion to the new theory, age, sex, nationality, socioeconomic class, sibship size, degree of previous contact with the leaders of the new theory, religious and political attitudes, fields of scientific specialization, previous awards and honors, three independent measures of eminence, religious denomination, conflict with parents, travel, educational attainment, physical handicaps, and parents' ages at birth." Using multiple regression models, Sulloway discovered, in analyzing over one million data points, that birth order was the strongest factor in intellectual receptivity to innovation in science.

  Consulting over a hundred historians of science, Sulloway had them evaluate the stances taken by 3,892 participants in twenty-eight disparate scientific controversies dating from 1543 to 1967. Sulloway, himself a laterborn, found that the likelihood of accepting a revolutionary idea is 3.1 times greater for laterborns than firstborns; for radical revolutions, the likelihood is 4.7 times higher. Sulloway noted that "the likelihood of this happening by chance is virtually nil." Historically, this indicates that "laterborns have indeed generally introduced and supported other major conceptual transformations over the protests of their firstborn colleagues. Even when the principal leaders of the new theory occasionally turn out to be firstborns—as was the case with Newton, Einstein, and Lavoisier—the opponents as a whole are still predominantly firstborns, and the converts continue to be mostly laterborns" (p. 6). As a "control group" of sorts, Sulloway examined data on only children and found them wedged between firstborns and laterborns in their support for radical theories.
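  If Sulloway's multipliers are read as odds ratios—one common way such "times more likely" figures are reported from regression models—the arithmetic can be reconstructed from a simple two-by-two table of birth order by stance. The Python sketch below uses invented counts purely to illustrate the calculation; these are not Sulloway's data, and his actual estimates came from multivariate models:

```python
# Illustrative sketch with made-up counts: stance on a revolutionary
# theory, tabulated by birth order. Only the arithmetic is the point.
laterborn_accept, laterborn_reject = 620, 380   # hypothetical counts
firstborn_accept, firstborn_reject = 345, 655   # hypothetical counts

# Odds of acceptance within each group.
odds_laterborn = laterborn_accept / laterborn_reject   # ~1.63
odds_firstborn = firstborn_accept / firstborn_reject   # ~0.53

# Odds ratio: how many times greater the laterborns' odds of accepting are.
print(f"odds ratio: {odds_laterborn / odds_firstborn:.1f}")  # prints 3.1
```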

  Why are firstborns more conservative and influenced by authority? Why are laterborns more liberal and receptive to ideological change? What is the connection between birth order and personality? Firstborns, being first, receive considerably more attention from their parents than laterborns, who tend to receive greater freedom, less indoctrination into the ideologies of authorities, and less training in obedience to them. Firstborns generally have greater responsibilities, including the care of younger siblings, and thus become surrogate parents. Laterborns are frequently a step removed from parental authority and thus less inclined to obey and adopt the beliefs of the higher authority. Sulloway has taken this a step further by applying a Darwinian sibling-competition model in which children must compete for limited parental resources and recognition. Firstborns are larger, faster, and older, and so receive the lion's share of the goodies. Laterborns, in order to maximize parental benefits, diversify into new areas. This explains why firstborns tend to go into more traditional careers, whereas laterborns seek out less traditional ones.

  Developmental psychologists J. S. Turner and D. B. Helms noted that "usually, firstborns become their parents' center of attraction and monopolize their time. The parents of firstborns are usually not only young and eager to romp with their children but also spend considerable time talking to them and sharing their activities. This tends to strengthen bonds of attachment between the two" (1987, p. 175). Quite obviously, this attention would include more rewards and punishment, thus reinforcing obedience to authority and controlled acceptance of the "right way" to think. R. Adams and B. Phillips (1972) and J. S. Kidwell (1981) report that this distribution of attention causes firstborns to strive harder for approval than laterborns, and H. Markus (1981) concluded that firstborns tend to be more anxious, dependent, and conforming than laterborns. I. Hilton (1967), in a mother-child interactive experiment with twenty firstborn, twenty laterborn, and twenty only children, found that at four years of age firstborns were significantly more dependent on their mothers and asked more frequently for help or reassurance than the laterborn or only children did. In addition, mothers were most likely to interfere with a firstborn child's task (constructing a puzzle). Finally, R. Nisbett (1968) showed that laterborns are far more likely than firstborns to participate in relatively dangerous sports, which is linked to risk taking and thus to "heretical" thinking.

  Sulloway is not suggesting that birth order alone determines receptivity to radical ideas. Far from it, in fact, as he notes that "birth order is hypothesized to be the occasion for psychologically formative influences operating within the family" (p. 12). In other words, birth order is a predisposing variable that sets the stage for numerous other variables, such as age, sex, and social class, to influence receptivity. Not all scientific theories are equally radical, of course, and in taking this into consideration, Sulloway discovered a correlation between laterborn support and the degree of "liberal or radical leanings" of a controversy. He noted that laterborns tended "to prefer statistical or probabilistic views of the world (Darwinian natural selection and quantum mechanics, for example) to a worldview premised on predictability and order." By contrast, he found that when firstborns did accept new theories, they were typically theories of the most conservative type, "theories that typically reaffirm the social, religious, and political status quo and that also emphasize hierarchy, order, and the possibility of complete scientific certainty" (p. 10).
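  In statistical terms, a variable that "sets the stage" for others to operate is naturally expressed as an interaction effect. The sketch below is a toy illustration of that idea with simulated data; the variable names, codings, and coefficients are all invented and are not Sulloway's actual models:

```python
# Toy illustration: birth order as a predisposing variable, modeled as
# an interaction term in a logistic regression on simulated data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 2000
df = pd.DataFrame({
    "laterborn": rng.integers(0, 2, n),   # 1 = laterborn (invented coding)
    "age": rng.normal(40, 10, n),         # age at time of the controversy
})

# Simulate receptivity: laterborns are more receptive overall, and
# youth amplifies receptivity only among laterborns.
lin = -1.0 + 1.1 * df["laterborn"] - 0.03 * (df["age"] - 40) * df["laterborn"]
df["accepted"] = (rng.random(n) < 1 / (1 + np.exp(-lin))).astype(int)

# The laterborn:age interaction term is what captures "setting the stage":
# age influences receptivity differently depending on birth order.
model = smf.logit("accepted ~ laterborn * age", data=df).fit(disp=0)
print(model.params)
```

A model of this form keeps birth order from being treated as a lone cause: its estimated effect is allowed to vary with the other variables, which matches the reading Sulloway gives above.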

 
