It was famously argued that male aggression is intrinsically good for the species, since it is always better for the species if the stronger of two males takes control of a favored female. But this is precisely what is not known. Whether an aggressively successful male has genes at other loci that are beneficial to his progeny is an open question that must be answered in each separate case (especially by the choosing female). Perhaps the success of aggressive males spreads genes only for aggressiveness, which are otherwise useless for the species (or a female’s daughters). In any case, male elephant seals fighting for access to females clumped together on breeding islands typically kill about 10 percent of the young (fathered by other males) every year by trampling them to death during fights. In what sense is male aggression good for the species? Are they eliminating inferior genes underfoot?
Close relationships are also easily imagined to be conflict-free. Thus, mother/offspring coevolution is allegedly favored—each party evolving to help the other. As we saw in Chapter 4, nothing like this is actually true of real families. Even in the formation of the placenta, the mother does not help the invading fetal tissue—she puts up chemical and physical obstacles (the better to avoid later excess investment). Likewise, in the 1960s, bird watchers liked to imagine that the families they loved to observe were free of conflict, but this was soon proven wrong when rates of extra-pair paternity exceeding 20 percent were regularly reported.
Thus, for years evolutionary biologists have used a form of argumentation that helped cement in the social sciences and elsewhere the notion that evolution favored what was good for the family, the group, the culture, the species, and perhaps even the ecosystem, while minimizing the reality of conflict within any of these entities. Anthropologists soon rationalized warfare itself as favored by evolution because it too was such a nifty population-regulation device. Note that the error is virtually irrelevant for nonsocial traits. The human locking kneecap allows us to stand erect without wasting energy in tensed legs. It evolved because it benefited the individual with the new kneecap, but if you said it evolved to benefit the species, you would not misinterpret the kneecap. Not so for social traits. Here, as we have seen, we can exactly invert the meaning of a trait by failing to see how it is favored in some individuals even though it may be costly to others. Instead we imagine that everyone benefits. This often amounts to reaffirming Pangloss’s theorem—that everything is for the best in the best of all possible worlds.
Likewise, altruism toward others presents no great problem for species-advantage thinking, because as long as the benefit to the recipient is greater than the cost to the altruist, there is a net benefit for the species. Of course, at the individual level, altruism is a problem to explain and requires special conditions, such as kinship or reciprocal relations, with internal conflict in both cases. The latter generates a sense of fairness with which to evaluate unreciprocated acts, an adaptation unnecessary under a group-selection view.
IS ECONOMICS A SCIENCE?
The short answer is no. Economics acts like a science and quacks like one—it has developed an impressive mathematical apparatus and awards itself a Nobel Prize each year—but it is not yet a science. It fails to ground itself in underlying knowledge (in this case, biology). This is curious on its face, because models of economic activity must inevitably be based on some notion of what an individual organism is up to. What are we trying to maximize? Here economists play a shell game. People are expected to attempt to maximize their “utility.” And what is utility? Well, anything people wish to maximize. In some situations, you will try to maximize money acquired, in others food, and in yet others sex over food and money. So we need “preference functions” to tell us when one kind of utility takes precedence over another. These must be empirically determined, since economics by itself can provide no theory for how the organism is expected to rank these variables. But determining all of the preference functions by measurement in all the relevant situations is hopeless from the outset, even for a single organism, much less a group.
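To make the shell game concrete, here is a minimal sketch (my own illustration, with invented names and numbers, not a model drawn from economics itself) of what such a preference function amounts to: a separate, empirically measured weighting of money, food, sex, and so on for every context, with nothing in the theory to say where the weights come from.

# Hypothetical sketch: a "preference function" is a context-specific table of
# weights that must be measured, not derived. Every new context needs its own table.
from typing import Dict

def utility(bundle: Dict[str, float], weights: Dict[str, float]) -> float:
    """Weighted-sum utility for one context; the weights are empirical inputs."""
    return sum(weights.get(good, 0.0) * amount for good, amount in bundle.items())

# Two contexts, two separately measured tables (numbers invented for illustration).
weights_flush = {"money": 1.0, "food": 0.2, "sex": 0.1}
weights_starving = {"money": 0.1, "food": 1.0, "sex": 0.05}

bundle = {"money": 50.0, "food": 2.0}
print(utility(bundle, weights_flush))     # what the organism "maximizes" in one context
print(utility(bundle, weights_starving))  # and in another

Multiply the number of contexts by the number of goods and the number of people, and the hopelessness of measuring it all, noted above, is apparent.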
As it turns out, biology now has a well-developed theory of exactly what utility is (even if it misrepresented the truth for some one hundred years) based on Darwin’s concept of reproductive success. If you are talking about utility (that is, benefit) to a living creature, then it is useful to know that this ultimately refers to the individual’s inclusive fitness, that is, the number of its surviving offspring plus its effects (positive and negative) on the reproductive success of relatives, each effect devalued by the degree of relatedness to the relative in question. In many situations, the added precision of this definition (compared to reproductive success alone) makes no difference, but by resolutely acting as if they can produce a science out of whole cloth, that is, independent of noneconomic scientific knowledge, economists miss out on a whole series of linkages that may be critical. They often implicitly assume, as we noted in the first chapter, that market forces will naturally constrain the cost of deception in social and economic systems, but such a belief fails to correspond with what we know from daily life, much less biology more generally. Yet such is the detachment of this “science” from reality that these contradictions attract notice only when the entire world is hurtling into an economic depression based on corporate greed wedded to false economic theory.
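For readers who want the bookkeeping spelled out, the standard way to write this down (a sketch in conventional symbols, not notation from this book) is:

\[
  W_{\text{inclusive}} \;=\; W_{\text{self}} \;+\; \sum_{j} r_j \, \Delta W_j ,
\]

where \(W_{\text{self}}\) is the individual’s own number of surviving offspring, \(\Delta W_j\) is the (positive or negative) effect of its behavior on the reproductive success of relative \(j\), and \(r_j\) is the degree of relatedness to that relative.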
The mistake is partly related to the fact that “utility” has ambiguity built into it. It can refer to the utility of your actions to you or to others, including the rest of your group. Economists easily imagine that the two kinds of utility are well aligned. They often argue that individuals acting for personal utility (undefined) will tend to benefit the group (provide general utility). Thus they tend to be blind to the possibility that unrestrained pursuit of personal utility can have disastrous effects on group benefit. This is a well-known fallacy in biology, with hundreds of examples. Nowhere do we assume in advance that the two kinds of utility are positively aligned. This must be shown separately for any given case.
One recent effort by economics to link up with allied disciplines is called behavioral economics, a link with psychology that is most welcome. But as usual, economists resolutely refuse to make the final link to evolutionary theory, even when going through the motions. That is, even those economists who propose evolutionary explanations of economic behavior often do so with unusual, illogical assumptions. For example, a common recent mistake (published in all the best journals) is to assume that our behavior evolved specifically to fit artificial economic games.
To imagine how bizarre this is, consider the ultimatum game described in Chapter 2. People often reject unfair offers of a split of money by anonymous others (for example, 80 percent to the proposer and 20 percent to the recipient) even though they thereby lose money. Thus, the game measures our sense of fairness: How much are we willing to suffer in order to punish someone acting unfairly toward us? But a group of economists (with some anthropologists thrown in for added rigor) has made the extraordinary argument that people are acting as if they had evolved to fit this unusual lab situation. Put differently, that we reject unfair offers at a cost to ourselves in order to punish the perpetrator in a completely anonymous exchange means to them that the bias evolved to fit exactly this situation—one-time exchanges with no possible return benefit to the actor and no relatedness, only a benefit to the group. Once again, group trumps individual. But this is as logical as arguing that our terror watching a horror film evolved to fit movie showings. Biologists have brought living creatures into the laboratory for centuries to study their traits, but no one I know of has shortcut the study of a trait’s function by imagining that the trait evolved to fit the laboratory.
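For concreteness, here is a minimal sketch of the game’s payoff structure (an illustration added here, with made-up numbers, not the experimenters’ actual protocol or code). It shows only why rejection is costly to the rejector, which is exactly what makes the game a measure of willingness to pay to punish unfairness.

# Ultimatum-game payoffs (illustrative sketch; numbers invented).
# The proposer offers a split of a fixed pot; if the responder rejects, both get nothing.
def ultimatum(pot: float, offer_to_responder: float, responder_accepts: bool):
    """Return (proposer_payoff, responder_payoff) for one anonymous, one-shot round."""
    if not responder_accepts:
        return (0.0, 0.0)
    return (pot - offer_to_responder, offer_to_responder)

# An 80/20 split: rejecting costs the responder 20 but costs the proposer 80,
# so rejection reads as costly punishment of an unfair partner.
print(ultimatum(100.0, 20.0, responder_accepts=True))   # (80.0, 20.0)
print(ultimatum(100.0, 20.0, responder_accepts=False))  # (0.0, 0.0)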
A recent Nobel winner in economics wondered how it was possible for his well-developed science to fail completely to predict the catastrophic economic events that started in 2008. One part, of course, is that economic events are intrinsically complex, involving many factors, and the final result, the aggregate of the behavior of an enormous number of people, though not quite as complex as the weather, is almost as difficult to predict. As for the cause the economist located, it was infatuation with beautiful mathematics at the cost of attention to reality. Surely this is part of the problem, but nowhere does he suggest that the first piece of reality they should pay attention to—and this has been obvious for some thirty years now—is biology, in particular evolutionary theory. If only thirty years ago economists had built a theory of economic utility on a theory of biological self-interest—forget the beautiful math and pay attention to the relevant math—we might have been spared some of the extravagances of economic thought regarding, for example, built-in anti-deception mechanisms kicking in to protect us from the harmful effects of unrestrained economic egotism by those already at the top.
Finally, when a science is a pretend science rather than the real thing, it also falls into sloppy and biased systems for evaluating the truth. Consider the following, a common occurrence during the past fifteen years. The World Bank advises developing countries to open their markets to foreign goods, let the markets rule, and slash the welfare state. When the program is implemented and fails, the diagnosis is simple: “Our advice was good but you failed to follow it closely enough.” There is little risk of being falsified with this kind of procedure.
CULTURAL ANTHROPOLOGY
Cultural anthropology made a tragic left turn in the mid-1970s from which it has yet to recover (at least in the United States). Before then, the field was called social anthropology and included all forms of human social behavior, especially as displayed by different cultures and peoples. The field was meant to partner with physical anthropology, the study of the body, including fossils and artifacts from the past. But suddenly in the early 1970s, strong social theory emerged from biology and a variety of subjects were addressed seriously for the first time: kinship theory, including parent/offspring relations, relative parental investment, and the evolution of sex differences, the sex ratio, reciprocal altruism and a sense of fairness, and so on. Social anthropologists had a choice: accept the new work, master it, and rewrite their own discipline along the new lines, or reject the new work and protect their own expertise (such as it was). As has been noted, “Faced with the choice between changing one’s mind and proving that there is no need to do so, almost everyone gets busy on the proof.” This is perhaps especially true in academia.
Consider your dilemma as a social anthropologist. You have invested twenty years of your life in mastering social anthropology. Along the way, you have completely neglected biology. Now comes the choice: acknowledge biology (painful), invest three years in catching up (nearly unimaginable), then compete with people twenty years younger than you and better trained (impossible)—or instead ride the old horse for all she is worth, whipping social anthropology until she bleeds? Even in physics, it was famously said that the field advanced one funeral at a time—only death could get people to change their minds. But notice the intermediate path not taken. They could have said, “I will not retool myself; it is too late. But I will make sure my students learn something useful about the new work in biology (they can even teach me) while I continue to do my work.” Complete rejection is redolent of self-deception. Outright denial is the easiest immediate path but entrains mounting costs, now onto the third generation, making it ever harder to resist each new wave of denial.
Certainly the social anthropologists rose to the challenge, even renaming their field “cultural anthropology” to rule out the relevance of biology more explicitly and in advance. Now we were no longer social organisms but cultural ones. The justification, in turn, was moral. Out of biological thinking flowed biological determinism (the notion that genetics influences daily life), whose downstream effects included fascism, racism, sexism, heterosexism, and other odious “isms.” To mention natural selection was to imply the existence and perhaps even utility of genes, which was prohibited on the moral grounds just given. Thus an entire new area of social theory would be ruled out based on the alleged pernicious influences of its assumptions, which were, in fact, widely accepted as true (genes exist, they affect social traits, natural selection alters their relative frequencies, and this produces meaningful patterns). Once you remove biology from human social life, what do you have? Words. Not even language, which of course is deeply biological, but words alone, which then wield magical powers capable of biasing your every thought, with science itself reduced to one of many arbitrary systems of thought.
And what has been the upshot of this? Thirty-five wasted years and counting. Years wasted in not synthesizing social and physical anthropology. Strong people welcome new ideas and make them their own. Weak people run from new ideas, or so it seems, and then are driven into bizarre mind states, such as believing that words have the power to dominate reality, that social constructs such as gender are much stronger than the 300 million years of genetic evolution that went into producing the two sexes—whose facts in any case they remain resolutely ignorant of, the better to develop a thoroughly word-based approach to the subject.
In many ways, cultural anthropology is now all about self-deception—other people’s. Science itself is a social construct, one among many equally valid ways of viewing the world: the properties of viruses may also be social constructs, the penis may, in some meaningful sense, be the square root of –1, and so on. As a result, most US anthropology departments consist of two completely separate sections, in which, as one biological colleague put it, “they think we’re Nazis and we think they are idiots”—hardly a platform for synthesis and mutual growth.
PSYCHOLOGY
In the 1960s, psychologists often explicitly disavowed the importance of biology. At Harvard, to get a PhD in psychology, you were required to take one semester of physics. This was to give you an idea of what an exact science looked like. No biology was required. Like economists, psychologists were going to create their field out of itself: learning theory, social psychology, psychoanalysis—essentially competing guesses about what was important in human development, none with any foundation. Psychoanalysis was a long-running fraud, as we shall see below, and learning theory made far-reaching and implausible claims about the ability of reinforcement to mold all behavior adaptively. It was soon shown on logical grounds alone that reinforcement could not produce language, or even just associations of actions and their effects when the latter were delayed more than a few moments.
On the positive side, psychology has always concentrated on the individual and was thus congenial to an approach based on individual advantage. Recently a school of evolutionary psychology has developed, while psychology has been increasingly integrated with other areas of biology: sensory physiology long ago, and more recently neurophysiology and immunology. So psychology is rapidly becoming the branch of evolutionary biology it always wished to be.
Social psychology somewhat lags the rest of psychology, another example perhaps of the retarding effects of deceit and self-deception on disciplines with more social content. It has generated artificial methodologies meant to shortcut work and achieve quick results, the curse of psychology for more than a century: wishing to say more than available knowledge permits. A key such method was that of self-reports, or questionnaire-answering behavior—what people say about themselves. In retrospect, it seems unwise to have tried to build a science of human behavior on people’s verbal responses to questions. For one thing, forces of deceit and self-deception—or call them issues of self-presentation and self-perception, if you prefer—loom large. We often do not tell the truth about ourselves to others and we often do not know the truth in the first place. In using these measures, exactly how were researchers screening out deception, never mind self-deception, to arrive at the truth? And how is this possible in the absence of an explicit theory of deceit and self-deception? Building a science on this foundation led to numerous significant correlations between ill-defined, poorly measured variables, but little or no cumulative growth over time. Instruments (that is, questionnaires) were said to be reliable, valid, and internally consistent, that is, people answer the same way a month apart, the measures correlate with some other measures, and all questions point in the same direction (or are reverse scored). Not a very impressive nod toward methodology, but fortunately this era is coming to a close, with new methodologies that access unconscious biases directly.
PSYCHOANALYSIS: SELF-DECEPTION IN THE STUDY OF SELF-DECEPTION
Freud claimed to have developed a detailed science of self-deception and human development: psychoanalysis. But one measure of a field is whether it grows and prospers or wilts and withers, and psychoanalysis has not prospered. As it turned out, the empirical foundation for developments in the field was something called clinical lore, essentially what psychiatrists told one another over drinks after a day’s work. That is, when you asked a psychiatrist (almost always a he) what his basis was for believing that a key part of the female psyche was “penis envy” or that the route to understanding males lay in something called castration anxiety, you were told that the basis was shared experiences, assumptions, and assertions among psychoanalysts about what went on during psychotherapy—something inaccessible to you, unverifiable, and, as a system, providing no hope for improvement. Indeed, the failure to state or develop methodologies capable of producing useful information is almost the definition of nonscience, and in this regard, psychoanalysis has been spectacularly successful. When is the last time you heard of a large, double-blind study of penis envy or castration anxiety?