Native Americans were alternately—and sometimes simultaneously—depicted as vermin, human refuse, and as deadly nonhuman predators. The English king James I, after whom Jamestown was named, called them “beastly … slaves to the Spaniards, refuse of the world.” “Once you have but got the track of those Ravenous howling Wolves,” advised Cotton Mather, “then pursue them vigorously; turn not back till they are consumed.”28
Whereas at the inception of their American adventure, the Puritans had considered the Indians to be degenerate human beings snared in the devil’s clutches, it didn’t take long for them to cast the Indians as devils incarnate. The “red devils” of this ethnic demonology were said to possess telltale predatory traits—they were “untamed,” “cruel,” and “bloodthirsty” (the “merciless Indian savages” of the Declaration of Independence). In rhetoric reminiscent of Cotton Mather, George Washington informed a correspondent that Indians and wolves are both “beasts of prey, tho’ they differ in shape.”29
By the dawn of the nineteenth century, the image of the Indian as predator had gained broad currency. As John Wakefield, a U.S. Army private who chronicled the 1832 Black Hawk War, succinctly put it, Indians are “most like the wild beasts than man.” In the same vein, Pennsylvania Supreme Court Judge Hugh Henry Brackenridge referred to “animals, vulgarly called Indians,” and Francis Parkman, one of the greatest American historians of the nineteenth century, unblushingly described in his 1847 book The Oregon Trail: Sketches of Prairie and Rocky Mountain Life how dehumanizing the Indians made their extermination seem morally acceptable.
For the most part a civilized white man can discover very few points of sympathy between his own nature and that of an Indian.… Nay, so alien to himself do they appear that … he begins to look upon them as a troublesome and dangerous species of wild beast, and if expedient, he could shoot them with as little compunction as they themselves would experience after performing the same office upon him.30
By midcentury, American settlers in Arizona had embarked on what was explicitly described as a war of extermination against the “Apache” (a name that whites used indiscriminately for all of the Indians of the region). Settlers did not merely regard these people as animals. They were characterized as superlatively dangerous predators—“the most savage wild beast”—and were avidly hunted. “Persons were constantly coming in,” observed Judge Joseph Pratt Allyn, “who wished to join the party, one and all believing and talking of nothing but killing Indians.…” In accord with their subhuman status, the corpses of Indians were treated like game. On one such expedition, a participant recorded that the brains of five dead Apache were used to treat a deer hide to make buckskin (“the best buckskin I ever seed was tanned with Injun brains,” he remarked). Both civilian Indian hunters and U.S. military personnel most often characterized the Apache as wolves, just as George Washington and others had done a century before.31
In many military dispatches, expeditions became “hunts” and the Apache, inevitably, “wolves.” The 1867 report of the U.S. Secretary of War, for example, referred to fighting Apaches as “more like hunting wild animals than any regular kind of warfare” and noted that the Apaches “like wolves … are ever wandering.” As the U.S. Army officer Britton Davis, posted to Arizona a decade later, put it, “[W]e hunted and killed them as we hunted and killed wolves.”32
DEFINING THE HUMAN
The history of the relationship between European settlers and Native Americans illustrates how dangerous dehumanization can be. The acts of violence perpetrated by settlers, as well as their tendency to turn a blind eye to the suffering that they caused, were intertwined with their conception of Indians as less than human.
Today, every educated person knows that we are all Homo sapiens—members of a single species—and that the biological differences between one human group and another are trivial at most. But the dehumanizing impulse operates at the gut level, and easily overrides merely intellectual convictions. To understand why it has this power, we’ve got to answer a fundamental question. What exactly goes on when we dehumanize others?
The obvious place to look for an answer is in the writings of psychologists. If anybody understands the psychology of dehumanization, it should be them. However, as Australian psychologist Nick Haslam has recently pointed out, researchers have for the most part neglected the most fundamental question. “Any understanding of dehumanization,” he observes, “must proceed from a clear sense of what is being denied to the other, namely humanness. However … writers on dehumanization have rarely offered one.”33
Can’t we turn to science for an explanation of humanness? Surely, one might think, biologists can tell us exactly what it means to be human. Actually, they can’t, for a reason that is often overlooked. Human doesn’t have any fixed meaning in biology. Some scientists equate human with both modern human beings and Neanderthals, while others speak about the split between Neanderthals and humans (in which case humans are equated with Homo sapiens sapiens). Others describe all members of the genus Homo as human, while still others reserve the term human for all of the species in our lineage after our common ancestor with the chimpanzee.34 In short, biologists’ use of human is all over the map. The reason for this is a simple one. Human belongs to a completely different taxonomy—a pre-Darwinian folk-taxonomy that owes more to the great chain of being than it does to modern biological systematics. The two frameworks are incommensurable.
To appreciate the depth of the problem, let’s do a little thought experiment. Suppose that on a planet that is very much like Earth (call it “Schmearth”) orbiting a distant star, creatures evolved that were anatomically, physiologically, and behaviorally indistinguishable from Homo sapiens. In fact, we can imagine that each earthling alive today has a schmearthling counterpart. However, earthlings have a completely different evolutionary history from schmearthlings. Life emerged independently on the two planets, but once it emerged, evolution took the same course, stumbling on the same anatomical structures and physiological mechanisms (what philosopher Daniel C. Dennett calls “good tricks”).35 As a result, if you were teleported to Schmearth, à la Star Trek, it would seem just like Earth.
In this scenario, no earthling would have an ancestor in common with any schmearthling. Earthlings and schmearthlings would be completely unrelated to one another, even though there would be no way to tell them apart.
What, exactly, is the relationship between earthlings and schmearthlings? If earthlings are human and schmearthlings are indistinguishable from earthlings, then it seems like we ought to say that schmearthlings are human. It would seem churlish to deny humanity to someone exactly like you just because of where they happened to be born. Are schmearthlings Homo sapiens? No, they’re not. Any biologist will tell you that every member of the same species must be part of the same biological lineage—descended from the same ancestor. So, it seems reasonable to say that if schmearthlings existed, they would be humans but not Homo sapiens. And if this is right, then it’s not true to say that all humans are Homo sapiens.
Harvard psychologist Herbert C. Kelman was one of the first psychologists, if not the first, to use the term dehumanization in a scientific context. He pointed out in a landmark 1973 article that to understand dehumanization, we need to know what it is “to perceive another person as human.”36 Kelman wisely refrained from speculating about what it is to be human. Instead, he wanted to illuminate the folk-concept of the human. “I would propose,” he wrote, “that to perceive another as human we must accord him identity and community.…”
To accord a person identity is to perceive him as an individual, independent and distinguishable from others, capable of making choices, and entitled to live his own life according to his own goals. To accord a person community is to perceive him—along with oneself—as part of an interconnected network of individuals, who care for each other, who recognize each other’s individuality, and who respect each other’s rights. These two features together constitute the basis for individual worth.37
Kelman went on to explain some of the implications of his analysis, writing that:
To perceive others as fully human means to be saddened by the death of every single person, regardless of the population, group, or part of the world from which he comes, and regardless of our own personal acquaintance with him. If we accord him identity, then we must individualize his death [and] … if we accord him community, then we must experience his death as a personal loss.38
As UCLA sociologist Leo Kuper remarks, Kelman offers an extremely idealized notion of what it is to regard another person as “fully human.”39 Many people die every day, often under horrendous circumstances, but very few of us lose any sleep over this. We certainly don’t feel these deaths as a personal loss, yet we don’t consider the victims as subhuman either. However, when shorn of its excesses, Kelman’s definition seems to point in the right direction.
Consider his notion of community. It’s true that dehumanized others are socially marginalized—the dehumanized person is not one of us (whoever “us” happens to be). Characteristics that set them apart from the majority are emphasized, a task that is most easily accomplished if obvious differences like skin color come into play. As I mentioned in Chapter One, predominantly white Allied troops treated Japanese people as subhuman much more extensively than their German counterparts. Likewise, Native Americans, African Americans, and Chinese immigrants were all easy targets because their physical characteristics made them stand out from the white majority.
Sometimes the differences aren’t quite so obvious. In these cases, social practices are implemented to isolate the dehumanized group. The historical relationship between Muslims and Jews in North Africa and the Middle East provides an interesting case. Thriving Jewish communities existed in Muslim lands for many centuries. Both Jews and Muslims were of more or less identical stock—they were Semites. So, social rituals and symbols were needed to differentiate Jews from their Muslim overlords.
Jews were accorded the status of dhimmi—people permitted to dwell in the House of Islam in a condition of subservience. The Koran stipulates that dhimmi are to be granted basic liberties, including freedom of worship, but they must pay a special tax (jizya) and live in “willing submission” to the Muslim majority.40 Although initially undefined, this “willing submission” eventually came to include a range of social restrictions and humiliations, some of which were itemized by the eleventh-century Persian theologian Abu Hamid Muhammad ibn Muhammad al-Ghazali as follows:
The dhimmi must … pay the jizya … on offering up the jizya, the dhimmi must hang his head while the official takes hold of his beard and hits [the dhimmi] on the protuberant bone beneath his ear.… [T]heir houses may not be higher than the Muslim’s, no matter how low that is. The dhimmi may not ride an elegant horse or mule; he may ride a donkey only if the saddle is of wood. He may not walk on the good part of the road. They [the dhimmis] have to wear an [identifying] patch [on their clothing].41
Within a few centuries, the practice of requiring Jews to wear a special uniform was adopted all over Christian Europe (it was declared mandatory by Pope Innocent III in 1215). Jews were required to wear a conical hat and a yellow Star of David—yellow to symbolize Judas Iscariot’s betrayal of Jesus for gold (oddly, in light of the fact that Judas was supposed to have betrayed Jesus for thirty pieces of silver).42 After their invasion of Poland in 1939, the Nazis revived this practice, compelling Jews to sew a yellow Star of David inscribed with the word “Jude” (“Jew”) on their clothing. Hitler’s Germany provides an exceptionally clear illustration of the ritualistic paraphernalia of social exclusion. As Duke University historian Claudia Koonz points out in her book The Nazi Conscience:
Nazism offered all ethnic Germans … a comprehensive system of meaning that was transmitted through powerful symbols and renewed in communal celebrations. It told them how to differentiate between friend and enemy, true believer and heretic, Jew and non-Jew. In offering the faithful a sanctified life in the Volk, it resembled a religion. Its condemnation of egoism and celebration of self-denial had much in common with ethical postulates elsewhere. But in contrast to the optimistic language of international covenants guaranteeing universal rights to all people, Nazi public culture was constructed on the mantra “Not every being with a human face is human.”43
Kelman’s point that dehumanized people are shorn of their individuality also rings true. They are typically thought of as fungible, as parts of an undifferentiated mass. Propagandists often exploit this frightening image. Perhaps the most infamous example is a scene from the notorious German film The Eternal Jew portraying Jews as a seething swarm of rats. Similar conceits were used in propaganda cranked out by other nations, including the United States. A 1945 film called Know Your Enemy: Japan, directed by Frank Capra (who directed several popular motion pictures during the 1930s and ’40s, including Mr. Deeds Goes to Town, Mr. Smith Goes to Washington, and It’s a Wonderful Life), presented Japanese people as indistinguishable from one another, “photographic reprints off the same negative.” George Orwell expressed the position even more explicitly. When in Marrakech, he wrote, “it is always difficult to believe that you are walking among human beings … Are they really the same flesh as yourself? Do they even have names? Or are they merely a kind of undifferentiated brown stuff, about as individual as bees or coral insects?” The disconcertingly fecal image of Moroccans as “undifferentiated brown stuff” has a counterpart in imagery used more recently in discussions of illegal immigration from Latin America to the United States, a country alleged to be “awash under a brown tide” of Mexican immigrants (as almost a century earlier, the American anti-immigrationist Lothrop Stoddard had warned that white America was soon to be swamped by a “rising tide of color”). The significance of the expression “brown tide” may not be obvious to all readers. The term refers to an algal bloom in the Gulf of Mexico that turns seawater brown.44
More recently still, Norberto Ceresole, an adviser to Venezuelan president Hugo Chavez, expressed the principle with chilling frankness. Ceresole recounts his epiphany that Jews masterminded the 1994 bombing of the Argentine Jewish Mutual Aid Association in Buenos Aires, which killed eighty-five people and injured three hundred. The Jews were, he said, “not as I had known them until then, that is as individuals distinct from one another, but rather as elements for whom individuation is impossible…” At the time of writing there have been no convictions for this crime, although Hezbollah operatives are suspected.45
Research findings support anecdotal examples like the ones I have just described. Social psychologists confirm that we are likely to perceive people outside our own community as more alike than those within it. We perceive members of our own group as individuals, but see other groups as more or less homogeneous (psychologists call this the “outgroup homogeneity bias”). When the outgroup homogeneity bias merges with the outgroup bias described in Chapter Two, the result is a dangerous mix. Outsiders are both denigrated and stereotyped: we are a richly diverse community of praiseworthy individuals, but they are all dishonest, violent, filthy, stupid, or fanatical.46
Kelman’s analysis of the concept of the human has a lot in its favor, but it also has some major shortcomings. Before considering them, it will be useful to reflect on what’s required for a satisfactory analysis of a concept. A satisfactory analysis must include everything that falls under the concept while excluding everything that does not. Philosophers spell this out in terms of what they call necessary and sufficient conditions. Necessary conditions set out what has to be true of a thing for it to come under the concept. Suppose that you wanted to analyze the concept “porcupine.” You might begin by making a statement of the form “All porcupines are so-and-sos,” for instance: “All porcupines are animals.” This wouldn’t be incorrect, but it wouldn’t be very informative, either, because it includes too much. Sure, porcupines are animals—but so are blue whales, beagles, and butterflies. A satisfactory analysis has got to rule out these other creatures. This is where sufficient conditions come in. Sufficient conditions set out what characteristics of a thing are enough for it to come under the concept. To specify sufficient conditions for being a porcupine, you need a statement of the form “All so-and-sos are porcupines,” for instance: “All spiny mammals native to North America are porcupines.” This would certainly be true—porcupines are the only spiny mammals native to North America—but it would be overly restrictive, because it excludes those species of porcupine that are native to other parts of the world.
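The logical shape of the two conditions can be made explicit in first-order notation. This is only an illustrative sketch; the predicate names (Porcupine, Animal, SpinyNAMammal for “spiny mammal native to North America”) are placeholders of my own devising:

```latex
% Necessary condition: being an animal is necessary for being a porcupine,
% but the conditional runs only one way -- it admits far too much.
\forall x \,\bigl(\mathit{Porcupine}(x) \rightarrow \mathit{Animal}(x)\bigr)

% Sufficient condition: being a spiny North American mammal is enough,
% but the conditional again runs one way -- it excludes porcupines elsewhere.
\forall x \,\bigl(\mathit{SpinyNAMammal}(x) \rightarrow \mathit{Porcupine}(x)\bigr)

% A fully satisfactory analysis would require a biconditional,
\forall x \,\bigl(\mathit{Porcupine}(x) \leftrightarrow \Phi(x)\bigr)
% for some condition \Phi that is at once necessary and sufficient.
```

The asymmetry of the arrow is the whole point: a necessary condition fails by letting too much in, a sufficient condition by keeping too much out, and only the two-way biconditional pinpoints the concept exactly.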
A really good analysis specifies conditions that are both necessary and sufficient, and thereby pinpoints exactly what comes under the concept in question. Necessary and sufficient conditions are the Holy Grail of definitions: often sought but seldom found. Mathematical and logical concepts are about the only ones that are precise enough to permit this sort of analysis. Most of our ordinary, workaday notions (as well as most scientific ones) are far too fuzzy. To appreciate the difficulties involved, try to work out necessary and sufficient conditions for the concept of the beautiful. What exactly is it for something to be beautiful? Sunsets, faces, music, and even equations can be described as “beautiful”—but what do all such examples (and only such examples) have in common? We might try something like “pleasing to the senses,” but this doesn’t work. The condition is not even a necessary one, and is miles away from being sufficient. Ideas can be beautiful, but they can’t be perceived with our sense organs. And can’t something be beautiful without anyone ever seeing it? Orchids were every bit as beautiful fifty million years ago as they are today, even though there was nobody around then to marvel at their splendor. Chocolate is pleasing to smell and taste, but I would hesitate to call it beautiful.