American Holocaust


by David E. Stannard


  Indeed it is. The purpose of this brief tour across several recent battlegrounds is not simply to condemn what is so easily condemnable, however, but rather to illustrate how close to the surface of everyday life is the capacity for racist dehumanization and, consequently, for massive devastation. For among the many things that warfare does is temporarily define the entire enemy population as superfluous, as expendable—a redefinition that must take place before most non-psychopaths can massacre innocent people and remain shielded from self-condemnation. And nothing is more helpful to that political and psychological transformation than the availability of a deep well of national and cultural consciousness that consigns whole categories of people to the distant outback of humanity.

  But even the worst wars end. Military defeat leads to political surrender, for it is politics that most wars are about. Genocide is different. The purpose of genocide is to do away with an entire people, or to indiscriminately consume them, either by outright mass murder or by creating conditions that lead to their oblivion. Thus, the slave labor projects that worked people to death in the synthetic rubber factory at Auschwitz, or in the nearby coal mines, were no less genocidal than the gas chambers there and in other camps. Moreover, although Arno J. Mayer may well be correct in contending that “from 1942 to 1945, certainly at Auschwitz, but probably overall, more Jews [in the camps] were killed by so-called ‘natural’ causes than by ‘unnatural’ ones”—“natural” causes being “sickness, disease, undernourishment, [and] hyperexploitation,” as opposed to “unnatural” causes such as “shooting, hanging, phenol injection, or gassing”—there can be no denying, as Mayer himself insists, that those who died “naturally” were no less victims of genocide than others who were murdered outright.23 Indeed, it is insufficient to stop even here. For as Michael R. Marrus rightly states:

  It is clearly wrong to separate from the essence of the Holocaust those Jews who never survived long enough to reach the camps, or who were shot down by the Einsatzgruppen in the Soviet Union, or who starved in the ghettos of eastern Europe, or who were wasted by disease because of malnutrition and neglect, or who were killed in reprisal in the west, or who died in any of the countless other, terrible ways—no less a part of the Holocaust because their final agonies do not meet some artificial standard of uniqueness.24

  The same is true of the anti-Indian genocide in the Americas. Just as those Jews and others who died of exploitation and disease and malnutrition and neglect and in “countless other, terrible ways”—other, that is, than straightforward cold-blooded butchery—would not have died when and where they did, but for the genocidal campaign that was swirling furiously all about them, so too in the Indies and the Americas: the natives of Hispaniola and Mexico and Peru and Florida and Virginia and Massachusetts and Georgia and Colorado and California and elsewhere who died from forced labor, from introduced disease, from malnutrition, from death marches, from exposure, and from despair were as much victims of the Euro-American genocidal race war as were those burned or stabbed or hacked or shot to death, or devoured by hungry dogs.

  To some, the question now is: Can it happen again? To others, as we said in this book’s opening pages, the question is, now as always: Can it be stopped? For in the time it has taken to read these pages, throughout Central and South America Indian men and women and children have been murdered by agents of the government that controls them, simply because they were Indians; native girls and boys have been sold on open slave markets; whole families have died in forced labor, while others have starved to death in concentration camps.25 More will be enslaved and more will die in the same brutal ways that their ancestors did, tomorrow, and every day for the foreseeable future. The killers, meanwhile, will continue to receive aid and comfort and support from the United States government, the same government that oversees and encourages the ongoing dissolution of Native American families within its own political purview—itself a violation of the U.N. Genocide Convention—through its willful refusal to deal adequately with the life-destroying poverty, ill health, malnutrition, inadequate housing, and despair that are imposed upon most American Indians who survive today.26

  That is why, when the press reported in 1988 that the United States Senate finally had ratified the United Nations Genocide Convention—after forty years of inaction, while more than a hundred other nations had long since agreed to its terms—Leo Kuper, one of the world’s foremost experts on genocide, wondered in print whether “the long delay, and the obvious reluctance of the United States to ratify the Genocide Convention” derived from “fear that it might be held responsible, retrospectively, for the annihilation of Indians in the United States, or its role in the slave trade, or its contemporary support for tyrannical governments engaging in mass murder.” Still, Kuper said he was delighted that at last the Americans had agreed to the terms of the Convention.27

  Others were less pleased—including the governments of Denmark, Finland, Ireland, Italy, the Netherlands, Norway, Spain, Sweden, and the United Kingdom, who filed formal objections with the United Nations regarding the U.S. action. For what the United States had done, unlike the other nations of the world, was approve and file with the U.N. a self-servingly conditional instrument of ratification. Whatever the objections of the rest of the world’s nations, however, it now seems clear that the United States is unlikely ever to do what those other countries have done—ratify unconditionally the Genocide Convention.28

  For more than forty years another nation with a shameful past, Poland, refused to acknowledge officially what had transpired in the death camps—including Auschwitz, Sobibor, and Treblinka—that had been located on Polish soil. But in the spring of 1991 Poland’s President, Lech Walesa, traveled to Jerusalem and addressed the Israeli Parliament, saying in part: “Here in Israel, the land of your culture and revival, I ask for your forgiveness.”29 At almost precisely that same moment, in Washington, angry members of the U.S. Senate were threatening to cut off or drastically reduce financial support for the Smithsonian Institution because a film project with which it was marginally involved had dared use the word “genocide” to describe the destruction of America’s native peoples. In that instant contrast of ethical principles, in the chasm of moral difference that separated the Polish President and the American Senators, the seamy underside of America’s entire history was briefly but brightly illuminated.

  Illuminated as well at that moment was the persistence in American thinking of what has been termed the syndrome—the racist syndrome—of “worthy and unworthy victims.”30 For at the same time that almost all Americans would properly applaud President Walesa’s long-overdue acknowledgment of and apology for the horrors that were perpetrated against Jewish and other European “worthy” victims in Poland’s Nazi extermination centers during forty ghastly months in the 1940s, they by and large continue to turn their backs on the even more massive genocide that for four grisly centuries was perpetrated against what their apathy implicitly defines as the “unworthy” natives of the Americas.

  Moreover, the suffering has far from stopped. The poverty rate on American Indian reservations in the United States, for example, is almost four times the national average, and on some reservations, such as Pine Ridge in South Dakota and Tohono O’Odham in Arizona (where more than 60 percent of homes are without adequate plumbing, compared with barely 2 percent for the rest of the country) the poverty rate is nearly five times greater than for the nation at large. The destitution and ill health and general squalor that are the norm on many reservations today are no different from conditions that prevail throughout much of the indigent Third World. Indeed, so desperate and demoralizing are life conditions on most reservations that the suicide rate for young Indian males and females aged 15 to 24 years is around 200 percent above the overall national rate for the same age group, while the rate for alcohol-caused mortality—itself a form of suicide—is more than 900 percent higher than the national figure among 15 to 24 year-old Indian males and nearly 1300 percent higher than the comparable national figure among 15 to 24 year-old Indian females.31

  Meanwhile, the reservations themselves—the last chance for the survival of ancient and cherished cultural traditions and lifeways, however viciously deprived of resources they are by the overseeing state and federal governments—remain under relentless assault, at the same time that the United States with much fanfare about human rights is encouraging ethnic and national sovereignty movements in Eastern Europe and the former Soviet Union. Today, American Indian tribal lands total in size less than half of what they were in 1890, following the massacre at Wounded Knee and the supposed end of the euphemistically named Indian wars.32 And much of the tribal land that still exists, constituting a little more than 2 percent of what commonly is the most inhospitable acreage in the United States, is in perpetual jeopardy of political disentitlement. Most of the Western Shoshoni people’s land, for example, was long ago confiscated for underground nuclear testing, while individual states routinely drive Indians off their land by denying tribes access to traditional water supplies and other necessary resources. The states are free to carry out such policies of confiscation because the federal government, in the disingenuous guise of granting Indians “self-determination,” steadfastly continues to abdicate its legal responsibility for defending tribes against state encroachment. Thus, the Indians’ ongoing struggle for a modicum of independence and cultural freedom is turned against them in a classic governmental maneuver of blaming the victim, while the campaign to terminate tribal sovereignty once and for all continues.

  Greatly varied though the specific details of individual cases may be, throughout the Americas today indigenous peoples continue to be faced with one form or another of a five-centuries-old dilemma. At the dawn of the sixteenth century, Spanish conquistadors and priests presented the Indians they encountered with a choice: either give up your religion and culture and land and independence, swearing allegiance “as vassals” to the Catholic Church and the Spanish Crown, or suffer “all the mischief and damage” that the European invaders choose to inflict upon you. It was called the requerimiento. The deadly predicament that now confronts native peoples is simply a modern requerimiento: surrender all hope of continued cultural integrity and effectively cease to exist as autonomous peoples, or endure as independent peoples the torment and deprivation we select as your fate.

  In Guatemala, where Indians constitute about 60 percent of the population—as elsewhere in Central and South America—the modern requerimiento calls upon native peoples either to accept governmental expropriation of their lands and the consignment of their families to forced labor under criollo and ladino overlords, or be subjected to the violence of military death squads.33 In South Dakota, where Indians constitute about 6 percent of the population—as elsewhere in North America—the effort to destroy what remains of indigenous cultural life involves a greater degree of what Alexis de Tocqueville described as America’s “chaste affection for legal formalities.” Here, the modern requerimiento pressures Indians either to leave the reservation and enter an American society where they will be bereft and cultureless people in a land where poor people of color suffer systematic oppression and an ever-worsening condition of merciless inequality, or remain on the reservation and attempt to preserve their culture amidst the wreckage of governmentally imposed poverty, hunger, ill health, despondency, and the endless attempts of the federal and state governments at land and resource usurpation.34

  The Columbian Quincentennial celebrations have encouraged scholars worldwide to pore over the Admiral’s life and work, to investigate every rumor about his ancestry and to analyze every jotting in the margins of his books. Perhaps the most revealing insight into the man, as into the enduring Western civilization that he represented, however, is a bland and simple sentence that rarely is noticed in his letter to the Spanish sovereigns, written on the way home from his initial voyage to the Indies. After searching the coasts of all the islands he had encountered for signs of wealth and princes and great cities, Columbus says he decided to send “two men up-country” to see what they could see. “They traveled for three days,” he wrote, “and found an infinite number of small villages and people without number, but nothing of importance.”35

  People without number—but nothing of importance. It would become a motto for the ages.

  APPENDIXES

  APPENDIX I

  On Pre-Columbian Settlement and Population

  Until the 1930s, it generally was believed that the earliest human inhabitants of the Americas had moved from the Alaskan portion of Beringia down into the rest of what is now known as North America no more than 6000 years ago. Following the development of radiocarbon dating techniques in the 1940s and 1950s, this date was pushed back an additional 6000 years to the end of the Wisconsin Ice Age, around 12,000 years ago. During this time the most recent interstadial, or regional dissipation of the massive continent-wide glaciers that previously had blocked passage to the south, opened up an inland migratory corridor. Once settled in what is now the upper midwestern United States, it was supposed, these migrants branched out and very slowly made their way overland, down through North, Central, and South America to the Southern Andes and Tierra del Fuego at the southernmost tip of the southern continent.

  Some scholars had long suspected that even this projected date of first arrival was too recent, but it wasn’t until the late 1950s and early 1960s that such suspicions began to be taken seriously.1 For it was then, slowly but steadily, that human habitation dates of 12,000 B.C. and earlier from the most southerly parts of the hemisphere began turning up in the archaeological record. In addition, dates of 20,000 to 30,000 B.C. were being placed on sites to the north of these, while more problematic dates of 30,000 B.C. in Chile and Brazil and 40,000 to 50,000 B.C. for skeletal remains discovered in southern California were being claimed.2 By the late 1970s it was becoming clear to many archaeologists that regions throughout all of North and South America were inhabited thousands of years earlier than traditionally had been believed, with some scholars suggesting a date of 70,000 B.C. as the possible time of first human entry into the hemisphere.3

  Skeptics remained unconvinced, however. Then Monte Verde was discovered—a human habitation site in a remote Chilean forest with unambiguous evidence (including a preserved human footprint) of a complex human community at least 13,000 years old. The excavated site revealed a dozen wooden structures made of planks and small tree trunks, the bones of butchered mammals, clay-lined hearths, mortars and grinding stones, and a variety of plant remains, some of which had been carried or traded from a locale 15 miles distant, that the community’s inhabitants had cultivated and used for nutritional and medicinal purposes.4 Clearly, since no scientists seriously doubt that the first human passage into the Americas was by way of Beringia, this meant that humans must have entered areas to the north of Chile thousands of years earlier, a fact that was at the same time being confirmed by reported datings of 27,000 to 37,000 B.C. from animal remains butchered by humans in Old Crow Basin and Bluefish Caves in the Yukon, of 17,000 to 19,000 B.C. for human habitation in the Meadowcroft Rock Shelter in Pennsylvania, of 13,000 to 16,000 B.C. for a site in Missouri, of 11,000 B.C. for human activity at Warm Mineral Springs in southwestern Florida, and elsewhere.5

  Then, a few years later, at Monte Verde in Chile again, archaeologist Tom D. Dillehay discovered definite human artifacts that dated to at least 30,000 B.C.—an age that corresponds closely to dated charcoal remains from what are believed to have been human hearths at Pedra Furada in northeast Brazil.6 Since it is a truism of archaeological research that the earliest sites discovered today are always unlikely to be anything temporally close to the first sites that actually were inhabited—both because of the degradation of ancient materials and a site discovery process that makes finding a needle in a haystack a comparatively easy task—there increasingly is little doubt from the archaeological evidence that the northerly parts of the Americas had to have been inhabited by humans at least 40,000 years ago, and probably earlier.7

  A welcome recent trend in this research is the attention scholars from a variety of other disciplines, including linguistics and genetics, have been paying to data in their fields regarding the first human occupation of North America. As a result, the earliest dates suggested by the archaeological evidence are now receiving independent confirmation. At present the most intense controversies regarding the early settlement of the Americas in these fields surround work that is being done on DNA linkages and language analysis. Geneticists and biochemists who have studied mitochondrial DNA samples from widely separated native American peoples today have come to equally widely separated conclusions: one group of scientists finds a high level of shared heritage, suggesting that the great majority of American Indians are descended from a single population that migrated from Asia up to 30,000 years ago; another group, studying the same type of data from different sources, contends that their findings point to at least thirty different major population movements, by different peoples, extending back about 50,000 years.

  A similarly structured debate exists among the linguists. It commonly is agreed that the people living in the Americas prior to 1492 spoke at least 1500 to 2000 different languages, and probably hundreds more that have been lost without a trace. These languages derived from a cluster of more than 150 language families—each of them as different from the others as Indo-European is from Sino-Tibetan.8 (By comparison, there are only 40-odd language families ancestral to Europe and the Middle East.) Some linguists claim, however, to have located a trio of language families, or “proto-languages,” from which that great variety of languages developed: Amerind, Na-Dene, and Eskimo-Aleut. Others contend that these three proto-languages can be further reduced to a single language that was spoken by one ancestral group that entered North America about 50,000 years ago—while still others argue that the multitude of Indian languages cannot be traced to fewer than the 150 known language families, and that there is no way convincingly to link that knowledge to estimates of the earliest human entry into North America.9

 
