Everyone Is African

by Daniel J. Fairbanks


  At first, all identified cases of sickled cells were among African Americans, and the disease became widely characterized as a “Negro disease.” Although a few cases were identified in people of European, Middle Eastern, and south Asian ancestry, they were often attributed to supposed undocumented African ancestry under the flawed assumption that the genetic factor causing sickled cells could only be African in origin. By the 1940s, scientists were in the midst of a concerted effort to determine the extent of cell sickling in Africa. They found a high prevalence of people with sickled cells but apparently few cases of sickle-cell anemia. Most physicians working in Africa ascribed this observation to three factors: 1) inadequate diagnosis, 2) high mortality rates in Africa for children with sickle-cell anemia, and 3) difficulties distinguishing the symptoms of sickle-cell anemia from those of malaria. However, others attributed the supposed higher prevalence of sickle-cell anemia in African Americans to so-called racial mixing in their ancestry.

  It was well known at the time that most African Americans had some European ancestry, as is now well documented by modern genetic evidence.24 This was, in large part, a result of sexual abuse by slave owners and masters, abundantly recorded in the dictated recollections of former slaves.25 African Americans, therefore, were viewed during the middle of the twentieth century as a mixed or hybrid race, as opposed to the native people who resided in Africa. In 1950, A. B. Raper published an extensive review of the scientific literature available at the time. Referring to this supposed higher incidence of sickle-cell anemia in African Americans when compared to native Africans, he wrote that “some factor imported by marriage with white persons, is especially liable to bring out the haemolytic [anemic] aspect of the disease, while the anomaly remains a harmless one in the communities in which it originated.”26 This view, widely held at the time, supposedly justified the assertion that African Americans were members of a genetically inferior mixed race, inferior both to “pure” Europeans and “pure” Africans. It further lent erroneous support to the notion of white supremacy and to laws mandating antimiscegenation and racial segregation.

  Scientific research later dispelled these fallacies. Sickle-cell anemia was indeed present in Africa, with symptoms as severe as elsewhere. It also appeared in people with no African ancestry, particularly in the Arabian Peninsula, south Asia, and the Mediterranean region, areas where malaria was prevalent. And the notion of so-called pure races had no support in genetic data. Nonetheless, dogmatically held opinions regarding white supremacy, the supposed inherent inferiority of African Americans, and the presumption that racial purity was essential remained strong well into the 1960s and ’70s, and even to the present, often fallaciously supported through inaccurate suppositions about sickle-cell disease.

  As the civil rights movement gained momentum in the late 1960s, political agendas to improve racial equality in the United States emphasized sickle-cell anemia as a priority for research and treatment. Prior to that time, genetic conditions that were more common in European Americans had received the lion's share of governmental and philanthropic research funding. Robert B. Scott, a physician and professor at the Howard University College of Medicine in Washington, DC, was one of the strongest and most vocal advocates for increasing research funding on the biological basis of and treatment for sickle-cell anemia. In an influential 1970 article, he lamented the broad neglect of sickle-cell anemia and the lack of funding to support research and treatment:

  In 1967 there were an estimated 1,155 new cases of SCA [sickle-cell anemia], 1,206 of cystic fibrosis, 813 of muscular dystrophy, and 350 of phenylketonuria. Yet volunteer organizations raised $1.9 million for cystic fibrosis, $7.9 million for muscular dystrophy, but less than $100,000 for SCA. National Institutes of Health grants for many less common hereditary illnesses exceed those for SCA.27

  Underfunding for sickle-cell anemia was more a political issue than a scientific or medical one. Sickle-cell anemia was medically more serious than many less prevalent diseases yet was not a high priority for government or philanthropic support. According to Melbourne Tapper, writing in retrospect in 1999, “African Americans sought to increase funding for sickling research by turning to telethons, modeled on those for cystic fibrosis, muscular dystrophy, and cerebral palsy. These telethons were unsuccessful not because of the clinical nature of sickling, but because they were unable to neutralize the historical difference of the population in which sickling was primarily found—African Americans.”28

  A biochemical test that could accurately diagnose both sickle-cell anemia and sickle-cell trait had been available since 1949.29 Several states implemented testing programs—voluntary in some cases, mandatory in others—targeting the African American population, often with well-documented instances of racism in the administration of these tests. Against this backdrop, President Richard M. Nixon proposed increased funding for sickle-cell research, and Congress responded by passing the National Sickle Cell Anemia Control Act (the word Control was later changed to Prevention), which Nixon signed into law in 1972. Although testing was encouraged, it was voluntary, which helped address earlier charges of racist coercion associated with mandatory testing.

  The purpose of testing was to inform potential parents who both were heterozygous carriers (in other words, who both had sickle-cell trait) of the one-in-four chance, with each pregnancy, of having a child with sickle-cell anemia. As envisioned by Scott, “Whether a young couple will decide to have no children, or plan a limited family size, or disregard the risk would be entirely their own decision.”30

  Despite its lofty goals, the impact of this act quickly faded. Congress failed to appropriate sufficient funding for it, and the act expired three years after it was signed into law. Several clinics established under it had to be closed for lack of funds. Federal funding for sickle-cell anemia was later incorporated into funding for genetic diseases in general, so it once again had to compete with diseases that were most prevalent among European Americans, such as cystic fibrosis and muscular dystrophy.31

  Research in 1986 showed that early intervention with treatments for infants with sickle-cell anemia could significantly improve their lifelong outlook for health. This finding offered an impetus for mandatory newborn screening: infants diagnosed at birth could be provided treatment immediately, improving their long-term health. Over the next twenty years, states began implementing sickle-cell testing for newborns, with universal testing in all fifty states and the District of Columbia by 2006. With the support of sickle-cell organizations, healthcare professionals, and the National Association for the Advancement of Colored People (NAACP), Congress passed the Sickle Cell Treatment Act of 2003, which President George W. Bush signed into law. The act provided federal funding for research, counseling, education, matching funds for Medicaid to assist with treatment, and establishment of sickle-cell centers throughout the United States.

  Sickle-cell disease is, without doubt, the most prominent example of how health, inheritance, and ancestry have become entangled with racial tensions. Those tensions have persisted for more than a century and are still with us, as the latest controversy regarding testing athletes for sickle-cell trait attests. There are other examples as well. Although perhaps not as well known, they, too, illustrate how ignorance of scientific information can result in discrimination, whether intended or not.

  An example is lactose intolerance—the inability to fully digest dairy products, especially fresh milk—which is common throughout the world, affecting more than 65 percent of the world's population. In fact, it is the original ancestral state of humanity. Mammals, including humans, consume milk during infancy, then are weaned from milk as they begin consuming other foods. The principal sugar in milk is lactose, and the body must break it down into other sugars to digest it. A single gene in our DNA, called LCT, encodes a protein called lactase, which carries out the first step of lactose metabolism. This gene is active during infancy but, in many people, is genetically programmed to shut down after weaning because anciently, before humans domesticated milk-producing animals, the gene was no longer needed in children who were weaned. Some people carry a derived variant that disrupts this shutdown, retaining LCT gene activity into adulthood and allowing them to continue consuming milk, a condition known as lactase persistence. Several derived variants that confer lactase persistence have arisen independently in humans, and they are mostly found in people whose ancestry traces to populations that relied on domesticated animals for milk, such as cattle, sheep, and goats.

  There is good evidence that these variants were strongly favored through natural selection in people from regions where domesticated milk-producing animals were raised. Milk and other dairy products are highly nutritious, an excellent source of calories, vitamins, and minerals, especially when food is in short supply, as it often was during ancient times. In regions where humans used domestic animals for milk, people who could consume milk and milk products as a source of food had an advantage for survival and reproduction over those who could not. Natural selection favored these variants, rapidly increasing their prevalence in milk-consuming societies, and this evolutionary pattern has repeated itself independently in several parts of the world.

  For instance, in East Africa, in what is now Kenya and Tanzania, nomadic herders began using domestic animals for milk more than seven thousand years ago. A large proportion of their modern descendants, most of them still in Africa, carry a specific variant that allows the LCT gene to remain active into adulthood.32 In the Arabian Peninsula, where milk use was and is common, a different variant conferred lactase persistence on people who lived there anciently and on their modern descendants. Yet another variant that confers lactase persistence is common in people whose ancestry is northern European, where milk from cows and goats has long been used as a source of food. This variant is very common in North America, present in approximately 77 percent of North Americans whose ancestry is predominantly European, and accounts for the high consumption of dairy products in Europe, the United States, Canada, Australia, and other parts of the world where large proportions of people have European ancestry.

  Ancient Native Americans, however, never domesticated animals for milk production. Not surprisingly, the variants that confer lactose persistence in other parts of the world are rare in Native Americans, who typically begin losing the ability to digest milk by age three. Ignorant of the high proportion of lactose-intolerant people among Native Americans, and the underlying science, officials promoting US government food assistance programs distributed surplus milk products to people living on Indian reservations, where the products made most people sick. Shirley Hill Witt, a Native American anthropologist and administrator for the US Commission on Civil Rights, described the situation on a Navajo reservation this way:

  What is good for the Anglo body may not in fact be good for everyone else. This may be another mindless prejudice yet to be purged: nutritional ethnocentrism. To put it another way, the consequences of ethnocentrism may be more tenacious and deep-seated than we have thought. In the animal pens near Navajo hogans you can usually find the remains of milk products from the commodities program: butter, cheese, dried milk.

  But as more and more investigations are reported, the fact is becoming incontrovertible that for many or most of the world's people, milk is not our most valuable food, or “nature's way,” or so say the slogans of the milk industry. These studies indicate that most of us cannot drink milk after early childhood without suffering gastric upset, cramps, bloating, diarrhea and nausea.33

  Promotion of milk and other dairy products for consumption by children in public school cafeterias likewise ignores the pervasive nature of lactose intolerance among many children who do not descend from ancient milk-reliant cultures. This is especially relevant in schools located on or near reservations. According to Witt,

  In schools across the nation, children are browbeaten into ingesting vast quantities of milk whether or not they have the genetic equipment to do so. In 1972, a study I conducted in one of the New Mexican pueblos showed that only one person out of a hundred over the age of six was able to tolerate lactose without strong digestive reactions.34

  Reliable laboratory tests for lactose intolerance are available for administration under the supervision of a physician. Most are not genetic tests but, rather, tests that directly measure a person's ability to digest lactose. For those who have lactose intolerance, lactose-reduced and lactose-free dairy products are available, as are supplements that assist the body with lactose digestion, allowing more people with lactose intolerance who wish to consume dairy products to do so.

  Because ancestry is closely tied to a wide variety of health issues, physicians have historically used racial categories to recommend different tests and treatments. The intent is not racial bias but, rather, a means to more efficiently direct medical interventions according to information published in the medical literature about health issues and ancestry. For instance, targeting diagnosis and genetic testing of cystic fibrosis in people with predominantly European ancestry, or sickle-cell disease in people with significant African ancestry, made economic sense to many healthcare organizations.

  Simple blood tests that are inexpensive, rapid, and easy to administer can readily detect as many as twenty-nine genetic conditions in infants. Although some of the conditions they detect are more common in people whose ancestries trace to particular parts of the world, the American College of Medical Genetics has recommended universal screening of infants for all twenty-nine of these conditions, rather than targeted screening by ethnic group. The reasons are to avoid missed diagnoses and to treat these conditions in time to avert the most serious consequences associated with them.35 Targeting by ethnic group inevitably misses cases because the ethnic classifications of infants are not accurate assessments of ancestry. Moreover, the history of targeted screening has shown that discrimination and stigmatization are unavoidable consequences, whether intended or not. There is no valid medical reason to consider racial or ethnic classification for such testing.

  Until 2005, no medication had been approved by the US Food and Drug Administration (FDA) for treatment of any particular ethnic group. That year, a drug known as BiDil was approved in the United States for treatment of congestive heart failure specifically in African Americans, based on the results of clinical trials. An initial clinical trial for BiDil had included people of different ancestral backgrounds. The results initially showed little advantage for the drug until the researchers revisited the data, parsing the analysis according to self-identified racial classification. They then discovered a possible benefit for subjects who self-identified as African American. This prompted another clinical trial with only African American participants. All participants were already suffering from congestive heart failure at the beginning of the study and were on other medications to treat their condition. The researchers randomly assigned each of them to receive BiDil or a placebo in addition to the medications each was currently taking. The trial was to continue for eighteen months but was terminated early because those taking BiDil had a lower rate of death. The FDA approved BiDil for use in African American patients in 2005 on the basis of this trial.

  Some groups, such as the NAACP, praised this action for focusing medical research on an ethnic group that had long suffered medical discrimination. In fact, medical research targeting African Americans has an appalling historical record, exemplified by the unsubstantiated claims and outright errors made throughout the twentieth century with sickle-cell disease. Perhaps the most infamous case, however, was research conducted from 1932 through 1972 in Tuskegee, Alabama, in which African American men were misled into enrolling as participants in a study purportedly about blood disorders. The real purpose of the study, which was kept secret from the participants, was to research syphilis. The majority of the men enrolled in the study already had the disease when the study began, and others were intentionally infected without their knowledge. None were told that they had syphilis, and none were treated for it, even though penicillin was found to be an effective treatment during the early years of the study. Not only was syphilis allowed to progress unabated in these men, but many of their spouses became infected, as did infants born to these women. The study was finally terminated after forty years when a whistleblower took the story to the press after being rebuffed when he reported its abuses to the responsible government agencies. In the aftermath, Congress mandated substantial changes in legal and ethical requirements for government-sponsored research.

  Clinical trials for BiDil, with their exclusive focus on African Americans, seemed to offer a hint of reversal after decades of past injustices. According to company officials, “BiDil was ‘the antithesis of Tuskegee’” and “the approval of BiDil was about putting Tuskegee to rest.”36 However, geneticists, medical researchers, ethicists, legal experts, and a considerable number of physicians criticized the testing and release of BiDil as a marketing strategy carefully crafted to generate corporate profits. The drug was a combination of two drugs already available in generic forms, but the dose used in the study could not be easily formulated with available doses of generic alternatives. Had physicians been able to readily prescribe a generic alternative, it could have been equally effective and much less expensive. There was no research to indicate whether other doses, including those generically available, were less, more, or equally effective.

 
