Mosquito Soldiers


by Andrew McIlwaine Bell


  Medicine in the 1860s was a hodgepodge of ancient shibboleths, folk remedies, and sound scientific practices. As inheritors of the Greek tradition, Civil War–era physicians viewed the body as a precariously balanced concoction of chemicals which any number of external and environmental stimuli could alter. The key to preserving a patient’s health lay in maintaining his or her internal equilibrium. Drugs and therapies that induced vomiting, sweating, or the evacuation of the bowels or bladder were thought to assist in bringing a diseased body back into balance. Because patients and their families insisted on some sort of treatment for serious medical conditions, most “regular” doctors were willing to prescribe chemicals and harsh therapies that we now know are harmful to the human body. Modern audiences can be forgiven for cringing at the crudest nineteenth-century therapeutics (dosing a patient with mercury, for example, seems cruel by today’s standards). But we should avoid condemning the physicians who employed them, especially since these treatments proved effective from time to time. Understanding why requires an appreciation of the role faith plays in promoting healing. For those who lived before microbiology became an established science, doing something for a patient was usually better than doing nothing at all, especially if a person’s life were at stake. If a sick person pulled through after ingesting a certain compound or chemical, the attending physician and his observers would naturally credit the patient’s recovery to the prescribed therapy, which ensured its continued use within the community. Medicine, like politics, was a mostly local affair in the 1800s.2

  Trial-and-error therapeutics of this sort arose in part because of the poor quality of medical education available at the time. Of the eighty or so medical schools that were open on the eve of the Civil War, most were run as for-profit ventures rather than cutting-edge research centers. Anyone willing to pay school administrative fees and purchase lecture “tickets” from professors could earn a medical degree. Students sat through the same lectures two years in a row (administrators believed that this approach would allow the information to sink in better) before apprenticing themselves to a practicing physician. Schools did not regulate these apprenticeships or even require that their students undergo supervised clinical training.3

  Government oversight was equally lax. The leveling spirit of the Jacksonian era convinced a number of Democratic state legislatures that medical licensing laws were elitist conspiracies designed to perpetuate aristocratic privilege. The repeal of these laws in the 1830s and 1840s allowed the profession to slide even further into chaos. By the time the Civil War began, nearly any person who wanted to call himself or herself a doctor could hang out a shingle and treat patients.4

  But within this chaotic climate there were also clear signs of scientific progress. Practitioners at Massachusetts General Hospital discovered the benefits of anesthesia in 1846, which allowed Western physicians to perform humane surgery for the first time. In Paris Americans studying under Pierre Louis were learning to place a greater emphasis on natural healing and to question the usefulness of therapeutics that had not been subjected to close clinical scrutiny. This greater reliance on empirical observation led to changed perceptions of disease. By the middle of the nineteenth century the most enlightened practitioners “began to think more in terms of discrete disease entities and disease-specific causation and less in terms of general destabilizing forces that unbalanced the body’s natural equilibrium.”5

  With no understanding of bacteriology or virology, however, physicians theorized that certain diseases were caused by invisible poisons that floated through the air. Malarial fevers were blamed on “miasmas” produced by decomposing organic material (especially in and around swamps), while yellow fever was attributed to a mysterious filth that attached itself to certain types of clothing and traveled aboard ships. The most potent weapon Civil War surgeons had in their fight against malaria was quinine. The advantage this drug gave to Union forces cannot be overstated (in fact, one could argue without too much hyperbole that a more appropriate subtitle for this book might have been “How Quinine Saved the North”). Yankee surgeons doled out more than nineteen tons of the medicine and used it as both a specific and prophylactic among troops stationed in the sickliest regions of the South. The Confederacy, on the other hand, experienced quinine shortages for most of the war, which meant malarial fevers among Rebels went unchecked more often than not. Southern civilians also suffered. Those who believed that life in their part of the country depended on the drug were less inclined to support a government that could not supply it. The paucity of quinine was one more inconvenience imposed upon a southern public that was already frustrated over impressment, enemy raids, conscription, and bread shortages.6

  Of course, nineteenth-century southerners were not the only people in history ever to have endured war and pestilence at the same time. In fact, epidemic diseases seem to flourish whenever human beings come together to slaughter one another in the name of an agenda or set of beliefs. Thirty years ago William McNeill described how smallpox had allowed a handful of Spanish conquistadors to subjugate an entire continent, and more recently, Elizabeth Fenn has shown that this disease also influenced the events of the American Revolution. For whatever reason Civil War historians have shied away from similar studies even while acknowledging that far more soldiers died in sickbeds than on battlefields.7

  This lack of scholarly attention has produced one of the few remaining gaps in the historiography of the war. More studies are needed which explain how specific diseases affected military operations and strategy during the conflict. Few other ailments sickened as many troops and civilians as malaria, and no other disease (except perhaps cholera) produced more public panic throughout the nineteenth century than yellow fever. To date most medical histories of the conflict have given short shrift to both illnesses. As a result, historians are left with an inchoate picture of daily life in the wartime South and an incomplete understanding of why various campaigns turned out the way they did.

  The importance of this study, then, is threefold: first, it reinterprets familiar events from an epidemiological perspective and adds to a medical historiography that has long been in need of a transfusion of scholarly interest. Even though disease-related fatalities outnumbered combat deaths two to one, far more intellectual energy has been expended on dissecting individual battles than on researching what one prominent Civil War historian fittingly referred to as “the grimmest reaper.” It is the author’s hope that this work will encourage other students of the period to investigate the various other diseases that beset both soldiers and civilians and uncover the causal links that exist between human health and history.8

  Second, malaria and yellow fever made a difference in the outcome of several Civil War campaigns. These diseases not only sickened thousands of Union and Confederate soldiers but also affected the timing and success of certain key military operations. Some commanders took seriously the threat posed by the southern disease environment and planned accordingly; others reacted only after large numbers of their men had already fallen ill. Simply stated, had mosquito-borne illness not been part of the South’s landscape in the 1860s, the story of the war would be different.

  Finally, by focusing on two specific diseases, which share a similar vector, rather than a broad array of Civil War medical topics, this study provides readers with a clearer understanding of how environmental factors serve as agents of change in history. Regrettably, our arrogance has deceived us into believing that our actions alone determine what happens in the course of human events. But in truth Homo sapiens inhabit a broader natural world built on a series of complex, interdependent relationships that, when altered, can produce catastrophic results. War represents not only a breakdown of human social and political relations but also the disintegration of the existing environmental order. The large armies required by both the Confederate and Union governments accelerated the development of diseases that thrive on human hosts, which in turn affected the ability of these forces to carry out the instructions they received from military commanders.

  Although malaria and yellow fever were only two of a multitude of maladies that afflicted Union and Confederate troops, few diseases were greater agents of change. These illnesses not only affected military planning and influenced medical practices on both sides but also helped change the lives of nearly every American who witnessed the war. Shortages of malarial medicine transformed the ideal southern woman of leisure into a black market smuggler and made plantation life increasingly arduous. Infections among African-American troops weakened pseudoscientific claims that for generations had served as convenient justifications for slavery. Southern urbanites learned the value of rigorous sanitation and quarantine practices during the Union occupation and then endured the horror of new yellow fever outbreaks once it ended. Federal soldiers were infected with malarial parasites that had largely disappeared from their home communities by the 1860s and reintroduced them into nonimmune northern areas after the war. Confederate quartermasters watched helplessly as yellow fever plagued important port cities, disrupting critical supply chains and creating public panics. All of these changes were wrought by a tiny insect whose omnipresence in the United States seemed perfectly natural and nonthreatening to previous generations.

  In 1861 the coasts and prairies, woodlands and wetlands, and bayous and bogs of the newly formed Confederacy were teeming with mosquitoes that would eventually stymie military campaigns, kill thousands of soldiers and sailors, and cause pain and suffering for countless southern civilians. The important role these insects played cannot be ignored by any scholar aspiring to understand the Civil War in all its wonderful and dizzying complexity.

  1

  AEDES, ANOPHELES, AND THE SCOURGES OF THE SOUTH

  THE SUMMER OF 1835 was a stressful time for Mrs. Zachary Taylor. In June she learned her daughter Sarah Knox had married a dashing young army lieutenant named Jefferson Davis and had dutifully followed her new husband from Kentucky to his virgin estate on the Mississippi River below Vicksburg. Although Colonel Taylor expressed fatherly doubts about the wisdom of letting “Knoxie” become a soldier’s wife, Mrs. Taylor was less concerned with her daughter’s choice of a spouse than with the dangerous diseases she knew lurked in the shadowy, stagnant bogs that surrounded Davis’s “Brierfield” plantation. Fifteen years earlier she and her four children, including Sarah, had fallen ill while stationed with her husband’s regiment in the swamps of Louisiana. The two youngest girls did not survive, and Mrs. Taylor herself had nearly died. The thought of losing another child to the mysterious and unhealthy climate of the Deep South was more than she could bear.

  Sarah tried to assuage her mother’s fears. “Do not make yourself uneasy about me,” she wrote in August. “The country is quite healthy.” The following month, however, both she and her husband were stricken by a severe illness while visiting relatives in West Feliciana, Louisiana. On the fifteenth of September, at the height of their agonizing ordeal, Davis staggered out of his sickbed to comfort his ailing wife, but Sarah, delirious with fever, failed to recognize him. Instead, in a fit of madness she sang a popular nineteenth-century song called “Fairy Bells” before closing her eyes forever. Davis never got a chance to say good-bye to his twenty-one-year-old bride and was forced to mourn her loss while still very ill. The future president of the Confederacy grieved in seclusion at Brierfield for the next eight years.1

  Like most Americans of her era, Mrs. Taylor would have likely attributed her daughter’s tragic death to the poisonous air that was said to pervade unhealthy areas of the country such as the Mississippi Delta. Conventional wisdom since colonial times held that strange and virulent vapors continually wafted through the atmosphere of the warmer regions of North America and created health problems for anyone unfortunate enough to breathe them in, especially those born in healthier climates. What exactly caused the air to turn lethal, however, was still in dispute among physicians for most of the nineteenth century. Endless theories circulated in medical journals or were discussed at conventions at a time when medicine was more art than science. Some practitioners, perhaps a majority, believed decomposing animals and plants produced the noxious “miasmas” that sickened their patients. Others thought electrical charges in the ozone were the culprit. Still others rejected the “bad air” theory altogether and instead blamed excess hydrocarbons in the blood.2

  In reality Sarah Taylor Davis had died of an insect bite. One night in late August or early September a female mosquito carrying a dangerous strain of malaria surreptitiously sliced through her skin, sucked up her red corpuscles through its straw-like proboscis, and unwittingly released into her bloodstream a dozen or so malarial sporozoites that it had picked up from a previous victim. Within minutes these sporozoites found their way into Sarah’s liver, where they transformed over the next two weeks into schizonts, each containing thousands of smaller organisms called merozoites. When the mature schizonts eventually burst, the merozoites poured out like tiny soldiers and invaded her red cells in order to reach the next stage of their development. As these parasites rapidly multiplied, dead and dying corpuscles clung to the walls of Sarah’s capillaries and healthy cells, creating a dam that blocked the flow of blood to her vital organs.3

  Malaria is caused by a parasite transmitted by Anopheles mosquitoes, a genus that prefers to breed in stagnant, sunlit pools of fresh water and can be found in most regions of the country. The adult female requires a blood meal to ovulate and can lay between one and three hundred eggs at a time. Symptoms of malaria include chills, shakes, nausea, headache, an enlarged spleen, and a fever that spikes every one to three days depending on the type of malaria and its parasitic cycle. In all likelihood Sarah Davis was killed by Plasmodium falciparum, one of four types of malaria that infect human beings. Of the other three (vivax, malariae, and ovale), only Plasmodium vivax was once common in the United States. Vivax alone rarely proved fatal to its victims, but Plasmodium falciparum was often deadly.4

  Nineteenth-century physicians categorized malaria according to how often these fever spikes, or “paroxysms,” occurred. A “quotidian” fever appeared once every twenty-four hours, a “tertian” every forty-eight, and a “quartan” every seventy-two. Plasmodium vivax was commonly referred to as “intermittent fever,” “ague,” “dumb ague,” or “chill-fever,” while Plasmodium falciparum was known as “congestive fever,” “malignant fever,” or “pernicious malaria” because of its lethal effect.5

  Patients diagnosed with “remittent fever” experienced febrile symptoms that, as the name suggests, periodically went into remission. But they did not disappear entirely (and temporarily) in the same way that so-called intermittent symptoms did. Like most nineteenth-century descriptions of disease, the term remittent was somewhat nebulous. Yet a handful of studies conducted in the 1830s and 1840s suggest that many of these fevers were the result of repeat plasmodial infections.6

  Sarah Davis was one of countless Americans who contracted malaria during the nineteenth century. As settlers cleared virgin forests from Savannah to St. Louis to make way for the cotton and wheat farms that drove the antebellum economy, they inadvertently created a plethora of new breeding sites for anopheles. At a time when mosquito-control measures such as spraying the insecticide chlorophenothane (DDT) were still unknown, clouds of insects swarmed wagon trains and slave coffles, sparking complaints about “fever” and “ague” in the South and West, where crude housing, poor drainage, and regular flooding aided in the spread of the disease. Daniel Brush’s experience on the frontier was typical. In 1820 he and his family moved from Vermont to southern Illinois in search of better economic opportunities and wound up with a handful of other Yankee families in a small settlement called “Bluffdale,” four miles east of the Illinois River. When the entire community came down with malaria during the first harvest, Brush recorded his fellow settlers’ suffering: “Many had the real ‘shakes’ and when the fit was fully on shook so violently that they could not hold a glass of water with which to check the consuming thirst that constantly beset them while the rigor lasted, nearly freezing the victim.” He went on to describe the fevers that followed these fits as putting “the blood seemingly at boiling heat and the flesh roasting.”7

  Other observers noticed the prevalence of the disease in the West. During a tour of the United States in the 1840s, the English author Charles Dickens encountered so many “hollow-cheeked and pale” malaria victims that he forever remembered the region where the Mississippi and Ohio rivers converge as “a breeding-place of fever, ague, and death.” St. Louis also seemed unhealthy to Dickens, despite its residents’ claims to the contrary. One Illinois physician’s frequent contact with malaria convinced him that he could accurately diagnose patients just by learning where they lived, while another practitioner thought the malaria victims he saw, even children, looked “prematurely old and wrinkled.” Country doctors from all over the Northwest published articles in medical journals on the best ways to identify and combat the mysterious disease that plagued their communities.8

  But while malaria made life difficult for Westerners, it made life nearly intolerable for southerners at certain times of the year. Plasmodium falciparum occurred almost exclusively below the thirty-fifth parallel and was especially problematic in the states of the Deep South such as South Carolina and Georgia. Short, mild southern winters substantially lengthened the breeding season for anopheline mosquitoes and made Plasmodium vivax infections as common as colds in some areas. White southerners dreaded the annual arrival of the “fever season” (which lasted for a variable length of time between late spring and early autumn depending on the location), and those who could afford it escaped to seaside cottages or fled northward in search of healthier climates. During a visit to lowland South Carolina in the 1850s, landscape architect Frederick Law Olmsted noticed that the overseer on one plantation moved inland to higher ground during the “sickly season” (outside the flight range of anopheles) to escape the “swamps” and “rice-fields” that made life at night “dangerous for any but negroes.” The widespread belief among whites that blacks were immune to malaria, which served as a convenient justification for slavery, had some basis in scientific truth. West Africans inhabited malarial environments for thousands of years before being brought to America and developed a degree of genetic resistance which they passed on to their offspring. But by the mid-nineteenth century Africans from all over the subcontinent were intermixing with one another as well as with Indians and Europeans, which meant that many blacks were also susceptible to malaria.9

 
