In spite of such confusion and inefficiency, James Lind and other medical men in the British navy pioneered a number of other significant improvements in health administration during the latter decades of the eighteenth century. Lind was instrumental, for instance, in installing sea-water distilleries on board ship to assure a supply of fresh drinking water. The adoption of the practice of quarantining new recruits until they had been bathed and equipped with a new set of clothes was another simple procedure that reduced typhus dramatically. Use of quinine against malaria, and rules against going ashore after dark on malarial coasts, were also introduced under Lind’s direction.
Parallel improvements in army health administration, with conscious attention to water supplies, personal cleanliness, sewage, and the like, met with larger obstacles, inasmuch as soldiers were never so well insulated from external sources of infection as sailors aboard ship could be. Yet there, too, eighteenth-century European armies, being the pets and playthings of Europe’s crowned heads, were both too valuable in the eyes of authority and too amenable to control from above not to benefit from a growing corpus of sanitary regulations. From protection of soldiers to medical regulation of the public at large was an easy step which had been made on the Continent, in principle if not fully in practice, by systematically minded servants of German monarchs. The most influential was Johann Peter Frank, whose six volumes on medical policy, published between 1779 and 1819, attracted wide and favorable attention among rulers and government administrators who recognized that the number and vigor of their subjects were a fundamental component of state power.
Interaction between Europe’s political history and the health of professionalized standing armies and navies deserves more consideration than historians have commonly devoted to the subject. Obviously, the rise of absolutism on the European continent hinged on the availability of well-trained armies to do the sovereign’s will; and the preservation of such armies, in turn, rested on the development of rules of sanitation and personal hygiene that reduced losses by epidemic disease to relatively minor proportions, winter and summer, in the field and in cantonments. “Spit and polish” and ritual attention to cleanliness was, of course, the way European armies achieved this goal, and the eighteenth century was clearly the time when such practices became normal, altering the experiential reality of soldiering in far-reaching ways. But no one seems to have investigated the intersection of high medical theory, as expressed by doctors like Johann Peter Frank, with the routines that inconspicuous drill sergeants and junior officers invented to occupy soldiers’ time, keep them healthy and train them to battle efficiency.
As in most matters of military administration, the French were pace-setters. Early in the eighteenth century, the French royal administration set up military hospitals and medical training schools. In the 1770s a medical corps of a modern type was established. The key innovation was that doctors served their entire careers in the new corps, and could aspire to ascend a ladder of rank just like regular officers, instead of coming, as before, into military service from civilian practice at the invitation of a regimental colonel when some emergency or impending campaign required it.
The benefit of the professionalization of the French military medical corps was demonstrated during the wars of the revolutionary and Napoleonic period. Young men conscripted from remote farms and from the slums of Paris mingled in the ranks of the new and vastly expanded armies of the French Republic. Yet despite the fact that the recruits brought widely different disease experience and resistances into the army, the medical corps was able to prevent massive epidemic outbreaks, and took swift advantage of new discoveries, like Jenner’s vaccination (announced in 1798), to improve the health of the soldiers in their charge. The expanded scale of land warfare, characteristic of the Napoleonic period, could not have occurred otherwise. Equally, the capacity of the British navy to blockade French ports for months and years on end depended quite as much on lemon juice as on powder and shot.61
In view of the achievements of military medicine, therefore, the problem as it presented itself to sanitary reformers of the 1830s and 1840s was less one of technique than of organization. In England, at any rate, a libertarian prejudice against regulations infringing the individual’s right to do what he chose with his own property was deeply rooted; and as long as theories of disease and its propagation remained under dispute, clear imperatives were hard to agree upon. In this situation the fear of cholera acted as a catalyst. To do nothing was no longer sufficient; old debates and stubborn clashes had to be quickly resolved by public bodies acting literally under fear of death.
The first outbreak of cholera in Britain (1832) promoted establishment of local boards of health. Being unpaid and locally elected, the personnel of these boards often lacked expertise as well as legal power to alter living conditions; indeed, not everyone agreed that filth and ill health went together. Far more significant was the reaction to the reappearance of cholera in 1848. In that year Parliament authorized the establishment of a Central Board of Health exactly one week before cholera appeared in England for a second time. The dreaded approach of Asiatic cholera had been a matter of public notice for more than a year, and there can be no doubt that it was the expectation of its return that precipitated Parliament’s action.
The Board of Health instituted far-reaching programs of public sanitation that had been championed by a noisy group of reformers for a decade or more. Being staffed with some of the most prominent advocates of sanitary reform, the board used its extensive legal powers to remove innumerable sources of defilement from British towns and cities, and began installation of water and sewer systems all over the country.
Sewers were nothing new, being at least as old as the Romans; but until the 1840s a sewer was simply an elongated cesspool with an overflow at one end. Such sewers collected filth and had to be dug out periodically. The flow of water through them, save in periods of cloudburst, was sluggish because water supplies were sharply limited. The new idea of the 1840s, championed principally by an earnest Benthamite reformer named Edwin Chadwick, was to construct narrow sewers out of smooth ceramic pipe and pass enough water through to flush the waste matter toward some distant depository, far removed from human habitation. There Chadwick expected that the sewage could be processed and sold to farmers for fertilizer.
To work, the plan required installation of completely new systems of water pipes and of sewer pipes; development of more powerful pumps to deliver water into houses under pressure; and compulsory elimination of older sewage systems. Intrusion upon private property to allow water mains and sewer pipes to maintain the straight lines needed for efficient patterns of flow was also necessary. To many Englishmen at the time this seemed an unwarranted intrusion on their rights and, of course, the capital expenditures involved were substantial. It therefore took the lively fear that cholera provoked to overcome entrenched opposition.62
Half of Chadwick’s initial vision failed, for he was unable to make financially successful arrangements for the sale of sewage to farmers for fertilizer. The reason was that guano from Chile and artificially synthesized fertilizer became available in forms more convenient for farmers to use than anything Chadwick could do with sewage. The practical solution was to discharge the new sewer pipes into accessible bodies of water—often with unpleasant results. Development of effective ways of processing sewage to make effluvia inoffensive took another half century; and installation of such plants on a large scale waited until the twentieth century, even in prosperous and carefully administered cities.63
Yet even though Chadwick was unable to realize the full scope of his plan, the Central Board of Health, under his direction, did demonstrate during the years of its existence, 1848–54, how the new cities called into existence by the industrial revolution could be made far healthier than cities of earlier times had ever been. Moreover, the new arterial-venous system of water supply and sewage disposal was not so appallingly expensive as to be prohibitive for urban communities in Europe and lands of European settlement overseas. In Asia, however, where use of human excreta for fertilizer was of long standing, the new system of sewage disposal never became general.
Spread to other countries occurred relatively rapidly, though not infrequently it took the same stimulus of an approaching epidemic of cholera to compel local vested interests to yield to advocates of sanitary reform. Thus, in the United States, it was not until 1866 that a comparable Board of Health was established in New York City, modeled on the British prototype and inspired by identical apprehensions of the imminence of a new cholera epidemic.64 In the absence of this sort of stimulus, such a great city as Hamburg persisted in postponing costly improvements of its water supply until 1892, when a visitation of cholera proved beyond all reasonable doubt that a contaminated water supply propagated the disease. What happened was this: as an old free city, Hamburg remained self-governing within the new German Reich and drew its water from the Elbe without special treatment. Adjacent lay the town of Altona, part of the Prussian state, where a solicitous government installed a water-filtration plant. In 1892, when cholera broke out in Hamburg, it ran down one side of the street dividing the two cities and spared the other completely. Since air and earth, the explanations preferred by the miasmatists, were identical across the boundary between the two cities, a more clear-cut demonstration of the importance of the water supply in defining where the disease struck could not have been devised.65 Doubters were silenced; and cholera has, in fact, never returned to European cities since, thanks to systematic purification of urban water supplies from bacteriological contamination.
Obviously, there was always a considerable lag between decision to introduce improved water and sewage systems and the completion of necessary engineering work. But by the end of the nineteenth century all major cities of the western world had done something to come up to the new level of sanitation and water management that had been pioneered in Great Britain, 1848–54. Urban life became far safer from disease than ever before as a result. Not merely cholera and typhoid but a host of less serious water-borne infections were reduced sharply. One of the major causes of infant mortality thereby trailed off toward statistical insignificance.
In Asia, Africa, and Latin America, cities seldom were capable of making sanitary water and sewage systems available to all the population; yet even there, as the risks of contaminated water became more widely known, simple precautions, like boiling drinking water and periodic testing of water supplies for bacteriological contamination, provided a quite effective guard against wholesale exposure to water-borne infections. Administrative systems were not always capable of sustaining an effective bacteriological watch, of course; and enforcement was even more difficult in many situations. But the means and knowledge needed to escape large-scale outbreaks of lethal disease became almost universal. Indeed, when local epidemics of cholera or some other killing disease occurred, it soon became common for richer countries to finance international mobilization of medical experts to help local authorities in bringing the outbreak under control. Hence even in cities where a water-sewage circulatory system had never been installed, some of the benefits of public sanitation were swiftly brought to bear.
By 1900, therefore, for the first time since cities had come into existence almost five thousand years previously, the world’s urban populations became capable of maintaining themselves and even increasing in numbers without depending on in-migration from the countryside.66 This was a fundamental change in age-old demographic relationships. Until the nineteenth century, cities had everywhere been population sumps, incapable of maintaining themselves without constant replenishment from a healthier countryside. It has been calculated, for example, that during the eighteenth century, when London’s Bills of Mortality permit reasonably accurate accountancy, deaths exceeded births by an average of 6,000 per annum. In the course of the century, London therefore required no less than 600,000 in-migrants for its mere maintenance. An even larger number of in-migrants was needed to permit the population increase that was a conspicuous feature of the city’s eighteenth-century history.67
Implications of this change are profound. As cities became capable of sustaining growing populations, older patterns of migration from rural to urban modes of life met new obstacles. Rural in-migrants had to compete with a more abundant, more thoroughly acculturated population of city-born individuals, capable of performing functions formerly relegated to newcomers from the countryside. Social mobility thereby became more difficult than in times when systematic urban die-off opened niches in the cities of the world for upwardly mobile individuals coming in from rural backgrounds. To be sure, in regions where industrial and commercial development proceeded rapidly, this new relation between country and city was masked by the fact that so many new occupations opened in urban contexts that there was room for city-born and rural in-migrants alike. In regions where industrialization has lagged, on the other hand, the problem of social mobility has already assumed visible form. In Latin America and Africa, for example, vast fringes of semi-rural slums commonly surround well-established cities. These are the squatting grounds for migrants from the countryside who are seeking to become urban, yet cannot find suitable employment and so must eke out a marginal existence amid the most squalid poverty. Such settlements give visible form to the collision between traditional patterns of migration from the countryside and an urban population that no longer, as aforetime, withers away so as to accommodate the newcomers crowding at the gate.
More significant still: in all stable rural communities, custom prescribed controls on marriage that had the effect of reducing birth rates to levels that more or less matched up with prevailing death rates and rates of migration away from the village. Various elaborations of dowry rules, for instance, had the effect of postponing marriage in many communities until bride and bridegroom had in hand enough property to assure the new family of a standard of living equivalent to that their parents had known. In city environments, where wastage of population had traditionally prevailed, similar restraints on early marriage and procreation were characteristically limited to propertied classes. Poor urban youths, among whom employment was not usually hereditary, had no reason to wait for their parents to attain an age for retirement, as peasant dowry rules in effect often provided.68 Hence older restraints upon early marriage and procreation were weakened or decayed entirely in urban settings. This, together with the withdrawal of epidemic disease as a serious drain upon human populations since 1900 (or, in Asia, since 1945) underlies the truly extraordinary upsurge of human numbers in our own time.69
Other implications of the demographic relation between city and country extend to redefinitions of what work is; divorce between social rank and possession of land; psychological reactions to crowding, etc. To explore these further would take us too far from the theme of this book; but the transformation of traditional relationships between town and country is surely a fundamental axis of humanity’s encounter with the twentieth century all around the globe. Behind this change lies the series of medical and administrative improvements in urban housekeeping triggered by Europeans’ fear of cholera in the nineteenth century.
International medical co-operation also achieved new efficiency as a result of Europe’s encounter with cholera. International medical congresses date back to 1851 when experts met in Paris to try to settle the disputed question of quarantine, and whether it was effective against cholera and other diseases. Mediterranean doctors and governments, inheriting the methods that had been developed against plague, continued, by and large, to believe in contagion and the effectiveness of quarantine; the sanitary reformers of Britain and northern Europe were scornful of such antiquated ideas, believing that miasma from stinking refuse and sewage was the principal cause of disease. The conference therefore effected nothing but an exchange of views.
Nevertheless, international co-operation against cholera and plague was not entirely fruitless. The main theatre of co-operation was at first in Egypt. As early as 1831, when cholera first approached, the consuls of the European powers stationed in Alexandria had been asked by Egypt’s modernizing ruler, the Albanian adventurer, Mehemet Ali, to constitute themselves a Board of Health for the city.70 They continued to constitute a sort of special health outpost for western Europe thereafter, keeping track of the epidemiological fate of the Mecca pilgrims, and issuing warnings of the appearance and disappearance of potentially dangerous outbreaks of disease in Egypt. Accordingly, when cholera returned to Egypt in 1883 it seemed no more than a prudent advance upon earlier prophylaxis to dispatch teams of European doctors to the scene, seeking to bring the new resources of bacteriology to bear upon the problem.
The result was spectacular: within a few weeks, the German, Robert Koch, announced that he had discovered the bacillus causing cholera, thereby, as we have seen, giving enormous new impetus to the germ theory of disease. Not only that: methods for guarding against cholera became self-evident as soon as the nature of the infection was known. Chemical disinfectants and heat could kill the bacillus; careful handling of sufferers could guard against passing the disease to others; and by 1893 a vaccine against cholera had been developed. Hence by the end of the nineteenth century, scientific medicine had discovered effective means to counter the dread disease.
Even quite simple administrative actions could have far-reaching consequences when they were guided by the new understanding of infection. Thus in Egypt, official regulation of the Moslem pilgrimage began in 1890, when smallpox vaccination was decreed for all pilgrims entering the country. This eliminated a formerly significant disease from the Moslem pilgrimage. In 1900, mandatory quarantine for all transients was ordered, and in 1913 Egyptian authorities instituted compulsory inoculation against cholera. Thereafter, cholera ceased to disfigure the Moslem pilgrimage.71 The disease remained common in India and sporadically affected China and some other parts of Asia and Africa until after World War II. But as a world scourge, the infection that had transgressed its traditional bounds, because of the application of scientific principles to mechanical transport early in the century, was effectively defeated by the application of similar scientific principles to health administration at its close. As such, the career of cholera offers an unusually tidy paradigm of the nineteenth century’s intensified encounter with infectious disease and the triumphant containment of the risks implicit in a megalopolitan, industrialized style of life.