The Coming Plague


by Laurie Garrett


  From the beginning, however, McCormick knew that impoverished societies like Sierra Leone could never afford to buy enough ribavirin, build enough hospitals, and train adequate personnel to curtail Lassa fever deaths. As he struggled to find a solution, including searching for ways to eliminate Mastomys rats, McCormick began to appreciate the scale of the problem. As had many Europeans and North Americans before him, and as would others afterward, McCormick was acquiring a deep sense of the “infrastructure” problem. Every day his midwestern can-do mettle was put to the test as his time was wasted repairing an electrical generator, rebuilding a washed-out bridge, sewing up holes in mosquito nets, negotiating receipt of illegible carbon copies of documents from self-important bureaucrats, training bright unskilled people in basic hospital practices, and traveling from one incredibly remote location to another.

  “It does sometimes matter whether you know how many logs it takes to float a Land-Rover,” McCormick would tell his colleagues back in Atlanta.

  In the late 1970s Sierra Leone had a population of 4 million people, representing a polyglot mixture of more than ten tribes, at least five distinct language groups, and three mutually hostile religions. Most Sierra Leoneans survived on marginal or subsistence agriculture. What wealth existed in the country was concentrated in the very few hands that had a role in the management of the nation’s diamond or bauxite mining and exportation industries.

  The average baby born in Sierra Leone in 1977 had about a one-in-ten chance of surviving a host of infectious diseases and chronic malnutrition and reaching adulthood; having approached that milestone, men could expect to live to the ripe old age of forty-one, women six years longer. Infant mortality was high: 157 of every 1,000 babies died before their first birthday. And for those older children and adults who fell ill, scant curative facilities were available. Fewer than 150 doctors, many of them foreigners, treated the 4 million citizens of Sierra Leone in a patchwork of hospitals and clinics nationwide that could provide only about 4,000 hospital beds. Not surprisingly, most of the population sought medical help from traditional herbalists and sorcerers, rather than what was offered in these meager Western-style facilities.

  Though the British prided themselves on leaving the stamp of English civilization upon all their colonies, less than 10 percent of the Sierra Leone population was literate when the country gained its independence in 1961. In 1787 Britain had founded the nation of Sierra Leone where no such country had previously existed, carving out boundaries for a slave-free state. Though the British continued to play an active role in the slave trade well into the nineteenth century, the government was compelled by domestic English dissent in the late eighteenth century to provide a safe haven for escaped slaves and the descendants of interracial couples. Thus, Freetown was created.

  Nearly two hundred years later, the creole descendants of those freed slaves were a distinct but tiny minority population of some 60,000 people, representing the bulk of the well-educated elite of the country. In the first decade of self-rule, Sierra Leone’s creole-dominated government spun wildly out of control, with corruption, graft, and mismanagement rife throughout state-operated sectors of the society. Roads, schools, and hospitals deteriorated, new construction was concentrated in the country’s three main cities—Freetown, Bo, and Kenema—and subsistence existence in the villages became even more difficult.

  By the time McCormick and Webb set up their remote Lassa laboratory, Sierra Leone was coming out of a ten-year period of political instability and violence, had established a one-party republic, and was so far in debt to the International Monetary Fund, the World Bank, and other, largely British, creditors that annual national revenues were diverted from the people and projects in desperate need nationwide to pay interest to lenders in London, Geneva, New York, and Paris.

  Unfortunately, there was nothing unique about Sierra Leone. The lack of basic infrastructures, such as roads, schools, hospitals, shipping and supply routes, electricity, and telephone systems, was hobbling African development. Political instability and the corruption that seemed to go hand in hand with militarism and elitist oligarchic government were draining the lifeblood of once proud agrarian societies from Casablanca to Cape Town.

  While the wealthy nations made large commitments to infrastructural development in Latin America and parts of Asia, no such obligation seemed to be felt toward Africa. The continent most ravaged by colonialism, resource exploitation, slavery, and cultural destruction was, as a result, now starving and dying of so many different infectious diseases that even sophisticated physicians often found it impossible to assign specific causes of death to their patients. Twenty-six of the world’s seventy most impoverished nations were in Africa. In most of these countries, the daily caloric intake of the average person was below that considered essential to support health.6

  Several factors well outside the control of even the best-managed underdeveloped countries were suffocating the economies of the world’s poorest nations. Western scientists like Karl Johnson, Joe McCormick, Pat Webb, Pierre Sureau, and Uwe Brinkmann constantly felt the frustrations of working in what was then termed the Third World. Though the plight of the poor nations was hardly a revelation to their own populations, the genuine causes and effects were often surprising and disturbing to well-intentioned foreigners.

  This paradigm of perpetual poverty became obvious to McCormick and others like him the moment some piece of essential machinery broke down: a car, generator, centrifuge, microscope, autoclave, or respirator unit. Rarely could someone be found to service the broken equipment because such wealthy-nation gadgetry was simply too scarce to support a domestic service economy. So scientists like McCormick often spent hours under the hoods of their Land-Rovers trying to identify the culprit in malfunctioning engines.

  Once the faulty transmission, for example, was identified, the next step was finding a replacement. If no other Land-Rover was available to cannibalize for spare parts, McCormick would have to order a new transmission, shipped from London at enormous cost. Because he had U.S. dollars, McCormick could pay the British exporters for the needed transmission; a Sierra Leonean resident, whose leones were each worth less than U.S. $0.08, had no currency that the British exporter would accept.

  Even with the advantage of valued foreign exchange—“4-X,” as it was colloquially called—in the form of U.S. dollars, McCormick’s difficulties in obtaining a new transmission would only have begun. Once the desired part arrived at the Freetown docks or airport, already having cost McCormick an enormous amount of money in purchasing and shipping, it would remain locked up for days or weeks in a government warehouse while the American negotiated a maze of bureaucratic paperwork and duty fees. If any single piece of paperwork was deemed improper, McCormick might never be allowed his transmission.

  And during that time, the precious transmission, whose value in any African country in 1977 was extraordinary, would rest in a loosely guarded warehouse, ripe for pilfering.

  This scenario was not unique to Sierra Leone, or to Africa. Rather, it was the state of affairs in nearly all of the world’s poorest nations in the 1970s, and would remain so well into the 1990s.

  While their populations exploded in size, national debts mounted, and political instability increased, the world’s poorest countries searched for ways to raise foreign exchange capital that would enable them to purchase essential goods for infrastructural development, such as generators, highway construction materials and equipment, and hospitals. Those nations that possessed mineral resources of value to the West mined their bauxite, copper, diamonds, gold, silver, and other ores and gems at a furious pace, selling the materials in exchange for strong foreign currencies or gold. If no such prized goods could be gleaned from the soils or waters of the country, governments sought ways to exploit their agricultural, forestry, or fishing resources for the highly prized foreign dollar, pound, franc, yen, or mark.

  But they soon discovered that the buyers for all their goods were far better organized than were the scattered competing sellers. The buyers set the prices, and throughout the 1970s global pricing for most resources fluctuated wildly. Corn, rice, coffee, cocoa, wheat, sugar, bananas—all the classic export crops raised in developing countries—sold at radically variable prices year by year.7 The variation made it almost impossible for these countries to plan domestic economic development.

  Despite such market irregularities, the World Bank, the International Monetary Fund, and major foreign aid spenders on both sides of the Iron Curtain continued to fund and promote investments in large-scale projects such as enormous hydroelectric dams, international airports, and containerized shipping ports. Such projects, which would often be named after the receiving nation’s head of state or a recent political hero, appealed to national pride and the prestige of both donors and recipient political leaders.

  But they usually had no ameliorating impact on the health of average citizens, and all too often worsened conditions, giving further advantages to the microbes.

  For example, malnutrition was a widespread and increasingly severe problem throughout the least developed parts of the world in the 1970s, and would continue to be serious, occasionally reaching famine conditions, as the millennium approached. Among the cells of the human body most dependent upon a steady source of nutrients are those of the immune system, most of which live, even under ideal conditions, for only days at a time. As nutritional input declines, these vital cells literally run out of fuel, fail to perform their crucial disease-fighting tasks, or, in worst cases, die off. The body may also lack nutritional resources to make replacement cells, and eventually the immune deficiency can become so acute that virtually any pathogenic microbe can cause lethal disease.

  Yet the primary economic change in most of the world’s poor countries in the 1960s and 1970s involved the creation of export crop systems. Regardless of their political tendencies, governments allocated even more prime agricultural land to production of crops intended for export sale, all in pursuit of foreign exchange. The result was a decline in domestic food production and higher local market prices for grains, vegetables, dairy products, and meat.

  Noting that five corporations controlled 90 percent of all international grain sales, four corporations monopolized 90 percent of the world’s banana trade, and one multinational had cornered 80 percent of the global markets in corn, soy oil, and peanut oil, American critics Frances Moore Lappé and Joseph Collins warned that “multinational agribusiness corporations are now creating a single global agricultural system in which they would exercise integrated control over all stages of production from farm to consumer. If they succeed, they—like the oil companies—will be able to effectively manipulate supply and prices on a worldwide basis through monopoly practices.”8

  In the early 1970s the world’s poorest countries formed a voting bloc in the United Nations, dubbed the Group of 77. They sought to force an open discussion of world economic reform issues and use their UN voting leverage to create a “strategic solidarity” against the multinational corporate interests of wealthier nations.

  Though the Group of 77 effort quite effectively disrupted United Nations activities for years and resulted in dramatic personnel changes throughout the entire system, it did not fundamentally alter the course of events in crop exportation and food distribution. The Western capitalist governments generally ignored the Group of 77’s demands when possible, and effectively counterargued when necessary. The two primary counterarguments were, first, that food scarcity was a function of swelling population sizes rather than of global food distribution patterns, and second, that restricting the activities of multinational corporations was not only unfair to those companies and their stockholders but also counterproductive. In the face of hostile restrictions, they argued, corporate investors would simply abandon the poorest nations altogether.

  Further, in the tense Cold War atmosphere of the 1970s all debate about the wisdom and fairness of various policies of development reform was sharply polarized, and it was nearly impossible for countries to navigate independent nonaligned pathways toward advancement. In capitalist circles and at the world’s leading lending agencies, the general view was that nations had to modernize first, developing industrial capacities and sizable consumer classes. The benefits of economic modernization would eventually trickle down throughout society, resulting in improvements in education, transportation, housing, and health.

  The staunchest advocates of modernization pointed to the Marshall Plan recovery of post-World War II Europe and the MacArthur Plan’s efficacy in rebuilding Japan. They argued that a concerted path toward free market capitalist industrialism was the ideal way to raise the standards of life and health of a nation’s people.

  Stalinist modernists also promoted the notion of industrial development first, social advancement second. Throughout the Soviet Union and the Eastern bloc, massive steel and iron production foundries were glorified, the workers depicted as strong, healthy human beings. According to official Soviet statistics submitted to the World Health Organization in the 1970s, virtually every imaginable infectious disease was on the decline or had disappeared, thanks to communist policies. It was widely believed in international health circles at the time that these statistics were wholly fabricated.9

  Both superpowers and their allies favored funding projects that had high propaganda value, and most funding was strategically directed. For example, in 1978, half of all World Bank lending went to Brazil, Indonesia, Mexico, India, the Philippines, Egypt, Colombia, and South Korea.10 Half of all U.S. nonmilitary foreign aid went to ten strategic nations, five of which were also on the World Bank’s list of key beneficiaries: Egypt, Israel, India, Indonesia, Bangladesh, Pakistan, Syria, the Philippines, Jordan, and South Korea.11 In 1952 none of the U.S. foreign aid budget went to Africa. By 1968 U.S. nonmilitary aid to the continent had increased only slightly: excluding Egypt (which was considered of Middle East strategic interest), 8 percent of all nonmilitary foreign aid went to African countries.

  Though details of Soviet nonmilitary foreign aid policies were rarely disclosed, the bulk of its donated largesse went to Cuba, Vietnam, Laos, the Eastern bloc nations, and key strategic points of mixed Cold War allegiance, notably Egypt and India.

  From both sides of the Iron Curtain, donors’ monetary contributions to poor nations were all too often linked to prestigious showpieces: hydroelectric dams, international airports, university complexes, tertiary care hospitals. Usually ignored were community-based projects, such as schools, medical clinics, skills training programs, or public health campaigns. Worse yet, donors preferred one-shot investments and were nowhere to be found when it came to the long-haul maintenance of their high-profile efforts; even the dams, airports, and massive construction projects soon took on a shoddy, potentially dangerous reality under their previously polished veneers. Lacking the foreign exchange to purchase replacement parts, hire expertise, or carry out routine maintenance, the poor countries had no choice but to let cracks go unchecked in their dams, watch helplessly as the tarmacs of their runways deteriorated, and use staircases when the elevators of their fancy office buildings broke down. Over a third of typical developing country budgets was eaten up by recurrent costs, while donors insisted on funding only new, prestigious programs.

  Nongovernmental investment in developing countries came exclusively from the capitalist and social democratic states of North America and Europe, and was heavily targeted toward the acquisition of vital resources. In Africa, in 1977, 56 percent of U.S. private investment was in petroleum, 26 percent in mining, and 6 percent in manufacturing.12

  From the socialist and nationalist movements and intellectual circles of South America and Africa emerged the dependency theory of development. Overall, the dependency theorists provided cogent criticisms of Western modernization strategies and investment policies, avoided issues related to Soviet activities, and had no consensus on an alternative approach to raising the standards of living and health of the people of the Third World. They represented more a force of opposition than an alternative scheme for development.

  Most of these critics (notably such intellectuals as André Gunder Frank, Theotonio dos Santos, Fernando Henrique Cardoso, and Enzo Faletto) argued that acceptance of loans and aid from multinational corporations and lending agencies led to cycles of ever-greater dependency and debt. For example, the poor country that wishes to build a hospital turns to a wealthy nation for donations and loans. Once granted, the hospital’s construction leads to a new dependency on Western-style medicine, drugs, and machines. Purchasing replacement parts for American X-ray machines or French autoclaves exhausts the country’s small foreign exchange resources. Eventually, the hospital becomes a drain rather than a boon to the society, adding a budget line to the Ministry of Health’s already overdrawn accounts. The dependency theorists argued that poor nations lost out in two ways: they were compelled to purchase all equipment and expertise from the richer countries, and whatever products they, in turn, produced had to be sold back to those same wealthy-nation interests at prices set by the purchasers. This, they insisted, represented a lose-lose situation.

  By the late 1970s even Western investors were beginning to recognize that modernization wouldn’t inevitably bring twentieth-century European standards of living and health to the Third World. In the early 1970s the U.S. Agency for International Development had focused on gross national product (GNP) growth as the crucial measure of success for Third World countries. By 1977 the agency’s administrator, John J. Gilligan, was compelled to reverse that policy.

 
