We Sell Drugs: The Alchemy of US Empire

by Suzanna Reiss


  War policy and subsequent defense mobilization illustrated the power of synthetic drugs not merely in political and economic terms, but also, importantly, in terms of their potential impact on the human body. The US military was an important site for drug experimentation as well as a critical consumer market for US-manufactured drugs. Research that began in the context of helping soldiers overcome ailments would often subsequently be incorporated into civilian medical practice, and it was often military needs that initially determined which drugs were developed and to what ends. Doctors began testing procaine (novocaine) on injured soldiers at Fort Myer, Virginia, to minimize the pain resulting from “acute sprains and strains of ankles, knees and backs.” The success of these initial experimental uses of the drug was made public by Newsweek, which enthused, “Men who had hobbled and been helped to the hospital were able to walk naturally immediately after treatment and were quickly returned to heavy duty with no ill effects.”25 Research on procaine’s possible uses was extensive and revealed other advantages beyond its painkilling powers. Dr. Ralph M. Tovell of Yale University and chief of anesthesiology at Hartford General Hospital, “among the first to persuade the Army of the United States to treat soldiers’ wounds with procaine,” found in his experiments that one advantage of the drug was that it “was less habit forming than morphine.”26 Dr. Tovell and other researchers at universities, hospitals, and military clinics, during and after the war, experimented with procaine for a wide range of therapeutic possibilities. By 1947, while “the subject was one on which much work by anesthetists and other doctors must still be done,” procaine, as described by the president of the International Anesthesia Research Society, “gave promise of developing into an aid for sufferers of arthritis, gangrene, diabetes and similar afflictions.”27

  FIGURE 5. Merck and Co., Inc. Louis Lozowick's artistic depiction of an aerial view of a Merck chemical manufacturing plant, commissioned by the company for an advertisement. The lithographic print captures American modernist infatuation with the machine age and industrial innovation, and the pharmaceutical giant's iconic place within it [Smithsonian American Art Museum, Gift of Adele Lozowick © 1944, Lee Lozowick].

  Clearly the ailments such drugs might relieve made them beneficial to realms outside of the military; however, it is important to note that this early emphasis shaped the landscape of drug production and was tied to an anticipated consumer market. Along with the creation of painkillers, the first major breakthrough in synthetic drug manufacturing, hailed as a revolution in pharmacy, came in the effort to combat malaria among the armed forces deployed to tropical countries during World War II. In a study of the history of this development the WHO explained: “[W]hen Anglo-American forces landed in North Africa, Indonesia [a natural source of quinine provided by the cinchona tree] was in the hands of the enemy. The health authorities no longer had free choice of drug and so quinacrine was prescribed. . . . It can be said that the ‘era of the synthetic antimalarials’ dates from that time.”28 The US Office of Scientific Research and Development (OSRD) launched mass production of penicillin during the war as part of the agency’s mandate to “initiate and support scientific research on medical problems affecting the national defense.” In a report to the US president entitled Science: The Endless Frontier (which became the basis for the establishment in 1950 of the National Science Foundation), the director of the OSRD, Vannevar Bush, linked government-sponsored drug production to the success of the war effort. What Bush termed the “physiological indoctrination” of soldiers (with drugs) provided critical support against “the disastrous loss of fighting capacity or life.”29

  As military priorities led to innovations in drug development, the laboratory gained increasing importance as a source for manufacturing drugs synthetically, to avoid dependence on raw material flows that might be disrupted by war or political instability, and to empower soldiers in their work. The production of laboratory-synthesized drugs for military consumption also made them available for other consumer markets. Bush celebrated how the war’s “great production program” made penicillin “this remarkable drug available in large quantities for both military and civilian use.”30 In the civilian realm, drug control and development priorities also reflected the unequal distribution of power internationally. For instance, while synthetic drug innovation would remain valuable for future military deployments, the same drugs became valuable resources especially for use by other travelers, most frequently tourists or business employees working for North American or European companies.31 The WHO described how by 1953 synthetic antimalarials made “possible traveling, staying or working in the endemic regions with results equal or even superior to quinine.” It was clear that the benefits derived from such drugs were not distributed equally. The WHO advised that it was necessary to extend “this protection and not to limit it to non-immune, non-indigenous persons or those working for them.” As the study concluded, “Among the indigenous population the children are those who are non-immune. It is certain that so far few children have benefited from preventive medication. . . . Although the era of synthetic antimalarials has arrived, the social position has not greatly changed.”32

  This “social position” of drugs was true both of the sites and bodies on which their development relied for testing and of the populations initially envisioned as their primary consumers. Thus drugs tested on soldiers for soldiers and other military personnel would also become useful to corporate and pleasure-seeking visitors in colonized or “undeveloped” countries, travelers from the same countries out of which the soldiers initially came. The international “social position” of a drug was influenced by the objectives spurring its initial development, and also by disparities in distribution and popular access to it. Until the “distribution problem” was solved in less developed parts of the world, if present conditions were not “greatly changed and if economic development is not accelerated, only temporary and non-indigenous residents will greatly benefit from the advances made.”33 Scientists working in the field at mid-century were aware of inequalities in access to newly manufactured drugs, yet the promise such drugs held was not questioned. This created an opportunity for drug diplomacy, whereby symbolic and material efforts to redress uneven access, particularly among populations in the non-industrial world, became a central component of US (and increasingly Soviet) efforts to foreground health initiatives as exemplars of benevolent superpower intent.

  Public health diplomacy—including the celebration and distribution of wonder drugs as markers of the pinnacle of Western medical advancement—became a prominent public component of projections of American power in the world. In October 1950, Assistant Secretary of State for Economic Affairs Willard L. Thorp declared, “World-health improvement has become a major concern of American foreign policy. Health has become recognized as a major factor in economic and social progress throughout the world—and thus in the preservation of peace.” The US surgeon general echoed such sentiments, explaining that US Army and Navy wartime involvement in civilian health problems in “far-flung combat theaters” and in “liberated or conquered areas” provided a strategic precedent for the ways in which the “promotion of world health came to be recognized as a major instrument for attaining our goals of world peace and prosperity.”34 As the United States sought to step into the power vacuums left by World War II and collapsing European empires, public health initiatives provided a seemingly neutral and unimpeachable realm of intervention. In a geopolitical context animated by anticolonial movements and burgeoning Cold War rivalries, drug trade regulations ensured industrial powers’ virtual monopoly over the manufacturing of legal drug commodities, while providing a formidable weapon in competitions for global influence.

  The wonder drugs were hailed by private and public spokespeople alike as critical tools for gaining allies and securing US power and influence in the world. In 1955, Business Week celebrated the “fantastic growth” of US drug sales in foreign markets by emphasizing the humanitarian implications: “Millions of people in the underdeveloped parts of the world . . . have become acquainted with ‘miracle’ drugs since the end of World War II.”35 The advances of Western science were ultimately (if unevenly) to be exported to the rest of the world to assist in its “development.” The article’s message was dramatized in a split image in which a graph depicting the growth in US exports was directly related to the work of Western medical practitioners in the “underdeveloped” world, in this case the administration of eyedrops to a small “desert child.” The ideology that accompanied US economic expansion often relied on such representations of the benevolent and unquestioned progress US products brought to the peoples of the world—people who were depicted as being unable to provide for themselves. While many of the drugs developed did indeed transform life expectancy and alleviate illness, the resources invested in them often structured drug development not only toward treating the ailments of privileged populations, but also toward creating a global dependence on Western manufacturers whose drugs replaced indigenous medicinal plants in the very regions where they were cultivated.

  Beyond structuring the international drug economy to the disadvantage of raw materials–producing countries and providing powerful diplomatic leverage to drug-manufacturing countries, this system also valorized Western science, often in disregard of local belief, custom, and experience. As Marcos Cueto has described initiatives to introduce Western medical practice and medicines in Peru: “In many Andean localities Western medicine was absent; and where it was available, it was applied in an essentially authoritarian way, with an unlimited confidence in the intrinsic capacity of technological resources and little regard for the education of the Indian people. Practitioners of modern medicine . . . assumed that in a ‘backward,’ nonscientific culture, disease could be managed without reference to the individual experiencing it.”36

  FIGURE 6. Graphics from a 1955 Business Week article celebrating the “fantastic growth” of US pharmaceuticals' foreign market.

  Within an emerging international system for the manufacturing and controlled distribution of drug commodities, such issues were not of central concern to the confident circle of scientists and experts working in the field of development connected to poverty, nutrition, and health. Indigenous populations’ own beliefs about the foundations of medicine and health were rarely taken into consideration. Nevertheless, as with the “desert child” invoked above, in the US public imaginary such populations often embodied proof of the beneficence of US capitalist expansion, reframing it as bringing health and progress to “less fortunate” parts of the world. The regime not only increasingly entrenched an international economic hierarchy between states but also provided a rationale justifying and perpetuating inequality between peoples within states. The ready objectification of the “desert child” as a site for the performance of Western benevolence and the easy dismissal of alternative cultural understandings of health were indicative of the ways in which drug control policy infused race, class, and geography into a new imperial ideology. While the history of science as a bolstering force behind European and American colonialism stretches back at least into the nineteenth century, as Ashis Nandy has argued, the post–World War II moment marked a shift as science and development became increasingly central categories of national security, and science itself became “a reason of state,” potently on display in US Cold War policy.37 As Shiv Visvanathan further elaborated: “Progress and modernization as scientific projects automatically legitimate any violence done to the third world as objects of experimentation.”38 As scientists, government officials, pharmaceutical executives, and international organizations debated the parameters of drug control, they approached indigenous people of the Third World and the poor and marginalized of the industrial world much like the raw material coca, with a laboratory-like gaze in which these people were considered not independent political actors but (often childlike) malleable objects ripe for the social and chemical engineering of other synthetic futures. This dynamic was clear as debates over drug control provided a setting for the working-out of great power rivalries, while reasserting the First World’s dominating influence over the economic and political trajectories of “underdeveloped” countries and peoples.

  COLD WAR PROTOCOLS

  For US and UN officials concerned with international drug control, the profusion of wonder drugs posed a new regulatory challenge as they worked on devising oversight mechanisms to channel manufactured drugs’ productive power—their promise and peril—toward their own sanctioned ends. The dreamer behind “Man’s Synthetic Future,” the “scientific statesman” Roger Adams, who was deeply involved in advancing chemistry’s role in both government and business, having served among many other posts as a consultant for the National Defense Research Committee and the Coca-Cola Company,39 captured the fear lurking at the edges of the wonder: “The future may bring us a series of drugs that will permit deliberate molding of a person, mentally and physically. When this day arrives the problems of control of such chemicals will be of concern to all. They would present dire potentialities in the hands of an unscrupulous dictator.”40

  This dystopian vision of nefarious forces using drugs to manipulate human bodies and social organization was the logical counterpoint to celebrations of their ability to bring “peace and prosperity.” Both projections accepted the proposition—at once celebrated and feared—that governments might use drugs to influence society (and implicitly, that the consumption of drugs—the physical impact—had predetermined social consequences). The distinction—one good, one bad—between the US military’s reliance on drugs for the “physiological indoctrination” of soldiers and a “dictator’s” use of drugs for the “deliberate molding of a person” rested on moral, cultural, and political arguments to justify the regulation and policing of drugs, even while advancing a belief in the power of drugs to transform publics.

  Such arguments held enormous weight when drug control officials sought to dictate the trajectory of drug production, distribution, and consumption, from the raw materials through to the finished goods. The “two serious new problems” first identified by the UN Commission on Narcotic Drugs (CND), the primary body governing the international drug trade, stemmed from the “habit of chewing coca leaves” and the new abundance of “man-made drugs.” Reporting on the CND’s activities for the Washington Post, Adelaide Kerr explained how a duality intrinsic to the drug revolution generated the need for regulation: “Rightly used, many of these drugs are boons to mankind, but wrongly used they can wreck health, destroy men’s moral sense to such an extent they often turn into criminals, ruin their ability for constructive work, impoverish them, reduce them from producers and wage earners to charity charges of the state and because of these and other reasons, produce extremely serious economic and social problems for their countries.”41

  The belief in the capacity of drugs to improve “mankind” relied on the depiction of drugs as powerful agents: capable of turning people into wage-earning, productive members of society, or, in contrast, of transforming them into destructive elements and economic drains on the state. The government had a primary interest in securing economic advantage within the drug trade, which entailed influencing the consuming habits of the population. When framed in this way, the challenges confronting the drug control regime were twofold in the quest to realize “boons to mankind.” First was the question of determining which drugs were most valuable. Second was the need to implement regulations and oversight to ensure consumer demand for all drugs remained in legitimate channels. Drug control was not geared toward eliminating dangerous drugs; rather, it was oriented toward harnessing the productive potential of drugs and delineating the relationship of various countries and populations to the “legal” international drug trade. Controlling the flow of raw materials to limit the nature, extent, and geography of manufactured drug production was one component. Controlling the circulation and consumption of manufactured drugs themselves was another. And so, along with initiatives in the Andes to control coca leaf production, a concerted international campaign was launched, spearheaded by US representatives at the United Nations, to extend the regulatory regime’s jurisdiction to encompass new synthetic “man-made” drugs.

  Public officials attending the United Nations aired these preoccupations in late 1948 when they convened to draft, debate, and ultimately adopt the Protocol Bringing under International Control Drugs Outside the Scope of the Convention of 13 July 1931 for Limiting the Manufacture and Regulating the Distribution of Narcotic Drugs (the 1948 Protocol). This treaty launched international regulation of synthetic drugs—substances previously “outside the scope” of legal supervision. Eleanor Roosevelt, the former president’s widow and US delegate in attendance, contributed to the sense of urgency as she recounted how synthetic drug production “was so easy that a single factory could flood the world market with products of that category,” and insisted, “the machinery for controlling narcotic drugs should be extended and modernized.” Roosevelt reported that the “United States would give its full support to the draft protocol,” and delicately tried to overcome a central point of contention among world powers about whether the protocol would apply to colonial and other non-self-governing territories: “It is hoped that the General Assembly would approve the protocol during the current session and that all Governments would apply it without delay in their dependent territories.” In a series of exchanges that augured the role drug control would increasingly play in anticolonial and Cold War conflict (addressed more extensively in the next chapter), the disputes surrounding passage of the protocol mirrored those accompanying global power realignments.

 
