In the course of this establishment of a world market, a great reversal took place between 1850 and 1900: famine definitively disappeared from Western Europe but spread with devastating effects in the colonial world. Two series of famines between 1876 and 1898, linked to El Niño climate episodes, caused between 30 and 50 million deaths across the world, principally in China and India. Neither country had previously known a disaster on this scale. Similar droughts in eighteenth-century China had been satisfactorily managed by the Qing dynasty, thanks to the system of imperial granaries, long-distance transport by the Grand Canal linking north and south China, and emergency grain distribution.
To understand the human impact of these climate episodes, therefore, we need to look beyond natural causes: the vulnerability of both Indian and Chinese societies was due to the dislocation of their systems of resilience and assistance. China was emerging from the two Opium Wars and the terrible Taiping Civil War, itself due largely to the weakening of the Middle Kingdom under the hammer blows of European colonialism. As for India, the aim of British policy was to increase its agricultural exports even in times of famine. This great disaster must therefore be understood as the combination of a regular and quite commonplace climate phenomenon with the construction of a world cereals market centred on London and Chicago (Indian harvests being already purchased on a futures market) and with the dismantling of Asian societies by colonialism.69 Thus, in the midst of the famine, an ever-greater share of India’s agricultural products was destined for export: jute, cotton and indigo, but also wheat and rice for the world market. Rice exports in particular grew from less than 700,000 tonnes to over 1.5 million tonnes in the course of the last third of the nineteenth century.70
The ecological consequences of the second industrial revolution in the peripheral countries were equally dramatic. The gutta-percha tree disappeared from Singapore in 1856, then from several Malaysian islands.71 At the end of the nineteenth century, the rush for rubber took hold of Amazonia, causing deforestation and massacres of Indians. In the early twentieth century, rubber production moved from Brazil to Malaysia, Sri Lanka, Sumatra and then Liberia, where British and American companies (Hoppum, Goodyear, Firestone) established immense plantations. These destroyed millions of hectares of indigenous forest, exhausting the soil and introducing malaria.72 In the Congo in the 1920s, the development of rubber plantations, mining exploitation and railways caused the first regional spread of HIV.73
It was in this way, in the last third of the nineteenth century, that ‘underdevelopment’ was born. The massive economic gap between Europe and North America on the one hand, and Asia on the other, dates from this time. Between 1800 and 1913, European per capita income rose by 222 per cent, that of Africa by 9 per cent and that of Asia by only 1 per cent.74 The last third of the nineteenth century, and the beginning of the twentieth, finally saw the emergence of rival powers that undermined British hegemony: the United States first of all, but also Germany, France and then Japan. The rise of competition accelerated imperial projects: in 1800 the European powers politically controlled 35 per cent of the Earth’s land surface, 67 per cent in 1878 and no less than 85 per cent in 1914.75 Empire played a key role in world economic development, keeping the British world system afloat. India in particular constituted an immense captive market, becoming the leading importer of British products. Without Asia, which generated 73 per cent of British commercial credit in 1910, Britain would have been forced to abandon free trade with its commercial partners (the United States, the white dominions, Germany and France), which would then have experienced a loss of outlets and a slow-down in economic growth. The world economy would have fragmented into autarkic trading blocs, as happened in the wake of the 1929 economic crisis.76
The unequal world-ecology of the Great Acceleration
After two world wars and a great economic depression, the world entered a period of historically exceptional growth after 1945, marking the ‘Great Acceleration’ of the Anthropocene. Whereas in the first half of the twentieth century an annual increase of 1.7 per cent in the use of fossil fuels was required for an economic growth rate of 2.13 per cent, between 1945 and 1973 an annual rise of 4.48 per cent in fossil fuels (not to mention uranium) corresponded to an economic growth rate of 4.18 per cent. Between 1950 and 1970, the world population grew by 46 per cent, world GDP by 2.6 times, the consumption of minerals and mining products for industry by 3.08 times, and that of construction materials by 2.94 times.77 Because mineral resources were substituted for biomass in construction, petroleum products for animal energy and fertilization in agriculture, and synthetic products for vegetable dyes and agricultural fibres in clothing, biomass was the only input whose consumption grew more slowly than the economy – a sign of the globalization of the switch from an organic economy to a fossil one. The share of humanity that had moved from the metabolism of an agricultural society (annual energy consumption of about 65 gigajoules per capita) to an industrial metabolism based on fossil energies (223 gigajoules per capita) grew from 30 per cent of the world population in 1950 to 50 per cent by 2000.78
The Great Acceleration was thus not a uniform phenomenon of accelerated growth, but a qualitative change in lifestyle and metabolism, tying strong world growth to an even stronger growth in fossil fuels (especially oil, which dethroned coal) and mineral resources – and so representing a loss of matter and energy efficiency on the part of the world economy. The process was also unequal geographically and socially, shaped by the dynamic of a world-system now dominated by the United States in the context of the Cold War. Emerging from the Second World War, American power was at its apogee. While the European economy lay in ruins, the GDP of the United States had more than quadrupled since 1939, and the country possessed immense currency reserves. At the end of the 1940s, the US accounted for 60 per cent of world industrial production, produced nearly 60 per cent of world oil (and consumed as much) and represented a third of world GDP, whereas Great Britain at its apogee in 1870 had held only a 9 per cent share of world GDP.79
In the immediate post-war years, the US government was concerned to create conditions favourable to the expansion of its economy, and to the growth of the Western camp in general. It was in this context that a new international economic order was established, based on free trade and growth: the Bretton Woods Agreements of 1944 established the dollar as world currency, and the General Agreement on Tariffs and Trade (GATT) liberalized trade in 1947; these were coupled with the Marshall Plan and the ‘Point Four’ programme of development aid announced by Truman in 1949. This world order made it possible to find outlets for the United States’ gigantic industrial and agribusiness production, and ensured full employment and social pacification after the great strikes of 1946. It also aimed to stabilize the Western camp socially by drawing it into growth. The Fordist and consumerist social compromise was then viewed as the best rampart against Communism.80 The goal was also to ‘develop’ the Third World so as to prevent it turning to Communism, while ensuring cheap raw materials for the United States and its industrialized allies. In the 1950s and ’60s, a gigantic exploitation of natural and human resources enabled the Eastern bloc to put up a good show in the arms race, in space, in production and even in consumption, which was by no means the least important terrain of Cold War confrontation (see Chapter 7). To outrun the Communist camp, the Western countries made the OECD (heir to the organization that had administered the Marshall Plan) the strategic arm of their growth policies.
The creation of abundance in Europe and Japan, and the Pax Americana itself, depended on a key product, oil, to which 10 per cent of Marshall Plan funds were devoted. This oil aid greatly enriched the US oil majors (Standard Oil, Caltex, Socony-Vacuum Oil, etc.), from whom three-quarters of the oil financed by the Marshall Plan was bought, at higher-than-world-market prices.81 But oil was also a major geopolitical weapon, disempowering the left-wing European workers’ movements tied to coal (Chapter 5) and stimulating the growth of the Western allies.
The Soviet Union, for its part, could not provide its allies with fossil fuels but instead drew on the resources of Eastern Europe. Oil also transformed European agriculture, which adopted tractors, chemical fertilizer and pesticides. This ‘petro-farming’ proved highly costly in terms of energy: the rate of energy return from agriculture (number of calories obtained in food per calorie used in its production) fell in Britain from 12.6 in 1826 to 2.1 in 1981, in France from 3 in 1929 to 0.7 in 1970, and to 0.64 in the United States and Denmark by 2005.82
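Expressed as a simple ratio – a restatement of the definition in the parenthesis above, with notation of our own – the energy return on farming is:

\[
r \;=\; \frac{E_{\text{food obtained}}}{E_{\text{expended in production}}}
\]

so that Britain’s 12.6 in 1826 means 12.6 calories of food for every calorie invested, while a value below 1 – such as the 0.64 recorded for the United States and Denmark in 2005 – means that producing the food consumes more energy than the food itself contains.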
Whereas in the age of empires Western Europe had to import grain, meat and oilseeds, a new world ‘food regime’ set in after 1945. Stimulated by cheap oil and supported by state policy and export aid (the US Public Law 480 of 1954), the industrial countries (including those of continental Western Europe) became exporters of agricultural products, cereals in particular, to the Third World. This transformation promoted a rural exodus and low labour costs in the countries of the South that were seeking a path to industrialization, while the agribusiness multinationals conquered the world and reshaped eating habits.
The geopolitical and economic success of this growth-oriented Pax Americana was equalled only by the enormity of its ecological footprint, weighing on the entire planet. The indicator of humanity’s global ecological footprint83 rose from 63 per cent of the planet’s bioproductive capacity in 1961 to 97 per cent in 1975,84 reaching today a level of 150 per cent, or a consumption of 1.5 planets per year. Imports of materials, measured in tonnes and aggregating all products together (minerals, energy carriers, biomass, construction materials and manufactured goods), rose by 7.59 per cent per year between 1950 and 1970 in the Western industrial countries – North America, Western Europe, Australia, New Zealand and Japan.85 Almost self-sufficient in iron, copper and bauxite during the first half of the twentieth century, by 1970 the Western industrial countries presented a negative balance of 85 million tonnes of iron, 2.9 million tonnes of copper and 4.1 million tonnes of bauxite.86 In total, measured in tonnes, the imports of these countries rose from 299 million tonnes in 1950 to 1,282 million tonnes in 1970.87
If we consider the evolution of the balance of trade in materials between the different parts of the world (Figure 14), it appears that the basic ecological difference between the Communist and the capitalist systems lay in the fact that the Communist camp, for its development, exploited and degraded above all its own environment, whereas the Western countries built their growth on a gigantic draining of mineral and renewable resources from the rest of the non-Communist world, emptying it of its high-quality energy and materials.
Figure 14: Material balance of six major groups of countries since 1950
This colossal extraction of material from the peripheral regions of the world-system was the object of deliberate strategic attention on the part of US political leaders. In May 1945, Secretary of the Interior Harold Ickes wrote to President Roosevelt: ‘It is essential … that we fulfill the Atlantic Charter declaration of providing access, on equal terms, by all [Western] nations to the raw materials of the world’.88 Continuing the supply logic of wartime, access to such crucial resources as uranium, rubber and aluminium (the key material for modern aircraft) now became a matter of state, with policies implemented to secure energy and access to resources, from Venezuelan and Middle Eastern oil to Indian manganese and Congolese uranium. Whereas the rise of the United States to economic power between 1870 and 1940 had largely been based on the intensive use of its own domestic resources (wood, coal, oil, iron, copper, water, etc.), after the war the country moved from the position of net exporter of raw materials and energy to that of net importer: congressional reports and commissions (the Paley Commission of 1951–52), backed up by private think-tanks (Resources for the Future), now proposed mobilizing the world’s resources to secure the West while preserving American resources for the future.
The United States supported the movement of decolonization as a means of securing its supplies through direct access to resources, without the mediation of the European colonial powers. It initiated the ‘UN Scientific Conference on the Conservation and Utilization of Natural Resources’ (UNSCCUR, 1949), at which representatives of forty-nine countries called for an inventory and the ‘rational use’ of the planet’s natural resources – resources as yet unexplored or underutilized for lack of adequate technologies, or (in rare cases) deemed overexploited for want of scientific knowledge. The United States and the UN’s Western experts thus set themselves up as masters of world resources and guardians of their ‘right use’.89
US corporations also played a predominant role in the reorganization of world metabolism. Their globalization was fostered by the Second World War and the ensuing Cold War, and favoured by more advanced know-how (particularly in oil, nuclear and chemical technologies, but also in marketing techniques) and by solid networks within the Pax Americana. During the Second World War, the US Army was deployed on every continent, bringing with it the great supply corporations. The construction of military bases alone was worth $2.5 billion in contracts, to the profit of Morrison-Knudsen, Bechtel, Brown & Root, and the like. Added to this were the enormous needs in food and oil supply, logistics and so on. These companies developed the capacity to project themselves across the world and produce on a large scale, as well as the connections with military and political decision-makers that would transform them after the war into great multinationals. They established military bases, oil installations, pipelines, dams, refineries and petrochemical installations, nuclear power stations and mines, as well as factories for cement, fertilizers, pesticides and food products.90 Between 1945 and 1965, US corporations were responsible for as much as 85 per cent of the world’s new foreign direct investment.91
The control gained in this way permitted access to world resources on highly favourable terms. While, according to Paul Bairoch, the terms of trade improved for the Third World between the end of the nineteenth century and 1939, the striking phenomenon of the post-war years was their clear deterioration for those ‘developing’ countries that exported primary products to the industrial countries and imported manufactured goods from them: a deterioration of nearly 20 per cent between 1950 and 1972. For the oil producers this phase came to an end with the oil shock of 1973, but it continued until the 1990s for the countries exporting mining products or renewable raw materials.92 Economic growth and the social model of the Western industrial countries would have been impossible without this unequal exchange. Economists have recently shown that two-thirds of the growth of the Western industrial countries has been due simply to an increasing use of fossil fuels, with only one-third resulting from sociotechnical progress.93 The incomes of states, and their ability to finance investment and social redistribution, were also based on oil. In 1971, when the oil majors agreed with OPEC to raise the price per barrel from $2 to $3, the refined product was being sold in Europe at $13, 60 per cent of which went in tax to the consuming country. On each barrel of oil sold, in other words, the European states received roughly three times more than the OPEC producers.
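The arithmetic behind this claim can be made explicit (a rough calculation from the figures just given, not a breakdown taken from the cited source):

\[
\underbrace{0.60 \times \$13 \approx \$7.80}_{\text{tax per barrel to the consuming state}}
\qquad\text{against}\qquad
\underbrace{\$3}_{\text{per barrel to the OPEC producer}},
\qquad \frac{7.80}{3} \approx 2.6
\]

that is, on the order of three times as much for the consuming state as for the producer.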
This economically unequal exchange was also ecologically unequal. Among the three great countries rich in resources, the USSR reached 100 per cent of its biocapacity in 1973, and China reached this level in 1970 (and has continued to grow since, arriving at 256 per cent in 2009), whereas the US footprint was already 126 per cent of its territory’s biocapacity in 1961 and reached 176 per cent in 1973.94 Comparable figures for 1973 are 377 per cent for Great Britain, 141 per cent for France, 292 per cent for West Germany and 576 per cent for Japan, while many Asian, African and Latin American countries had a ratio below 50 per cent at that time, showing that the driving phenomenon of the Great Acceleration embarked on in 1945–73 was the tremendous ecological indebtedness of the Western industrial countries. These literally emptied the rest of the world of its materials and high-quality energy, a phenomenon that is the key to the Cold War (Figure 14). With access to these cheap resources, they embarked on an unsustainable model of development, with their massive emissions of pollutants and greenhouse gases dependent on the restorative capacities of the rest of the world’s ecosystemic functioning.
The Great Acceleration thus corresponds to a capture by the Western industrial countries of the ecological surpluses of the Third World. It appears, in other words, as the construction of an ecological gap between national economies that generated a great deal of wealth without subjecting their own territories to excessive impacts, and countries whose economies were burdened by a heavy footprint on their territory. Figure 15 gives a striking representation of this.95
Figure 15: Creditor and debtor countries in terms of ecological footprint in 1973
This map illustrates the unequal relations of ecological credit and debt that set in with the Great Acceleration. A team from the University of California at Berkeley measured both the unequal ecological footprints of nations and the regions on which these footprints especially weighed. It showed that the poorest countries have a small footprint, with very little effect on the spaces of the rich countries, whereas the rich countries have a large footprint that weighs heavily on the spaces of the poorest countries.96 For one dollar of GDP today, Mali and Bolivia, for example, have to extract twenty times more material from their territory than does the United States, and India and China ten times more.97
Take for example the situation of the forests during the Great Acceleration. Since the last glaciation, 10 million square kilometres of the world’s forest cover have been lost (forty-three times the area of Great Britain), half of this in the twentieth century alone, reducing the planet’s capacity for capturing carbon dioxide and increasing the risk of major climatic disturbance, as well as transforming the soil equilibrium and rainfall of the regions affected.98 But whereas the seventeenth and eighteenth centuries saw a major deforestation in Western Europe (continuing until 1920 in the United States), the twentieth century, and especially the period since 1945, brought an increase in forest cover in Western Europe and a quasi-stabilization in the United States. The 5 million square kilometres of forest lost in the twentieth century were therefore lost entirely in the economically poorest countries,99 generating forest and agricultural products consumed largely in Europe and the United States, which in this same period have improved the ecological quality of their own territories.