The Locavore's Dilemma
Clearly, if we are serious about decreasing the overall environmental impact of food production, food miles are nothing but a misleading distraction. To quote one of the world’s leading authorities on the LCA of agricultural production, Dr. Adrian Williams of the Natural Resources Management Centre at Cranfield University (U.K.), the “concept of food miles is unhelpful and stupid. It doesn’t inform about anything except the distance travelled.”39 A much more constructive approach to minimizing the environmental impact of agriculture would instead focus on further reducing production and postharvest losses and on educating consumers about their food handling practices.
Blame It on the Poor People40
In recent years, about 40% of the U.K.’s air-freighted fresh fruit and vegetable imports have originated in sub-Saharan countries such as South Africa, Ghana, Tanzania, Uganda, Zambia, and Kenya. These goods drew the ire of uncompetitive European producers and activists who claimed that they were the epitome of unsustainable consumption and therefore deserving of retaliatory measures. In the words of Patrick Holden, Director of the U.K. Soil Association (the main U.K. organic certification organization and lobbyist), Britain should aim “to produce most of its organic food domestically and import as little as possible” because there “is a strong demand” for this “from the public and many of our licensees.”41 Yet, as the basic facts surrounding Kenyan products convincingly illustrate, in order to do what is truly best for the environment, one should avoid decisions based on emotional reactions and poorly disguised protectionist rhetoric and instead embrace price signals.
In 2004, Kenya’s exports of vegetables, roots, tubers, and other edible plants totaled $161 million, making it the 27th largest exporting country in this category.42 Kenyan producers also exported $470 million worth of live trees, plants, bulbs, and cut flowers, making the country the 7th largest such exporter in the world at the time. Indeed, Kenyan cut flower exports amounted to $250 million, accounted for about 10% of the agricultural sector’s contribution to the country’s gross domestic product that year, and had a 25% market share in the European Union.43 Not surprisingly, the livelihood of millions of Kenyans has come to depend on those export-based industries.
Because of their light weight, high value, and perishable nature, 91% of the fresh fruits and vegetables exported from Kenya to the U.K. were air freighted,44 adding, for example, an additional 2–18 pence to the cost of each pack of organic Kenyan green beans.45 Intercontinental air freight releases 8 kilograms of carbon dioxide into the atmosphere per kilogram transported, about 200 times the emissions and 12 times the energy of sea transport.46 However, a much larger volume of carbon dioxide is released by U.K. passenger flights each year; indeed, passenger flights are responsible for about 90% of all airline emissions, with cargo accounting for only about 5%. Furthermore, air-freighted imports of fresh fruits and vegetables represent less than 0.1% of total U.K. carbon dioxide emissions. Interestingly, 60 to 80% of Kenyan fresh agricultural products are transported in the cargo hold of passenger flights.47 When most of these flights’ emissions are attributed to the passengers they would carry in any case, the carbon dioxide emissions actually attributable to air-freighted exports are much lower still.
A study from 2007 provided another striking illustration of the impact of environmental differences between production locations. Here the authors contrasted cut flowers grown in Kenya and in the Netherlands and destined for the U.K. market.48 For every 12,000 cut roses produced, Kenyan producers released 6,000 kilograms of carbon dioxide as opposed to 35,000 kilograms for their Dutch competitors (roughly 0.5 kilograms per rose versus nearly 3 kilograms). Overall, Kenyan rose production is said to be much more efficient and environmentally friendly than Dutch production, reflecting, among other things, the fact that 99% of Dutch emissions were caused by heating- and lighting-intensive production systems, whereas Kenyan flower production relies mostly on sunshine. By contrast, 91% of Kenyan emissions were attributed to the 4,000-mile air-freight journey from Kenya to the U.K.
When the food miles controversy over African perishable products reached its peak in early 2007, supporters of Kenyan exporters were quick to point out that the greenhouse gas emissions associated with air-freighted produce were minuscule in comparison with the impact of tourist air travel by citizens of importing nations. They further argued that Kenyan agriculture typically relied on manual labor and organic fertilizers because its producers could not afford sophisticated farm machines or chemical pesticides and fertilizers. As such, the carbon dioxide emissions attributable to the production phase are rather negligible. Another relevant fact is that while carbon dioxide emissions per capita vary widely from country to country, the global average is currently estimated at about 3.6 tons per person per year, with the U.K. average being approximately 9.2 tons, the African average 1.04 tons, and the Kenyan average 0.2 tons.49
Green Cities and Trade
Another environmental consideration that is directly relevant to any discussion of locavorism is urbanization. As we argued in chapter 1, there can be no sustained economic development without cities, and, for at least a few millennia, urbanization has been impossible without significant long-distance trade in food. Locavorism, though, is inherently anti-urban, as it effectively mandates low-density settlements distributed over the arable landscape (in our experience, a number of locavores do not grasp this implication). This vision of small and self-sufficient communities obviously holds much appeal for people who are not fond of the greater densities and sprawling suburbs of metropolitan areas, to say nothing of the associated higher air pollution and noise levels, crime, and failing public schools.
While most people, including those who are not willing to live there, will acknowledge the unique economic and cultural opportunities offered by thriving metropolitan areas, their environmental benefits are less readily understood, especially if one pictures congested, unpleasant, and unhealthy third world shantytowns. And yet, as observed by commentators whose basic argument parallels that of defenders of high-yield farming, thriving cities are not an environmental problem, but rather the best means to lighten humanity’s impact on nature. To quote the applied scientists and policy analysts Peter W. Huber and Mark P. Mills, the skyscraper is “America’s great green gift to the planet” for it “packs more people onto less land, which leaves more wilderness undisturbed in other places, where the people aren’t. . . . The less real estate we occupy for economic gain,” they add, “the more we leave undisturbed as wilderness. And the city, though profligate in its consumption of most everything else, is very frugal with land. The one thing your average New Yorker does not occupy is 40 acres and a mule.”50 In the words of economist Edward L. Glaeser, “residing in a forest might seem to be a good way of showing one’s love of nature, but living in a concrete jungle is actually far more ecologically friendly . . . If you love nature, stay away from it.”51 The journalist David Owen further observed that because spreading people thinly across the landscape would increase environmental damage, “even part-time agricultural self-sufficiency . . . would be an environmental and economic disaster.”52 The basic point made by the likes of Huber, Mills, Glaeser, and Owen is thus that, by virtually any measure, residents of high-density urban areas drive, pollute, consume, and throw away much less than people living in greener surroundings.53 Apart perhaps from self-selected migrants to environmentalist meccas such as Portland, Oregon, or Missoula, Montana, urbanites are not intrinsically greener than rural inhabitants, but when space is at a premium, wastefulness turns out to be prohibitively expensive.
True, growing cities have always been surrounded by lower-density suburbs (suburbium originally referred to the area beyond the walls of Ancient Rome), but these always become denser in good economic times.54 This phenomenon has arguably accelerated in the last few decades with the development of “edge cities” (or suburban downtowns) and row housing and garden apartments in new residential developments located far from older urban centers. Actually, for quite a few years the densest metropolitan area in the United States (including both downtown and suburban areas) has been Los Angeles—and by a fair margin—a result that can be traced back to its numerous high-rise buildings spread out over its territory, high population numbers per individual housing unit, and costly water supply infrastructure. Overall, though, cities, suburbs, roads, and highways cover perhaps less than 5% of the land area of the lower 48 American states, and for several decades, because of high-yield technologies, far more American agricultural land has been reverting to wilderness than has been converted to suburbia. Worldwide, cities occupy approximately 2% of the earth’s surface, an area that should double to 4% in the next half century.55 If the world’s 7 billion individuals were living at a density comparable to New York City, all of them could be housed in Texas.56 Provided that economic growth is strong and local governance reasonably effective, large metropolitan areas will prove to be significant environmental assets rather than liabilities.
Although agriculture will continue to have a major impact on the landscape, what is clear from the available evidence is that the world envisioned by locavores would devote a significantly larger surface area to growing food (and therefore have a much more severe impact on the landscape) than a world where farming is practiced in the most suitable production zones. Moving ever closer to a world dominated by modern agricultural technologies and international trade will not eliminate our impact on the environment, but it will drastically curtail it. Because of increased competitive pressures, food producers will have no choice but to constantly find new and better ways of doing things, including generating economies of scale and relocating their operations or increasing their purchases from businesses located in more suitable areas, in the process sparing nature and increasing output. By contrast, because it would be inherently wasteful in its use of resources, the world of the locavores could only deliver greater environmental damage.
5
Myth #4: Locavorism Increases Food Security
We have had unmistakable warnings, too, in the last few years, that we cannot afford to be dependent for the staples of our food and industry on any single place or production. The potato disease was one of those warnings.
—THOMAS EDWARD CLIFFE LESLIE. 1862.
“The Reclamation of Waste.” The Saturday Review of Politics,
Literature, Science and Art 356 (14) (AUGUST 23): 225
One of the greatest benefits to be expected from the improvements of civilization is that increased facilities of communication will render it possible to transport to places of consumption much valuable material that is now wasted because the price at the nearest market will not pay freight. The cattle slaughtered in South America for their hides would feed millions of the starving population of the Old World, if their flesh could be economically preserved and transported across the ocean.
—GEORGE PERKINS MARSH. 1864.
Man and Nature; or Physical Geography as Modified by Human Action.
Charles Scribner, p. 37
“Food security” has traditionally been defined as providing access at all times to sufficient, safe, and nutritious food that allows people to maintain healthy and active lives.1 This goal long remained elusive, as historically most people were malnourished most of the time and frequently struggled with food shortages and famines.2 These perennial worries have disappeared from the collective memory of the citizens of advanced economies and are now confined to the least developed and most conflict-prone parts of our planet.3 Even wartime tragedies such as the hongerwinter (hunger winter) of 1944–1945, in which at least 20,000 Dutch citizens and one in six babies starved to death, and the starvation of roughly 100,000 individuals in Tokyo in the three months that followed the Japanese surrender in 1945, are now almost completely forgotten.
True, important food challenges have yet to be met. For instance, even if in the aggregate enough food is produced to feed each human being substantially more than the minimum caloric intake required for survival, close to a billion people remain malnourished and approximately nine million reportedly die every year of hunger, malnutrition, and related diseases.4 Significant food price spikes have also made a noticeable comeback in recent years.5 Not surprisingly, locavores blame these problems on unfair international trade and the greed of large multinational corporations. Like the so-called “food sovereignists,” they propose that “people and community” should be given the “ability to sustain themselves” and the right “to define their own agricultural, labor, fishing, food and land policies” without outside interference.6 What these activists envision, however, is not increased individual freedom to patronize either domestic or foreign suppliers, but mandatory government policies to subsidize and protect local producers (with the proviso that their policies do not hurt other countries) regardless of the wishes of individual consumers or taxpayers.7 Moving in this direction, they claim, will not only make the provisioning of vulnerable communities more secure, but also more “culturally appropriate,” “dignified,” respectful of the environment, and “socially just.”8
The case for increased food sovereignty now essentially revolves around three basic “security” claims. First, because local food systems must not only be smaller in scale but also more diversified (after all, you cannot feed a community a healthy diet by producing only a few commodities), they are inherently more resilient to pests of all kinds than large monocultures. Second, the sudden decline in the demand for or collapse in the production of an agricultural commodity in which a community has overspecialized means that it will be left unable to import the nonlocal food on which it has come to depend. Finally, in times of rapidly rising commodity prices, political turmoil, or all-out war, no community can be better served than by relying on itself. In addition to these claims, many locavores further promote their prescription by invoking an impending oil shortage and drastic climate change. Better to anticipate and prepare for the unavoidable, they argue, by accelerating the inexorable transition towards local food systems. We will examine each argument in turn, but before we do so, a brief overview of the history of famines in mostly “self-sufficient” local economies is warranted.
The Third Horseman
The third horseman of the Apocalypse of John (otherwise known as the Book of Revelation) famously carried a pair of weighing scales, which would have been used to weigh bread during hungry times. Like his fellow riders (conquest, war, and death from pestilence), he was until recently a familiar presence in most human societies. As the geographer Brian Murton observes, famines have plagued humankind for at least 6,000 years and have long been used by scholars and chroniclers to “slice up history into manageable portions.”9 While researchers still disagree on how widespread, recurring, and severe prehistoric hunger was, there is a general consensus that, with the invention of agriculture, famines typically resulted from a succession of mediocre harvests rather than from an isolated crop failure. Some could be traced back to human factors such as wars, ethnic and religious persecution, price controls, protectionism, excessive taxation, and lack of respect for private property rights. Others had natural causes, such as unseasonable temperatures, excessive or insufficient rainfall, floods, insect pests, rodents, pathogens, soil degradation, and epidemics that made farmers or their beasts of burden unfit for work.10 In many cases, a number of these factors were involved.
To give a glimpse of past horrors and calamities, suffice it to say that during the Hundred Years War in Medieval Western Europe (1337–1453), a combination of crop failures, epidemics, and warfare is thought to have reduced the population by two-thirds. China suffered an average of perhaps 90 famines per century over the last two thousand years. Between 1333 and 1337, approximately six million Chinese died of starvation, whereas perhaps as many as forty-five million perished in the first half of the 19th century. In the 1920s, Chinese peasants “recalled an average of three crop failures during their lifetimes that were serious enough to cause famines. These famines lasted on average about ten months, and they led up to a quarter of the affected population to eat grasses and strip bark from trees” while forcing “one in every seven people to leave their hungry villages in search of food.”11 More recently, at least forty-three million people are now thought to have died during the famine of 1959–1962 as a direct result of the “Great Leap Forward” policies of Mao Zedong, making it the single largest famine of all time.12
Political and individual strategies for coping with famines have always been similar the world over. In the absence of charitable giving and emigration opportunities, authorities could call upon heavenly assistance, impose price controls and seize private reserves, lower import tariffs, expel strangers, identify and make an example out of scapegoats and “profiteers,” and dispatch envoys to find additional supplies. Among private citizens, wealthier individuals could reduce discretionary spending and tap into or stop accumulating savings in order to purchase increasingly scarce and expensive food, while poorer people were more likely to stretch available resources by temporarily lowering their food intake. Whenever possible, too, individuals would borrow money to purchase food. Much harder times would compel the consumption of seed grains, farm animals, and famine foods, including grass, leaves, bark, clay, and dirt; the selling or mortgaging of familial assets, from clothes and furniture to animals, land, and children (once a famine had receded, children were sometimes sold into slavery so that parents could buy back their land13); reducing the number of mouths to feed through infanticide or senilicide; and cannibalism. In most places the burden of food shortages historically fell disproportionately on the shoulders of the elderly, pregnant and lactating women, and poor and landless people. In many cultures, too, the survival of boys was given priority over that of girls.