The Locavore's Dilemma
Commenting on Malthus’s dire predictions, the British economist Kenneth Smith observed in 1952 that the clergyman’s previsions regarding the fate of the English and Welsh populations (he considered both regions full and on the verge of collapse with a population of approximately 10 million individuals; roughly six times as many people live in England and Wales today) had failed to materialize “because the development of overseas territories opened up an enormous source of additional food.” He added that individuals “are not compelled to subsist on the supplies grown in the area where they live; in a trading community it is only the total which must suffice” and that just as “townspeople live on the products of the countryside, so do industrialized nations draw their supplies from more primitive countries which specialize in the production of raw materials and foodstuff.” Englishmen and Welshmen were thus “able to draw on the whole of the world.”41
Although Smith’s core economic insight was eminently sensible, his insistence that “primitive” regions should specialize in the production of a particular foodstuff has long been challenged on the grounds that a sudden drop in demand for a local specialty item (whether because of new competitors, changing consumer tastes, or the development of better substitutes) or an epidemic disease of massive proportions will rapidly make it impossible for laid-off workers to purchase the food imports on which they have come to rely. The concern is valid, but the problem is not unique to agriculture. In a time of rapid technological change, all of us invest in personal skills that are likely to become obsolete during our lifetimes. Similarly, as the economist Alfred Marshall observed more than a century ago, “a[n industrial] district which is dependent chiefly on one industry is liable to extreme depression, in case of a falling-off in the demand for its produce, or of a failure in the supply of the raw material which it uses.”42 Fisheries can collapse. The remains of once prosperous mining communities litter the American landscape. The real issue is therefore not whether a poorly diversified economic base is undesirable, but rather whether specialized agricultural regions should revert to greater self-sufficiency and subsistence economies to prevent economic downturns. The answer is an unequivocal no, but again, one needs to look at the bigger picture to appreciate why.
First, while virtually all commercial products and economic skills will eventually sink into obsolescence, they are still worth producing as long as there is a market for them. Phonographs, vinyl disks, and tape cassettes are now at best collectors’ items, but they created ample employment in earlier times by providing consumers with a greater range of musical experiences than would have been the case in their absence. Should investors have refrained from putting their capital into this line of work? Should employees have refrained from acquiring once valuable skills? Obviously not. Similarly, profitable monocultures should be pursued as long as they remain viable in a particular location. If they are suddenly no longer worth pursuing, alternative crops can often be grown in their place. In the 19th century, coffee production in what is now Sri Lanka was wiped out by a fungal infection, yet the local economy nonetheless forged ahead as tea, rubber (through rubber tree plantations), and coconut production proved profitable.43 At the turn of the 20th century, Pierce’s disease was one of the factors that doomed the wine-making industry of Southern California (the other being the poor quality of the product), but citrus fruits better suited to the local climate more than made up for the loss.
But, critics readily object, what about people being thrown out of work now when no viable substitutes have been found? The short answer is that agricultural workers are ultimately no different from the former employees of horse-drawn carriage manufacturers. Old jobs need to be terminated so that resources can be redeployed to better uses, which will in turn create more and better jobs. Besides, skills developed in one context can often be used in another. Well then, activists typically add, what about the fate of local communities? Shouldn’t people have a right to live where they want and where they belong? As we see things, humans gave up that “right” the moment they left Africa a very long time ago. True, having to leave one’s rural community in search of better opportunities elsewhere might not be everybody’s wish, but it sure beats the traditional “starving in fresh air” fate of subsistence farmers. Besides, the fact that humans are not plants and can escape from droughts, floods, and other natural and economic calamities should be viewed as a blessing, not a curse. It could also be pointed out that the impact of the food price spike of 2008 was somewhat softened by record remittances dispatched by migrants to their countries of origin, which totaled close to $340 billion, a 40% increase from the $240 billion sent in 2007.44 In the end, too, abandoned agricultural lands quickly revert to a more “natural” state, a fact that should please environmental activists.
Like financial investors, producers in a monoculture region can reduce the risk of economic collapse through the diversification of their “economic portfolio.” More than a century ago, Alfred Marshall observed that the economic meltdown of mono-industrial districts could “in a great measure [be] avoided by those large towns or large industrial districts in which several distinct industries are strongly developed.” Regional diversification, however, doesn’t imply giving up on specialization, but rather developing multiple profitable specializations. Unfortunately, mainstream economists have long been in the habit of discussing the benefits of the geographical division of labor using the simple model of two countries, each only able to manufacture two different commodities. Basic economic reasoning then leads to the conclusion that each region or nation should specialize in the production of only one good, but this result is for all intents and purposes built into their unrealistic assumptions. Why they persist in using this example is something we have never quite understood, given the realities of vast geographical entities made up of diverse landscapes and millions of people with different abilities. What ultimately matters is the fact that individuals with different aptitudes and interests living in specific places specialize and trade with other individuals, in the process profitably concentrating on all kinds of endeavors and making abstract “entities” such as cities, regions, and nations more rather than less diverse over time.45
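To see why the complete-specialization conclusion is built into the assumptions rather than discovered by the model, here is a deliberately stripped-down Ricardian sketch; the countries and numbers are purely illustrative and not drawn from any source cited in this book. Suppose country A needs 1 hour of labor per unit of wheat and 2 hours per unit of cloth, while country B needs 3 hours and 1 hour respectively (below, a denotes hours of labor per unit of output). The opportunity costs of wheat are then

\[
\frac{a^{A}_{\text{wheat}}}{a^{A}_{\text{cloth}}} = \frac{1}{2}\ \text{unit of cloth in country A}
\qquad\text{versus}\qquad
\frac{a^{B}_{\text{wheat}}}{a^{B}_{\text{cloth}}} = 3\ \text{units of cloth in country B.}
\]

With only two goods and constant unit costs, any world price of wheat strictly between 1/2 and 3 units of cloth makes it optimal for A to produce nothing but wheat and for B to produce nothing but cloth. Complete specialization is thus a corner solution dictated by the two-good, constant-cost setup itself, not a description of how regions comprising millions of differently skilled people and varied landscapes actually allocate their efforts.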
Many diverse cities are found throughout the American Corn Belt, yet corn producers in this area remain highly specialized. Should things go wrong with corn farming, local producers could find other lines of employment in their region, although this might entail a long commute or relocation to another city or town. The key to improving food security in the long run is to ensure that as many resources as possible are invested in developing the profitable activities of tomorrow rather than squandered in a vain attempt to cling to the industries of yesterday. As long as new lines of work are developed and people are free to move, the fate of agricultural workers in declining monoculture regions and towns will be positive by any historical standard and certainly better than in a world shaped by locavore ideals.
One more way to convey this point is to look at the circumstances of the inhabitants of regions that were once agriculturally diversified and regularly subjected to hunger and famines, but which later became large-scale monocultures and practically famine-proof. Because almost all monoculture regions in advanced economies would qualify, we will limit ourselves to the case of a few square miles in the so-called “American Bottom,” the approximately 175-square-mile Mississippi flood plain east of Saint Louis in southern Illinois. This area, once the home of the largest Native American settlement north of what is now Mexico, includes a six-square-mile complex of man-made earthen mounds, the Cahokia Mounds, the largest earthwork built during pre-Columbian times.
At its peak in the 13th century, Cahokia might have had a population of as many as 40,000 people (a high-end estimate that would have made it larger than London at the time), a figure that would only be exceeded in the United States by Philadelphia at the turn of the 19th century. As with all sizeable urban settlements in history, evidence has been found of goods that were brought in from long distances, in this case from as far away as the Gulf Coast (shells), Lake Superior (copper), Oklahoma (chert, a flintlike rock), and the Carolinas (mica). The local inhabitants grew corn, squash, goosefoot, amaranth, canary grass, and other starchy seeds, beans, gourds, pumpkins, and sunflowers, which they supplemented with wild fruits and nuts, fish, waterfowl, and a few game animals. Despite corn storage in granaries, though, the Cahokians were subjected to recurring hunger and famine.46 By contrast, today, the main mounds are part of the city of Collinsville, Illinois, which is not only the home of the largest ketchup bottle in the world, but also the self-described “world’s horseradish capital.” Even if America’s (and the world’s) fondness for horseradish were somehow to fade away, the area’s agricultural labor force could find work elsewhere or help local horseradish producers switch to other crops. As such, they are much more food secure than the ancient inhabitants of the site.
As the above cases illustrate, monocultures can only be a serious threat to food security in the absence of broader economic development, scientific and technological advances, trade, and labor mobility. The Irish potato famine, the standard case used by opponents of monocultures, is also a more telling illustration in this respect than most imagine. Without getting into too many details of a complex and still controversial story,47 a key feature of the Irish famine of the 1840s is that it was not the result of a uniquely “Irish” disease, but rather of a problem that was then rampant in North America and Continental Europe. A little-appreciated feature of the Irish economy at the time is that it was home to a thriving export-oriented food sector that had long shipped out goods such as dairy products, grains, livestock, fish, and potatoes to the rest of Europe and various American colonies.48 Not surprisingly, the best lands were devoted to the most lucrative products, while a rudimentary form of potato cultivation was concentrated on less fertile soils and had displaced oats as the main staple of the poor because it delivered much higher yields. Potatoes, rich in vitamin C, also proved to be a fitting complement to then-abundant dairy products rich in vitamins A and D. As a result, despite many local and partial potato harvest failures and significant famines in 1799 and 1816, the Irish population grew faster than that of any other European country, increasing from around 2 million people in 1750 to about 8.2 million people in 1845.
The downside of this demographic boom, however, was that on the eve of the great famine about a third of the population depended on potatoes for most of their food intake, and that relatively few potato varieties had been introduced from the Americas. As serious disease problems began to emerge in Western Europe and North America in the late 18th century, new South American varieties were introduced in an attempt to increase resistance. Unfortunately, they probably brought in, or heightened vulnerability to, the so-called late blight of potato, caused by the oomycete (a fungus-like microorganism) Phytophthora infestans, which attacks both tubers and foliage.
The disease that would forever be associated with Ireland actually first showed up in central Mexico and then reached Pennsylvania in 1843, from which it swept across an area stretching from Illinois and Virginia to Nova Scotia and Ontario over the next three years. It probably entered Belgium in 1845 through a shipment of American seed potatoes and soon ravaged potato fields all the way to Russia. As if things were not dire enough, below-average wheat and rye crops also plagued Europe at the time, giving rise to the moniker “the Hungry Forties.” In Ireland, the disease destroyed a third of the potato crop in 1845 and most of the harvest in 1846 and 1848. The resulting loss of foodstuff was of such magnitude (approximately 20 million tons for human consumption alone) that banning ongoing Irish grain and livestock exports, a measure requested by many at the time, would have made up for at most one-seventh of it. Nearly a million people died in total, the majority from hunger-related diseases such as typhus, dysentery, and typhoid rather than from outright starvation. Another notable fact is that the areas that specialized in livestock and cereal production were largely unaffected by the famine.
In Continental Europe, the potato blight resulted in perhaps 100,000 deaths while, to our knowledge, no specific death toll was recorded in North America. Although the European number was large, it was only a small fraction of Ireland’s, both in absolute and in proportional terms, despite the fact that many poor Western Europeans were also heavily dependent on potatoes for their sustenance. Much evidence suggests that the key difference between Ireland and Western Europe at the time was that the latter was by then offering more employment and food options to its inhabitants, such as artisanal or cottage production, selling in local markets, or part-time work in various industries. Many relatively poor Europeans were thus able to purchase other food commodities that were by then out of reach of the most impoverished segments of the Irish population. Many individuals whose nutritional intake was also heavily dependent on potatoes moved permanently to industrializing areas, for example from the Scottish Highlands to the Scottish Lowlands. By and large, however, the potato blight did not result in massive emigration from Western Europe. In New England, farmers gave up on potatoes (often the only crop that grew in their poor soil), culled their cattle and swine for want of feed, and moved away to rich grain (wheat and corn) lands further west or else found manufacturing or other employment in the then rapidly developing American industrial belt.
Apart from massive death and emigration, the Irish famine had at least two significant consequences. One was that it put an end to quasi-subsistence domestic food provisioning among the poorer classes of Irish people, a welcome development inasmuch as it ensured that serious failures of the potato harvests in the early 1860s and late 1870s did not increase mortality even though they did cause local hardship. The other was that it accelerated efforts to develop more productive and disease-resistant varieties that, together with the development of fungicides, laid the foundation for massive potato production in both Europe and North America in subsequent decades. Potato monocultures are found in many regions of the world today (being essentially water and extremely cheap, potatoes are not traded in large volumes between continents) and their producers are still struggling with a number of diseases and pests, but a repeat of the Irish tragedy is unthinkable in our globalized world.
Locavorism and Military Security
Writing in the year following the end of the First World War, the American geographer Joseph Russell Smith observed that two generations of Americans and Europeans had become so used to an abundant food supply that they no longer considered the possibilities of famine nor understood “the troubles of the past, nor as yet the vital problems of the present.” Dependence on world trade, he argued, had in the end given modern man “the independence of a bird in a cage, no more.” “The world market is excellent,” Russell Smith added, “when it is well supplied.” In wartime, however, the places where food is produced “determined the lives of nations.”49
This experience profoundly shaped European food politics in later years. In Italy, the fascist dictator Benito Mussolini launched a “Battle for Grain” in 1925 that, through high tariffs, farm subsidies of various kinds, “local content” milling requirements, newer seeds, and technical education, was supposed to free Italy from the “slavery” of food imports. In practice, however, his policy came at the cost of converting much of the Italian landscape from profitable export crops such as fresh produce, citrus fruits, and olives to grain production, resulting in a more monotonous and costlier diet for Italian consumers. In the words of the historian Denis Mack Smith, the battle for grain was ultimately won “at the expense of the Italian economy in general and consumers in particular.”50 Meanwhile, in Germany, national socialist ideology promoted both agricultural autarky, or self-sufficiency, and Lebensraum, the vital space supposedly required in Eastern Europe, from which “inferior” races were to be cleared and food produced to supply the German Fatherland.51 We all know how this one ended. The leaders of the Soviet Union also pursued agricultural autarky until 1973, when a severe domestic grain shortfall forced them to open up to food imports from their main competitor for world influence and domination.
The appeal of autarky for imperial and totalitarian regimes is easily understood. As the Austrian economist Ludwig von Mises observed several decades ago: “A warlike nation must aim at autarky in order to be independent of foreign trade. It must foster the production of substitutes irrespective of [economic] considerations. It cannot do without full government control of production because the selfishness of the individual citizens would thwart the plans of the leader. Even in peacetime the commander-in-chief must be entrusted with economic dictatorship.”52 Many economists otherwise supportive of trade liberalization have also been willing to make an exception to their stance when national security was thought to be at stake. Perhaps the most famous was Adam Smith, who observed that “defence… is of much more importance than opulence.”53 In short, Smith implied, autarkic policies come at a significant price, but that price pales in comparison to starvation in times of conflict. We will now argue, contra Adam Smith himself, that the “autarky for food security” rationale doesn’t stand up to scrutiny.
First, we currently live in what is undoubtedly the most peaceful time in human history.54 Reverting to autarky and, in the process, making life more difficult in countries not well endowed with agricultural resources is therefore more likely to create military problems than to prevent them. As the old saying goes, if goods don’t cross borders, armies eventually will. Second, while geopolitics can always take a turn for the worse, nothing prevents a country from stockpiling large quantities of food and agricultural inputs purchased on the international market while ramping up local production if the threat of prolonged conflict becomes real. Third, putting all of one’s food security eggs in a single geographically limited agricultural basket rather than purchasing food from multiple foreign suppliers is antithetical to any notion of spreading risks. Food policy observers are periodically reminded of this reality when protectionist countries experience domestic production problems. For example, Finland, a country with a grain self-sufficiency policy, suffered its wettest crop year in recorded history in 1987. Not only were yields low, but the wet grain that was somehow harvested from muddy fields soon sprouted in storage. The Finnish government then had no choice but to quietly purchase two-thirds of the country’s yearly wheat supply on world markets (and for only half the price it would have had to pay its own farmers).55