An Edible History of Humanity

by Tom Standage


  The ability to synthesize ammonia, combined with new “high-yield” seed varieties specifically bred to respond well to chemical fertilizers, removed this constraint and paved the way for an unprecedented expansion in the human population, from 1.6 billion to 6 billion, during the course of the twentieth century. The introduction of chemical fertilizers and high-yield seed varieties into the developing world, starting in the 1960s, is known today as the “green revolution.” Without fertilizer to nourish crops and provide more food—increasing the food supply sevenfold, as the population grew by a factor of 3.7—hundreds of millions of people would have faced malnutrition or starvation, and history might have unfolded very differently.

  The green revolution has had far-reaching consequences. As well as causing a population boom, it helped to lift hundreds of millions of people out of poverty and underpinned the historic resurgence of the Asian economies and the rapid industrialization of China and India—developments that are transforming geopolitics. But the green revolution’s many other social and environmental side effects have made it hugely controversial. Its critics contend that it has caused massive environmental damage, destroyed traditional farming practices, increased inequality, and made farmers dependent on expensive seeds and chemicals provided by Western companies. Doubts have also been expressed about the long-term sustainability of chemically intensive farming. But for better or worse, there is no question that the green revolution did more than just transform the world’s food supply in the second half of the twentieth century; it transformed the world.

  THE MYSTERY OF NITROGEN

  The origins of the green revolution lie in the nineteenth century, when scientists first came to appreciate the crucial role of nitrogen in plant nutrition. Nitrogen is the main ingredient of air, making up 78 percent of the atmosphere by volume; the rest is mostly oxygen (21 percent), plus small amounts of argon and carbon dioxide. Nitrogen was first identified in the 1770s by scientists investigating the properties of air. They found that nitrogen gas was mostly unreactive and that animals placed in an all-nitrogen atmosphere suffocated. Yet having learned to identify nitrogen, the scientists also discovered that it was abundant in both plants and animals and evidently had an important role in sustaining life. In 1836 Jean-Baptiste Boussingault, a French chemist who took a particular interest in the chemical foundations of agriculture, measured the nitrogen content of dozens of substances, including common food crops, various forms of manure, dried blood, bones, and fish waste. He showed in a series of experiments that the effectiveness of different forms of fertilizer was directly related to their nitrogen content. This was odd, given that atmospheric nitrogen was so unreactive. There had to be some mechanism that transformed nonreactive nitrogen in the atmosphere into a reactive form that could be exploited by plants.

  Some scientists suggested that lightning created this reactive nitrogen by breaking apart the stable nitrogen molecules in the air; others speculated that there might be trace quantities of ammonia, the simplest possible compound of nitrogen, in the atmosphere. Still others believed that plants were somehow absorbing nitrogen from the air directly. Boussingault took sterilized sand that contained no nitrogen at all, grew clover in it, and found that nitrogen was then present in the sand. This suggested that legumes such as clover could somehow capture (or “fix”) nitrogen from the atmosphere directly. Further experiments followed, and eventually in 1885 another French chemist, Marcelin Berthelot, demonstrated that uncultivated soil was also capable of fixing nitrogen, but that the soil lost this ability if it was sterilized. This suggested that nitrogen fixation was a property of something in the soil. But if that was the case, why were leguminous plants also capable of fixing nitrogen?

  The mystery was solved by two German scientists, Hermann Hellriegel and Hermann Wilfarth, the following year. If nitrogen-fixing was a property of the soil, they reasoned, it should be transferable. They put pea plants (another kind of legume) in sterilized soil, and they added fertile soil to some of the pots. The pea plants in the sterile soil withered, but those to which fertile soil had been added flourished. Cereal crops, however, did not respond to the application of soil in the same way, though they did respond strongly to nitrate compounds. The two Hermanns concluded that the nitrogen-fixing was being done by microbes in the soil and that the lumps, or nodules, that are found on the roots of legumes were sites where some of these microbes took up residence and then fixed nitrogen for use by the plant. In other words, the microbes and the legumes had a cooperative, or symbiotic, relationship. (Since then, scientists have discovered nitrogen-fixing microbes that are symbiotic with freshwater ferns and supply valuable nitrogen in Asian paddy fields; and nitrogen-fixing microbes that live in sugarcane, explaining how it can be harvested for many years from the same plot of land without the use of fertilizer.)

  Nitrogen’s crucial role as a plant nutrient had been explained. Plants need nitrogen, and certain microbes in the soil can capture it from the atmosphere and make it available to them. In addition, legumes can draw upon a second source of nitrogen, namely that fixed by microbes accommodated in their root nodules. All this explained how long-established agricultural practices, known to maintain or replenish soil fertility, really worked. Leaving land fallow for a year or two, for example, gives the microbes in the soil a chance to replenish the nitrogen. Farmers can also replenish soil nitrogen by recycling various forms of organic waste (including crop residues, animal manures, canal mud, and human excrement), all of which contain small amounts of reactive nitrogen, or by growing leguminous plants such as peas, beans, lentils, or clover.

  These techniques had been independently discovered by farmers all over the world, thousands of years earlier. Peas and lentils were being grown alongside wheat and barley in the Near East almost from the dawn of agriculture. Beans and peas were rotated with wheat, millet, and rice in China. In India, lentils, peas, and chickpeas were rotated with wheat and rice; in the New World, beans were interleaved with maize. Sometimes the leguminous plants were simply plowed back into the soil. Farmers did not know why any of this worked, but they knew that it did. In the third century B.C., Theophrastus, the Greek philosopher and botanist, noted that “the bean best reinvigorates the ground” and that “the people of Macedonia and Thessaly turn over the ground when it is in flower.” Similarly, Cato the Elder, a Roman writer of the second century B.C., was aware of beneficial effects of leguminous crops on soil fertility, and he advised that they should “be planted not so much for the immediate return as with a view to the year later.” Columella, a Roman writer of the first century A.D., advocated the use of peas, chickpeas, lentils, and other legumes in this way. And the “Chhi Min Yao Shu,” a Chinese work, recommended the cultivation and plowing-in of adzuki beans, in a passage that seems to date from the first century B.C. Farmers did not realize it at the time, but growing legumes is a far more efficient way to enrich the soil than the application of manure, which contains relatively little nitrogen (typically 1 to 2 percent by weight).

  The unraveling of the role of nitrogen in plant nutrition coincided with the realization, in the mid-nineteenth century, of the imminent need to improve crop yields. Between 1850 and 1900 the population in western Europe and North America grew from around three hundred million to five hundred million, and to keep pace with this growth, food production was increased by placing more land under cultivation on America’s Great Plains, in Canada, on the Russian steppes, and in Argentina. This raised the output of wheat and maize, but there was a limit to how far the process could go. By the early twentieth century there was little remaining scope for placing more land under cultivation, so to increase the food supply it would be necessary to get more food per unit area—in other words, to increase yields. Given the link between plant growth and the availability of nitrogen, one obvious way to do this was to increase the supply of nitrogen. Producing more manure from animals would not work, because animals need food, which in turn requires land. Sowing leguminous plants to enrich the soil, meanwhile, means that the land cannot be used to grow anything else during that time. So, starting as early as the 1840s, there was growing interest in new, external sources of nitrogen fertilizer.

  Solidified bird excrement from tropical islands, known as guano, had been used as fertilizer on the west coast of South America for centuries. Analysis showed that it had a nitrogen content thirty times higher than that of manure. During the 1850s, imports of guano went from zero to two hundred thousand tons a year in Britain, and shipments to the United States averaged seventy-six thousand tons a year. The Guano Islands Act, passed in 1856, allowed American citizens to take possession of any uninhabited islands or rocks containing guano deposits, provided they were not within the jurisdiction of any other government. As guano mania took hold, entrepreneurs scoured the seas looking for new sources of this valuable new material. But by the early 1870s it was clear that the guano supply was being rapidly depleted. (“This material, though once a name to conjure with, has now not much more than an academic interest, owing to the rapid exhaustion of supplies,” observed the Encyclopaedia Britannica in 1911.) Instead, the focus shifted to another source of nitrogen: the huge deposits of sodium nitrate that had been discovered in Chile. Exports boomed, and in 1879 the War of the Pacific broke out between Chile, Peru, and Bolivia over the ownership of a contested nitrate-rich region in the Atacama Desert. (Chile prevailed in 1883, depriving Bolivia of its coastal province, so that it has been a landlocked country ever since.)

  Even when the fighting was over, however, concerns remained over the long-term security of supply. One forecast, made in 1903, predicted that nitrate supplies would run out by 1938. It was wrong—there were in fact more than three hundred years of supply, given the consumption rate at the time—but many people believed it. And by this time sodium nitrate was in demand not only as a fertilizer, but also to make explosives, in which reactive nitrogen is a vital ingredient. Countries realized that their ability to wage war, as well as their ability to feed their populations, was becoming dependent on a reliable supply of reactive nitrogen. Most worried of all was Germany. It was the largest importer of Chilean nitrate at the beginning of the twentieth century, and its geography made it vulnerable to a naval blockade. So it was in Germany that the most intensive efforts were made to find new sources of reactive nitrogen.

  One approach was to derive it from coal, which contains a small amount of nitrogen left over from the biomass from which it originally formed. Heating coal in the absence of oxygen causes the nitrogen to be released in the form of ammonia. But the amount involved is tiny, and efforts to increase it made little difference. Another approach was to simulate lightning and use high voltages to generate sparks that would turn nitrogen in the air into more reactive nitric oxide. This worked, but it was highly energy-intensive and was therefore dependent on the availability of cheap electricity (such as excess power from hydroelectric dams). So imported Chilean nitrate remained Germany’s main source of nitrogen. Britain was in a similarly difficult situation. Like Germany, it was also a big importer of nitrates, and was doing its best to extract ammonia from coal. Despite efforts to increase agricultural production, both countries relied on imported wheat.

  In a speech at the annual conference of the British Association for the Advancement of Science in 1898, William Crookes, an English chemist and the president of the association, highlighted the obvious solution to the problem. A century after Thomas Malthus had made the same point, he warned that “civilised nations stand in deadly peril of not having enough to eat.” With no more land available, and with concern growing over Britain’s dependence on wheat imports, there was no alternative but to find a way to increase yields. “Wheat preeminently demands nitrogen,” Crookes observed. But there was no scope to increase the use of manure or leguminous plants; the supply of fertilizer from coal was inadequate; and by relying on Chilean nitrate, he observed, “we are drawing on the Earth’s capital, and our drafts will not perpetually be honoured.” But there was an abundance of nitrogen in the air, he pointed out—if only a way could be found to get at it. “The fixation of nitrogen is vital to the progress of civilised humanity,” he declared. “It is the chemist who must come to the rescue . . . it is through the laboratory that starvation may ultimately be turned into plenty.”

  A PRODUCTIVE DISPUTE

  In 1904 Fritz Haber, a thirty-six-year-old experimental chemist at the Technische Hochschule in Karlsruhe, was asked to carry out some research on behalf of a chemical company in Vienna. His task was to determine whether ammonia could be directly synthesized from its constituent elements, hydrogen and nitrogen. The results of previous experiments had been unclear, and many people thought direct synthesis was impossible. Haber himself was skeptical, and he replied that the standard way to make ammonia, from coal, was known to work and was the easiest approach. But he decided to go ahead with the research anyway. His initial experiments showed that nitrogen and hydrogen could indeed be coaxed into forming ammonia at high temperature (around 1,000 degrees Centigrade, or 1,832 degrees Fahrenheit) in the presence of an iron catalyst. But the proportion of the gases that combined was very small: between 0.005 percent and 0.0125 percent. So although Haber had resolved the question of whether direct synthesis was possible, he also seemed to have shown that the answer had no practical use.
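  The direct synthesis Haber was asked to investigate combines nitrogen and hydrogen in a single reversible reaction:

$$\mathrm{N_2} + 3\,\mathrm{H_2} \rightleftharpoons 2\,\mathrm{NH_3}$$

  The reaction releases heat and converts four molecules of gas into two, so the equilibrium favors ammonia at low temperatures and high pressures. At the high temperature Haber needed to make the reaction proceed at all, the equilibrium lies almost entirely on the left, which is why so little ammonia formed.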

  Fritz Haber.

  And there things might have rested, had it not been for Walther Hermann Nernst, another German chemist, who was professor of physical chemistry at Göttingen. Although he was only four years older than Haber, Nernst was a more eminent figure who had made contributions in a number of fields. He had invented a new kind of light bulb, based on a ceramic filament, and an electric piano with guitar-style pickups, though neither was a commercial success. Nernst was best known for having proposed a “heat theorem” (now known as the third law of thermodynamics) in 1906 that would win him the Nobel Prize in Chemistry in 1920. This theorem could be used to predict all sorts of results, including the proportion of ammonia that should have been produced by Haber’s experiment. The problem was that Nernst’s prediction was 0.0045 percent, which was below the range of possible values determined by Haber. This was the only anomalous result of any significance that disagreed with Nernst’s theory, so Nernst wrote to Haber to point out the discrepancy. Haber performed his original experiment again, obtaining a more precise answer: This time around the proportion of ammonia produced was 0.0048 percent. Most people would have regarded that as acceptably close to Nernst’s predicted figure, but for some reason Nernst did not. When Haber presented his new results at a conference in Hamburg in 1907, Nernst publicly disputed them, suggested that Haber’s experimental method was flawed, and called upon Haber to withdraw both his old and new results.

  Haber was greatly distressed by this public rebuke from a more senior scientist, and he suffered from digestive and skin problems as a result. He decided that the only way to restore his reputation was to perform a new set of experiments to resolve the matter. But during the course of these experiments he and his assistant, Robert Le Rossignol, discovered that the ammonia yield could be dramatically increased by performing the reaction at a higher pressure, but a lower temperature, than they had used in their original experiment. Indeed, they calculated that increasing the pressure to 200 times atmospheric pressure, and dropping the temperature to 600 degrees Centigrade (1,112 degrees Fahrenheit), ought to produce an ammonia yield of 8 percent—which would be commercially useful. The dispute with Nernst seemed trivial by comparison and was swiftly forgotten, and Haber and Le Rossignol began building a new apparatus that would, they hoped, produce useful amounts of ammonia. At its center was a pressurized tube just 75 centimeters tall and 13 centimeters in diameter, surrounded by pumps, pressure gauges, and condensers. Haber refined his apparatus and then invited representatives of BASF, a chemical company that was by this time funding his work, to come and see it in operation.

  The crucial demonstration took place on July 2, 1909, in the presence of two employees from BASF, Alwin Mittasch and Julius Kranz. During the morning a mishap with one of the bolts of the high-pressure equipment delayed the proceedings for a few hours. But in the late afternoon the apparatus began operating at 200 atmospheres and about 500 degrees Centigrade, and it produced an ammonia yield of 10 percent. Mittasch pressed Haber’s hand in excitement as the colorless drops of liquid ammonia began to flow. By the end of the day the machine had produced 100 cubic centimeters of ammonia. A jubilant Haber wrote to BASF the next day: “Yesterday we began operating the large ammonia apparatus with gas circulation in the presence of Dr. Mittasch and were able to keep its production uninterrupted for about five hours. During this whole time it functioned correctly and it continuously produced liquid ammonia. Because of the lateness of the hour, and as we all were tired, we stopped the production because nothing new could be learned from continuing the experiment.”

 
