That such a diet makes people sick and fat we have known for a long time. Early in the twentieth century, an intrepid group of doctors and medical workers stationed overseas observed that wherever in the world people gave up their traditional way of eating and adopted the Western diet, there soon followed a predictable series of chronic diseases, including obesity, diabetes, cardiovascular disease, and cancer. They called these the Western diseases and, though the precise causal mechanisms were (and remain) uncertain, these observers had little doubt that the chronic diseases shared a common etiology: the Western diet.
What’s more, the traditional diets that the new Western foods displaced were strikingly diverse: Various populations thrived on diets that were what we’d call high fat, low fat, or high carb; all meat or all plant; indeed, there have been traditional diets based on just about any kind of whole food you can imagine. What this suggests is that the human animal is well adapted to a great many different diets. The Western diet, however, is not one of them.
Here, then, is a simple but crucial fact about diet and health, yet, curiously, it is a fact that nutritionism cannot see, probably because it developed in tandem with the industrialization of our food and so takes the Western diet for granted. Nutritionism prefers to tinker with the Western diet, adjusting the various nutrients (lowering the fat, boosting the protein) and fortifying processed foods rather than questioning their value in the first place. Nutritionism is, in a sense, the official ideology of the Western diet and so cannot be expected to raise radical or searching questions about it.
But we can. By gaining a firmer grasp on the nature of the Western diet, by trying to understand it not only physiologically but also historically and ecologically, we can begin to develop a different way of thinking about food that might point a path out of our predicament. In doing so we have two sturdy, and strikingly hopeful, facts to guide us: first, that humans historically have been healthy eating a great many different diets; and second, that, as we’ll see, most of the damage to our food and health caused by the industrialization of our eating can be reversed. Put simply, we can escape the Western diet and its consequences.
This is the burden of the third and last section of In Defense of Food: to propose a couple dozen personal rules of eating that are conducive not only to better health but also to greater pleasure in eating, two goals that turn out to be mutually reinforcing.
These recommendations are a little different from the dietary guidelines you’re probably accustomed to. They are not, for example, narrowly prescriptive. I’m not interested in telling you what to have for dinner. No, these suggestions are more like eating algorithms, mental devices for thinking through our food choices. Because there is no single answer to the question of what to eat, these guidelines will produce as many different menus as there are people using them.
These rules of thumb are also not framed in the vocabulary of nutrition science. This is not because nutrition science has nothing important to teach us (it does, at least when it avoids the pitfalls of reductionism and overconfidence) but because I believe we have as much, if not more, to learn about eating from history and culture and tradition. In all matters having to do with health, we are accustomed to assuming that science should have the last word, but in the case of eating, other sources of knowledge and ways of knowing can be just as powerful, sometimes more so. And while I inevitably rely on science (even reductionist science) in attempting to understand many questions about food and health, one of my aims in this book is to show the limitations of a strictly scientific understanding of something as richly complex and multifaceted as food. Science has much of value to teach us about food, and perhaps someday scientists will “solve” the problem of diet, creating the nutritionally optimal meal in a pill, but for now and the foreseeable future, letting the scientists decide the menu would be a mistake. They simply do not know enough.
You may well, and rightly, wonder: Who am I to tell you how to eat? Here I am advising you to reject the advice of science and industry, and then blithely going on to offer advice of my own. So on whose authority do I purport to speak? I speak mainly on the authority of tradition and common sense. Most of what we need to know about how to eat we already know, or once did, until we allowed the nutrition experts and the advertisers to shake our confidence in common sense, tradition, the testimony of our senses, and the wisdom of our mothers and grandmothers.
Not that we had much choice in the matter. By the 1960s or so it had become all but impossible to sustain traditional ways of eating in the face of the industrialization of our food. If you wanted to eat produce grown without synthetic chemicals or meat raised on pasture without pharmaceuticals, you were out of luck. The supermarket had become the only place to buy food, and real food was rapidly disappearing from its shelves, to be replaced by the modern cornucopia of highly processed foodlike products. And because so many of these novelties deliberately lied to our senses with fake sweeteners and flavorings, we could no longer rely on taste or smell to know what we were eating.
Most of my suggestions come down to strategies for escaping the Western diet, but before the resurgence of farmers’ markets, the rise of the organic movement, and the renaissance of local agriculture now under way across the country, stepping outside the conventional food system simply was not a realistic option for most people. Now it is. We are entering a postindustrial era of food; for the first time in a generation it is possible to leave behind the Western diet without also having to leave behind civilization. And the more eaters who vote with their forks for a different kind of food, the more commonplace and accessible such food will become. Among other things, this book is an eater’s manifesto, an invitation to join the movement that is renovating our food system in the name of health: health in the very broadest sense of that word.
I doubt the last third of this book could have been written forty years ago, if only because there would have been no way to eat the way I propose without going back to the land and growing all your own food. It would have been the manifesto of a crackpot. There was really only one kind of food on the national menu, and that was whatever industry and nutritionism happened to be serving. Not anymore. Eaters have real choices now, and those choices have real consequences, for our health and the health of the land and the health of our food culture, all of which, as we will see, are inextricably linked. That anyone should need to write a book advising people to “eat food” could be taken as a measure of our alienation and confusion. Or we can choose to see it in a more positive light and count ourselves fortunate indeed that there is once again real food for us to eat.
Part I - THE AGE OF NUTRITIONISM
ONE - FROM FOODS TO NUTRIENTS
If you spent any time at all in a supermarket in the 1980s, you might have noticed something peculiar going on. The food was gradually disappearing from the shelves. Not literally vanishing; I’m not talking about Soviet-style shortages. No, the shelves and refrigerated cases still groaned with packages and boxes and bags of various edibles, more of them landing every year in fact, but a great many of the traditional supermarket foods were steadily being replaced by “nutrients,” which are not the same thing. Where once the familiar names of recognizable comestibles (things like eggs or breakfast cereals or snack foods) claimed pride of place on the brightly colored packages crowding the aisles, now new, scientific-sounding terms like “cholesterol” and “fiber” and “saturated fat” began rising to large-type prominence. More important than the mere foods themselves, the presence or absence of these invisible substances was now generally believed to confer health benefits on their eaters. The implicit message was that foods, by comparison, were coarse, old-fashioned, and decidedly unscientific things; who could say what was in them, really? But nutrients, those chemical compounds and minerals in foods that scientists have identified as important to our health, gleamed with the promise of scientific certainty. Eat more of the right ones, fewer of the wrong, and you would live longer, avoid chronic diseases, and lose weight.
Nutrients themselves had been around, as a concept and a set of words, since early in the nineteenth century. That was when William Prout, an English doctor and chemist, identified the three principal constituents of food (protein, fat, and carbohydrates) that would come to be known as macronutrients. Building on Prout’s discovery, Justus von Liebig, the great German scientist credited as one of the founders of organic chemistry, added a couple of minerals to the big three and declared that the mystery of animal nutrition, how food turns into flesh and energy, had been solved. This is the very same Liebig who identified the macronutrients in soil: nitrogen, phosphorus, and potassium, known to farmers and gardeners by their chemical symbols, N, P, and K. Liebig claimed that all that plants need to live and grow are these three chemicals, period. As with the plant, so with the person: In 1842, Liebig proposed a theory of metabolism that explained life strictly in terms of a small handful of chemical nutrients, without recourse to metaphysical forces such as “vitalism.”
Having cracked the mystery of human nutrition, Liebig went on to develop a meat extract (Liebig’s Extractum Carnis, which has come down to us as bouillon) and to concoct the first baby formula, consisting of cow’s milk, wheat flour, malted flour, and potassium bicarbonate.
Liebig, the father of modern nutritional science, had driven food into a corner and forced it to yield its chemical secrets. But the post-Liebig consensus that science now pretty much knew what was going on in food didn’t last long. Doctors began to notice that many of the babies fed exclusively on Liebig’s formula failed to thrive. (Not surprising, given that his preparation lacked any vitamins, as well as several essential fats and amino acids.) That Liebig might have overlooked a few little things in food also began to occur to doctors who observed that sailors on long ocean voyages often got sick, even when they had adequate supplies of protein, carbohydrates, and fat. Clearly the chemists were missing something: some essential ingredients present in the fresh plant foods (like oranges and potatoes) that miraculously cured the sailors. This observation led, early in the twentieth century, to the discovery of the first set of micronutrients, which the Polish biochemist Casimir Funk, harkening back to older vitalist ideas of food, christened “vitamines” in 1912 (“vita-” for life and “-amines” for organic compounds organized around nitrogen).
Vitamins did a lot for the prestige of nutritional science. These special molecules, which were first isolated from foods and later synthesized in the laboratory, could cure people of nutritional deficiencies such as scurvy or beriberi almost overnight, in a convincing demonstration of reductive chemistry’s power. Beginning in the 1920s, vitamins enjoyed a vogue among the middle class, a group not notably afflicted by beriberi or scurvy, as the belief took hold that these magic molecules also promoted growth in children, long life in adults, and, in a phrase of the time, “positive health” in everyone. (And what would “negative health” be, exactly?) Vitamins had brought a kind of glamour to the science of nutrition, and though certain elite segments of the population now began to eat by its expert lights, it really wasn’t until late in the twentieth century that nutrients began to push food aside in the popular imagination of what it means to eat.
No single event marked the shift from eating food to eating nutrients, although in retrospect a little-noticed political dustup in Washington in 1977 seems to have helped propel American culture down this unfortunate and dimly lighted path. Responding to reports of an alarming increase in chronic diseases linked to diet (including heart disease, cancer, obesity, and diabetes), the Senate Select Committee on Nutrition and Human Needs, chaired by South Dakota Senator George McGovern, held hearings on the problem. The committee had been formed in 1968 with a mandate to eliminate malnutrition, and its work had led to the establishment of several important food-assistance programs. Endeavoring now to resolve the question of diet and chronic disease in the general population represented a certain amount of mission creep, but all in a good cause to which no one could possibly object.
After taking two days of testimony on diet and killer diseases, the committee’s staff, composed not of scientists or doctors but of lawyers and (ahem) journalists, set to work preparing what it had every reason to assume would be an uncontroversial document called Dietary Goals for the United States. The committee had learned that while rates of coronary heart disease had soared in America since World War II, certain other cultures that consumed traditional diets based mostly on plants had strikingly low rates of chronic disease. Epidemiologists had also observed that in America during the war years, when meat and dairy products were strictly rationed, the rate of heart disease had temporarily plummeted, only to leap upward once the war was over.
Beginning in the 1950s, a growing body of scientific opinion held that the consumption of fat and dietary cholesterol, much of which came from meat and dairy products, was responsible for rising rates of heart disease during the twentieth century. The “lipid hypothesis,” as it was called, had already been embraced by the American Heart Association, which in 1961 had begun recommending a “prudent diet” low in saturated fat and cholesterol from animal products. True, actual proof for the lipid hypothesis was remarkably thin in 1977; it was still very much a hypothesis, but one well on its way to general acceptance.
In January 1977, the committee issued a fairly straightforward set of dietary guidelines, calling on Americans to cut down on their consumption of red meat and dairy products. Within weeks a firestorm of criticism, emanating chiefly from the red meat and dairy industries, engulfed the committee, and Senator McGovern (who had a great many cattle ranchers among his South Dakota constituents) was forced to beat a retreat. The committee’s recommendations were hastily rewritten. Plain talk about actual foodstuffs (the committee had advised Americans to “reduce consumption of meat”) was replaced by artful compromise: “choose meats, poultry, and fish that will reduce saturated fat intake.”