by Bee Wilson
4
MEASURE
Count what is countable, measure what is measurable,
and what is not measurable, make measurable.
GALILEO GALILEI, 1610
Don’t ask me to count the hundreds and thousands.
NIGELLA LAWSON
FANNIE MERRITT FARMER WAS A COOK WHO HATED sloppiness in the kitchen. She had no truck with a handful of this and a pinch of that, preferring to deal in fixed and level measures. Her magnum opus, The Boston Cooking School Cookbook (1896), was the best-selling American recipe book of the early twentieth century; by 1915, it had sold more than 360,000 copies. Much of its appeal came down to its insistence—so comfortingly scientific—on using correct and accurate measurements in cooking. “A cupful,” wrote Farmer, “is measured level . . . A tablespoonful is measured level. A teaspoonful is measured level.” Farmer, a stout red-haired woman, used the same words in her cookery demonstrations, always employing a case knife to level off the top of the measure. No stray grains of flour would be allowed to intrude on Farmer’s pastry. Her nickname was the “Mother of Level Measurements.”
Farmer believed she was leading America into a new era of accuracy in the kitchen. Gone were the dark ages of guesswork. “Correct measurements are absolutely necessary to ensure the best results,” she wrote. Measuring is a way of imposing order on a chaotic universe. Fannie Farmer was not just teaching her middle-class readers how to cook; she was offering them a feeling of absolute control in the domain of the kitchen. It is odd, then, that Farmer should have chosen a method of measuring—the cup system—that is so erratic, ambiguous, and prone to wildly fluctuating results.
The cup system quantifies all ingredients, whether wet or dry, fluffy or dense, by using measuring cups of a certain volume: 236.59 ml, to be precise. Because the system measures volume rather than weight, it is sometimes referred to as “volumetric.” Cup measures are still nearly universally used in American cookbooks, and therefore in American kitchens, even though there are frequent complaints that measuring by weight using scales would be far easier and more accurate. Through a strange quirk of history, the United States is the only country in the world that measures food like this. Cooks in Australia and New Zealand dip in and out of the cup measuring system, and Europeans generally use volume to measure liquids, but only in the United States is this very specific unit of volume seen as the default way to measure all ingredients—animal, vegetable, or mineral—and this is due in large part to the lingering legacy of Fannie Farmer.
Fast-forward to the present day when, one summer’s evening, I am attempting to cook from one of Fannie Farmer’s supposedly infallible recipes. It sounds simple enough: String Bean Salad.
Marinate two cups cold string beans with French Dressing. Add one teaspoon finely cut chives. Pile in center of salad dish and arrange around base thin slices of radishes overlapping one another. Garnish top with radish cut to represent a tulip.
Have you ever tried to cram finely cut chives into a teaspoon and level the top with a knife? Don’t. The chives fall everywhere. It would make far more sense just to snip them directly into the dish; a few more or less wouldn’t matter. As for measuring out two cups of “cold string beans,” it is a joke. The beans poke about at angles, making the task impossible. To get two perfect level cupfuls of cold green beans, you would have to crush them so much that the salad would be ruined. The recipe is also notable for the important things it doesn’t tell us: how much French dressing? How long do you cook the string beans before you make them “cold”? How do you trim them? And where do you lay your hands on a radish “cut to represent a tulip,” because I’m certainly not making one (“begin at root end and make six incisions through skin running three-fourths length of radish,” instructs Farmer, dauntingly). There is a lot more to a recipe than measurement. Equally, however, no recipe has ever measured all the possible variations of cooking. With her faith in cups, Fannie Farmer thought she had measuring all sewn up. But the truth is that it never is.
Such is the story of measuring in the kitchen. Good cooking is a precise chemical undertaking. The difference between a truly great dinner and an indifferent one might be thirty seconds and one-quarter of a teaspoonful of salt. Recipes are an attempt to make dishes reproducible. In science, reproducibility is the ability of an experiment to be accurately re-created by an independent researcher. This is exactly the quality we seek in a recipe: your recipe for apple pie should ideally taste the same when I make it at home in my own kitchen. But cooks work in conditions with far more extraneous variables than any scientist would allow: unreliable oven temperatures, changeable raw ingredients, not to mention cooking for audiences with different tastes. The cook who gets too hung up on measuring for its own sake can lose the meal for the measuring cups. Focusing on an exact formula can make you forget that the best measure any cook has is personal judgment.
It’s also worth remembering that tools of measurement in the kitchen can be judged by more than one criterion. The first is accuracy: whether your measurement corresponds to a fixed value. Is the pitcher you are using to measure a liter of milk really a liter? The second is precision: the refinement of your measure. Could you measure out milk to within half a milliliter? The third is consistency (scientists would say reproducibility): an ability to measure out the same liter of milk over and over again. The fourth is convertibility: the extent to which a measure fits understandably into a wider system of weight and volume, and whether the tool and the units you use for measuring milk could be used to measure other things, too. The fifth may be the most important. It is ease (or user-friendliness): an ability to measure a liter of milk without any great ceremony, resources, or skill. Judged by this final criterion, one of the greatest measuring tools is the modest Pyrex measuring cup. As well as having nice clear graphics, showing both metric and imperial measures, the Pyrex cup, made from heatproof glass first patented in 1915, has a pourable spout, can withstand both freezers and microwaves, and has an invaluable ability to bounce when dropped, so long as the kitchen floor is not too hard.
All cooking entails measuring, even when it is only the spontaneous calculations of the senses. Your eyes tell you when sautéing onions are translucent enough. Your ears know when popcorn has finished popping. Your nose tells you when toast is about to burn. The cook is constantly making estimates and decisions based on those calculations. Volume and time, temperature and weight: these are the variables every cook has to navigate. But attempts to measure these things with greater accuracy through superior technology have not always led to better cooking. A fixation on formulas in the kitchen can become counterproductive. No technology has yet supplanted the measuring capabilities of a good cook, blessed with a sharp nose, keen eyes, asbestos hands, and many years at a hot stove, whose senses appraise food more tellingly than any artificial tool.
“Of all the things that identify us as Americans and set us off from other peoples,” wrote the great food critic Ray Sokolov in 1989, “the least ambiguous—and the one most seldom noticed—is the measuring cup.” Sokolov noted that nowhere else but in the United States does an “entire nation habitually and almost exclusively measure dry ingredients with a cup.” The rest of the world measures flour (at least most of the time) by weight.
Scales assume many guises, but the principle is always the same: to measure weight. To this end, a French cook might use a balance beam with a shallow tray in it, the kind used elsewhere to weigh newborn babies. In Denmark, the kitchen scale may be an unobtrusive circle attached to a wall, looking rather like a clock, a tray cleverly flipping down to reveal a dial. The English are still fond of old-fashioned Queen scales—the classic mechanical balance scale made from heavy cast iron, with a brass dish on one side and a set of graduated weights on the other. Or maybe that’s just me. Come to think of it, when friends see my kitchen, they often exclaim over the scales as if they were a museum piece, sometimes asking whether I actually use these antiquated weights. Yes! Every day! Although not, I must admit, when accurate weighing is critical, because then—obviously—I measure digitally.
Cooks everywhere in the developed world now use digital scales. These are one of the best tools in the modern kitchen, offering great accuracy and precision for little money. On scales that have a zero function, you can even weigh ingredients straight into a mixing bowl, setting the scale back to zero after you put the bowl on the scales. This saves on cleanup and is especially useful for things like syrup and honey, because there is no need to scrape them messily from the scales to the bowl.
But some of the older methods of weighing worked remarkably well, too (albeit with a larger margin of error). If you are German and of a traditional disposition, it is possible that you have a seesaw balance with a cup for ingredients at one end and a counterweight at the other, with weights printed on the beam. You slide the beam until it balances perfectly, then check the weight marked on the beam. This is a technology—if flimsier in construction—identical to a metal steelyard balance found at Pompeii, dated 79 AD.
The science of weighing things has been largely solved for 2,000 years or more. The oldest Chinese balance goes back to the fourth century BC: the classic design of two pans suspended from a pole. This is not to say that many people could afford their own set of scales. At the start, scales were used for weighing precious things like gold; they crept into the kitchen only centuries later. They were certainly there by the time of Apicius, author of the first ancient Roman “cookbook,” who talks of the weights of not just dry ingredients (“6 scruples of lovage”) but wet ones, too (“1 pound of broth”). So the technology of weighing ingredients has been established for a very long time. And it works better now than ever. Most digital kitchen scales can weigh an ingredient accurately to within a single gram. The wonderful thing about weighing is that you do not have to worry about density: 100 g of brown sugar is still 100 g, whether it is tightly packed or fluffy. All that matters is the weight, which is constant. It’s like the old joke: which weighs more, a pound of gold or a pound of feathers? Of course, they both weigh the same. A pound is a pound is a pound.
By contrast, the American volumetric cup measurement system, at least as applied to dry ingredients, can be maddeningly imprecise. A cup of something is not just a cup. Experiments have shown that a cup of flour may vary in weight from 4 to 6 ounces, just by changing the degree to which the flour is sifted and airy or tamped down. This makes the difference between a cake that works and one that doesn’t; between batter that is far too thick and batter that is watery and thin. Let’s assume that the recipe writer wants you to use a “cup” of flour that corresponds to 4 ounces, but you instead measure one that comes in at 6 ounces. You will end up with one and a half times the flour required: a huge imbalance.
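To put a number on that imbalance, here is a minimal sketch in Python; the 4- and 6-ounce figures come from the experiments mentioned above, and the variable names are mine, not part of any recipe.

```python
# A cup of flour can weigh anywhere from 4 oz (sifted and airy)
# to 6 oz (tamped down), per the experiments described above.
intended_oz = 4.0   # the weight the recipe writer had in mind
measured_oz = 6.0   # the weight a heavy-handed scoop delivers

ratio = measured_oz / intended_oz
print(f"Flour actually used: {ratio:.1f}x the intended amount")
# -> Flour actually used: 1.5x the intended amount
```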
The problem with using volume to measure solid materials is that of compression and expansion. Under normal conditions, and assuming that it is not freezing or boiling, the density of water is fixed; you cannot squash it smaller. Flour, by contrast, may be compressed tightly into the cup or fluffed up with lots of air. Some recipes attempt to get around this by stipulating that the flour must be sifted before it is measured, and some even go into detail about exactly how much sifting should take place; but this is still no guarantee of accuracy, because flours vary so much. The sifting also adds a labor-intensive extra step to the recipe. The cook dances around with sieves and spoons, fluffing and packing and heaping and sifting, all to achieve less accuracy than a pair of scales could provide in seconds.
Moving beyond flour to other substances, cup measurement can be more maddening still. It is one thing to measure grains such as rice and couscous and porridge oats in a cup—indeed, this is probably the best way, because you want to be able to gauge the ratio of grains to cooking water by volume; the absolute quantity is less important. The ratio for porridge and most types of rice is 1:1.5, solid to liquid; couscous is 1:1. There’s a certain satisfaction in pouring a measure of couscous into a measuring cup, then pouring it out and trying to get the water or stock up to the same level. You are retracing your own steps. It is something else altogether to try to measure out 5 cups of cubed zucchini (the equivalent of a pound) or 10 cups of bite-sized lettuce (again, a pound). How do you manage the chopping? Do you cut the vegetables piece by piece, adding each one to the measuring cup as you go along, or do you do it all in one go, risking chopping too much? Do you tamp the cubed vegetables down into the cup, or do you assume that the cookbook author has allowed for the spaces in between? Or do you fling the cookbook on the floor in fury at being asked to perform such an absurd task?
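Returning to the grain-to-liquid ratios mentioned above (1:1.5 for porridge and most rice, 1:1 for couscous), a small sketch makes the arithmetic concrete; the function and its names are my own illustration, not any standard.

```python
# Grain-to-liquid ratios by volume, as given above. Which cup you
# use does not matter; only the ratio does.
RATIOS = {
    "rice": 1.5,      # 1 measure rice : 1.5 measures water
    "porridge": 1.5,  # porridge oats behave like most rice
    "couscous": 1.0,  # 1 measure couscous : 1 measure liquid
}

def liquid_for(grain: str, measures_of_grain: float) -> float:
    """Measures of cooking liquid for a given volume of grain."""
    return measures_of_grain * RATIOS[grain]

print(liquid_for("couscous", 1.0))  # 1.0
print(liquid_for("rice", 2.0))      # 3.0
```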
America’s attachment to cups really is odd (and indeed there are finally small hints of rebellion against it, such as a New York Times article from 2011 making a “plea on behalf of the kitchen scale”). In countless ways, America feels like a more rational place than Europe. Most American city streets are laid out in orderly numbered grids, not piled up higgledy-piggledy as in London or Rome. Then there is the dollar, in use since 1792, an eminently reasonable system of currency. When it came to money, America established a usable system much sooner than Europe (with the exception of France). In the mid-twentieth century, the process of buying a cup of coffee in Rome using Italian lira was an exercise in advanced mathematics; it was not much better buying a pot of tea in London, as the British clung to the messy system of pounds, shillings, and pence. Meanwhile, Americans strolled to the grocery store and easily counted out their decimal cents, dimes, and dollars. Likewise, American phone numbers are neatly standardized in a ten-digit formula. An American friend describes the method, or lack of it, governing British phone numbers as “a baffling hodgepodge.” So why, then, when it comes to cooking, do Americans throw reason out of the window and insist on measuring cups?
American cup measures can only be understood in the context of the history of weights and measures. Viewed historically, an absence of clear standards in measures has been the rule rather than the exception. Moreover, cup measures belonged to a wider system of measurement, within which they made considerably more sense than they do today. Our present confusion has its roots in medieval England.
“A pint’s a pound the world around,” goes the old saying; and so it was at one time. In the Anglo-Saxon period, the “Winchester measure” was established in England, Winchester being the capital city then. This system created an equivalence between the weight of food and its volume, which would have been an obvious way to create units of volume where none had existed before.
Think about how tricky it would be to establish the exact capacity of a vessel if you didn’t have a measuring vessel. How could you say how much water a given glass held? You could pour it out into another glass and compare the level between the two. But then how would you know how much the second glass held? The exercise quickly becomes nightmarish. It was much easier to establish given capacities by using the volume of certain known, weighed substances. A “Winchester bushel” was defined as the volume of 64 pounds of wheat (which was relatively constant, wheat grains being less variable in density than flour). A bushel was made up of 4 pecks. A peck was 2 gallons. A gallon was 4 quarts. And a quart was 2 pints. The upshot of all this was a pleasing fact: the Winchester bushel came in at 64 pounds (of wheat) and also 64 pints (of water). A pint really was a pound. Neat.
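The chain of units multiplies out exactly as the saying promises; a quick sketch (names mine) confirms it.

```python
# The Winchester chain of units, as laid out above.
pecks_per_bushel = 4
gallons_per_peck = 2
quarts_per_gallon = 4
pints_per_quart = 2

pints_per_bushel = (pecks_per_bushel * gallons_per_peck
                    * quarts_per_gallon * pints_per_quart)
print(pints_per_bushel)  # 64 -- matching the 64 pounds of wheat
```

Sixty-four pints of water, sixty-four pounds of wheat: the two scales of measurement meet in the bushel.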
If only these Winchester measures had been the sole standard for volume, all would have been well. But in medieval England, numerous competing gallons came into use for different substances. As well as the Winchester gallon (also known as the corn gallon), there was the wine gallon and the ale gallon, all representing different amounts. The ale gallon was bigger than the wine gallon (around 4.62 liters as opposed to 3.79 liters), as if reflecting the fact that ale is usually drunk in bigger quantities than wine. It’s all too easy to succumb to this kind of unhinged logic when thinking about how to measure things. It’s like Nigel, the rock star in the film This Is Spinal Tap, who believes that to make music louder you need to create an amp that goes up to eleven instead of ten.
The lack of standardized weights and measures was a problem for customers wanting to receive their due (a pint of ale varied hugely from county to county) but also for the state, because it affected the duty charged on goods. The Magna Carta of 1215 attempted to address the lack of uniformity: “Let there be one measure of wine throughout our whole realm; and one measure of ale; and one measure of corn.” This didn’t work; competing measures continued to proliferate. Between 1066 and the end of the seventeenth century, there were more than twelve different gallons, some assigned to solids and some to liquids.
By the late eighteenth century, there were various moves to escape the anarchy of the medieval measuring system. In the 1790s, after the French Revolution, the metric system began to be established in France. The meter was based on the findings of an expedition of scientists to measure the earth’s meridian: a meter was supposed to be one ten-millionth of the distance from the North Pole to the equator, though due to a tiny miscalculation it is actually a little smaller. But the principle was now set that the French would measure in tens. In 1795, the new measures were decreed in the law of 18 Germinal: liters, grams, meters. Sweeping away the old jumble of archaic standards was meant to demonstrate how modern France had become; how rational; how scientific; and how commercial. Everything, from street systems to pats of butter, was subdivided into perfect tens. The revolutionaries even experimented with a ten-day week—the “décade.” Thanks to this new measuring system, life was now logical. You breakfasted on bread measured in grams; you drank your coffee in milliliters; you paid for it in decimal francs and sous.