
Still the Iron Age


by Vaclav Smil


  Figure 1.1 Charcoal making in early seventeenth-century England depicted in John Evelyn’s Sylva (1664).

  This meant that even when working with high-grade iron ores medieval bloomeries would commonly consume no less than 10 and up to 20 units of charcoal per unit of hot metal. For comparison, by 1800 a typical ratio was down to 8:1, a century later the best practices required just 1.2:1, and late nineteenth-century Swedish furnaces needed just 0.77 kg of charcoal for every kg of hot metal (Campbell, 1907; Greenwood, 1907). An average charcoal:wood ratio of 1:4 meant that (with respective energy densities of 30 and 18 MJ/kg) the traditional charcoaling process lost about 60% of the input energy. Charcoal’s energy density of 30 MJ/kg is almost exactly twice that of air-dry wood and 20–30% higher than that of good bituminous coals. Free-burning charcoal could reach temperatures of 900°C, and with forced air supply (initially by using hollow wooden or bamboo tubes, later by providing a more powerful blast with waterwheel-powered bellows) its combustion could deliver nearly 2000°C, far higher than the melting points of common metals (lead melts at 327°C, copper at 1083°C, and iron at 1535°C).
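  The conversion loss implied by these figures is easy to verify; this minimal sketch uses only the ratios and energy densities quoted above:

```python
# Energy lost in traditional charcoaling, using the figures quoted above:
# about 4 kg of wood (18 MJ/kg) yielded 1 kg of charcoal (30 MJ/kg).
ED_CHARCOAL = 30.0  # MJ/kg
ED_WOOD = 18.0      # MJ/kg, air-dry wood

def charcoaling_loss(wood_per_charcoal: float = 4.0) -> float:
    """Fraction of the wood's chemical energy lost in charcoal making."""
    energy_in = wood_per_charcoal * ED_WOOD  # MJ of wood per kg of charcoal
    return 1.0 - ED_CHARCOAL / energy_in

print(f"{charcoaling_loss():.0%}")  # ~58%, i.e., "about 60%"
```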

  Although charcoal can be made from reeds and plant stalks, such sources would be entirely inadequate even for small blast furnaces, whose operation depended on cutting down natural forests or on establishing and harvesting coppiced tree plantations. Production of charcoal (there was no difference in preparing the fuel for metallurgical and other uses) is described in many old and modern sources (Fell, 1908; Greenpeace, 2013; Uhlmann & Heinrich, 1987). English coppicing allowed 15–25 years of growth between harvests. Cut wood for charcoaling was arranged concentrically in circular piles; these were covered by coarse grass and rushes and a thin layer of fine marl or clay. Charcoaling lasted a full day, and after the pile cooled the cover was raked off and the charcoal was gathered for transportation to the nearest furnaces.

  Production of charcoal for small bloomeries was not a constraint on metal production in forested regions or in areas where coppicing of relatively fast-growing trees provided an adequate wood supply, but centuries of metal smelting in semiarid and arid regions of Europe and China were a major contributing factor to deforestation. The scale of charcoal demand changed with the diffusion of blast furnaces. Although all medieval and early modern blast furnaces remained fairly small (maximum height of just above 5 m and square cross-sections creating internal volumes of less than 15 m³), their campaigns (periods of constant operation) lasted for many months and, particularly in regions where they were clustered, they required unprecedented amounts of charcoal. As a result, in many regions demand for charcoal was the leading cause of deforestation, and one of the best documented examples of this distress was Sussex, now among the most affluent English counties.

  By the middle of the sixteenth century, every iron mill in the Weald (the area between the North and the South Downs in southeastern England) consumed every year at least 1500 loads of wood for metallurgical charcoal, and the county’s inhabitants petitioned the king, fearing how many towns were likely to decay if the smelting continued, and how the growing scarcity and cost of wood might affect their ability to build houses, water mills and windmills, bridges, ships, and a long list of other items (from wagon wheels to barrels and bowls, which were indispensable in what was still largely a wooden society) (Straker, 1969). In 1548, the King’s commission was set up to examine the impact of the Wealden iron industry, but that did nothing to stop the smelting: as King (2005) showed, by 1590 the Weald’s charcoal furnaces produced 3.5 times as much iron as they did in 1548, but that was their peak output, followed by a steady decline; by 1660 the Weald made less iron than in 1550 as the smelting shifted to other parts of the kingdom.

  During the closing decades of the seventeenth century, English iron-making campaigns lasted for 8 months (between October and May, not only due to water flow restrictions but also due to the need for annual repairs of furnace linings), and with a conservative mean of 8 kg of charcoal per kilogram of pig iron and 5 kg of wood per kg of charcoal, a typical furnace producing 300 t of metal would consume some 12,000 t of wood (Hyde, 1977). That wood was harvested mostly from coppiced hardwood trees cut every 10–20 years and producing 5 t/ha; a single furnace would have thus required about 2400 ha of coppiced hardwoods (a square measuring roughly 5 × 5 km) for its annual campaign.
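  The furnace arithmetic above can be reproduced directly; in this sketch the function name is mine, and the default parameters simply restate the text’s figures:

```python
import math

def annual_furnace_demand(pig_iron_t: float = 300.0,
                          charcoal_per_iron: float = 8.0,
                          wood_per_charcoal: float = 5.0,
                          coppice_yield_t_ha: float = 5.0):
    """Wood demand and coppice area for one annual smelting campaign."""
    wood_t = pig_iron_t * charcoal_per_iron * wood_per_charcoal  # 12,000 t
    area_ha = wood_t / coppice_yield_t_ha                        # 2,400 ha
    side_km = math.sqrt(area_ha * 1.0e4) / 1.0e3                 # ~4.9 km square
    return wood_t, area_ha, side_km

wood_t, area_ha, side_km = annual_furnace_demand()
```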

  Efficiency of converting cast iron to wrought (bar) iron was low, with at least a third of the metal wasted in the process, and the best available information indicates that at least 32 t of wood were needed to produce a tonne of bar iron (Hammersley, 1973). Consequently, a blast furnace and an adjacent bar forge would have consumed no less than about 10,000 t of wood a year—and in the early seventeenth century, with lower smelting and forging efficiencies, it could have been twice as much. With the wood coming from coppiced tree growth harvested in 20-year rotations and yielding 5 m³/ha, that would imply (given a wood density of 0.75 g/cm³) about 3.75 t/ha, and hence a late seventeenth-century blast furnace and forge would have annually claimed wood from about 2700 ha. In 1700, British output of about 12,000 t of bar iron required roughly 400,000 t of charcoaling wood, or at least 100,000 ha of coppiced trees, a square with sides of nearly 32 km (Smil, 1994).
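  Scaling the same logic up to the national output confirms these totals; again a sketch restating the text’s numbers (5 m³/ha of coppice growth at a wood density of 0.75 t/m³):

```python
import math

WOOD_DENSITY = 0.75  # t/m3 (= g/cm3), as given in the text

def national_demand(bar_iron_t: float = 12_000.0,
                    wood_per_bar_iron_t: float = 32.0,
                    coppice_m3_ha: float = 5.0):
    """Wood and coppiced area behind Britain's bar-iron output in 1700."""
    wood_t = bar_iron_t * wood_per_bar_iron_t       # 384,000 t ("roughly 400,000")
    yield_t_ha = coppice_m3_ha * WOOD_DENSITY       # 3.75 t/ha
    area_ha = wood_t / yield_t_ha                   # ~102,000 ha
    side_km = math.sqrt(area_ha * 1.0e4) / 1.0e3    # ~32 km square
    return wood_t, area_ha, side_km
```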

  As the numbers of European blast furnaces increased steadily during the sixteenth and seventeenth centuries, two improvements became common: taller stacks and better bellows (whose sides were made of bull hides fastened to wooden tops and bottoms, with double bellows actuated by cams on waterwheel axles) (Fell, 1908). Smelting and forging of iron became gradually less wasteful and specific charcoal consumption kept on declining (Harris, 1988), but even so charcoal use exerted another important limitation on the output of blast furnaces: charcoal’s relatively low compressive strength limited furnace height and hence maximum capacity; moreover, the maximum blast power was restricted by the capacity of waterwheels, as well as by their low seasonal performance. Both of these limits were overcome almost at the same time during the latter half of the eighteenth century, the first by substituting coke for charcoal, the most important innovation since the beginning of iron smelting, and the second by the deployment of more efficient steam engines (see the next chapter for details).

  Premodern Steel

  As already noted, Japanese tatara offered a direct route to steel when operated in kera-oshi mode. The furnaces were charged with masa iron sand (with low titanium content and a low melting point) which was added first, covered with charcoal, and subjected to 3 days of smelting during which more masa was added to produce larger pieces of kera. Typical output of 2.8 t of kera per heat (and also 0.8 t of pig iron) from 13 t of iron masa and about 13 t of charcoal represents only 28% metal yield. The best chunks of the produced steel—tama-hagane or jewel steel, adding up to usually less than 1 t per heat—were used to make the famous nihonto, Japanese swords. Limited output of the directly produced steel and very expensive production of steel objects had largely restricted the use of that high-quality alloy in premodern Japan to a few categories of weapons.
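  The quoted yield follows from counting both the kera and the pig iron against the charged iron sand; a one-line check using the figures above:

```python
def tatara_yield(kera_t: float = 2.8,
                 pig_iron_t: float = 0.8,
                 masa_t: float = 13.0) -> float:
    """Metal yield of a kera-oshi tatara heat, per the figures in the text."""
    return (kera_t + pig_iron_t) / masa_t

print(f"{tatara_yield():.0%}")  # ~28%
```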

  Although simple unalloyed iron was usually the only product of European bloomery smelting, on some occasions (with higher temperatures and longer smelting periods) the process produced small unagglomerated fragments (gromps) of high-carbon steel, and gromps were sometimes present even as components of agglomerated blooms. This minuscule steel production was simply an accidental coincidence of localized high temperature and the right reducing conditions, not a desired outcome of bloomery smelting. By the beginning of the seventeenth century, the most common route to high-quality steel in preindustrial Europe was the conversion of wrought iron through carburization (cementation) that boosted its very low carbon content.

  European carburization involved prolonged heating of the metal (Swedish low-phosphorus iron was a preferred choice in England) with charcoal in stone chests. Without any subsequent forging, the gradual inward diffusion of carbon into wrought iron produced a thin steel layer enveloping the core of softer iron. Producing steel by carburization of iron can impart as much as 2% of carbon by solid-state diffusion, but it is necessarily a very laborious process: it might take about 50 h to enrich the metal to a depth of 4 mm at 925°C (Godfrey & van Nie, 2004). This technique produced material perfectly suited for plowshares or for body armor. Repeated forging of carburized metal resulted in a fairly even distribution of the absorbed carbon and produced steel suited for swords.
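  The slow timescale of this solid-state diffusion can be checked against the standard parabolic case-depth rule; in this sketch the Arrhenius parameters for carbon diffusion in austenite are textbook assumptions, not values from the source, so the result is only an order-of-magnitude comparison with the ~4 mm in 50 h cited above:

```python
import math

R = 8.314       # J/(mol*K), gas constant
D0 = 2.0e-5     # m^2/s, pre-exponential factor for C in austenite (assumed)
Q = 142_000.0   # J/mol, activation energy (assumed)

def case_depth_m(hours: float, temp_c: float) -> float:
    """Approximate carburized depth x ~ 2*sqrt(D*t) after `hours` at `temp_c`."""
    d = D0 * math.exp(-Q / (R * (temp_c + 273.15)))  # diffusivity, m^2/s
    return 2.0 * math.sqrt(d * hours * 3600.0)

depth_mm = case_depth_m(50, 925) * 1e3  # a few mm, same order as the cited 4 mm
```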

  Carburization in crucibles was also the only way to make high-quality Indian steel, but its high cost limited the metal’s use to expensive, high-prestige objects, above all to swords. The best of these became known as Damascus steel blades, distinguished by their beautiful surface pattern of light-etched swirls on a nearly black background. Manufacturing of these blades reached the highest quality between the sixteenth and eighteenth centuries, and the technique then became virtually extinct during the nineteenth century. We now know that the metal was never produced in Syria but that it was imported from India in the form known in the West as wootz steel, a garbled transcription of urukku or urukke, the words for steel in several South Indian languages (Biswas, 1994). The quality of this metal has been admired for 2 millennia and its production and properties have fascinated metallurgists and historians for more than 2 centuries (Egerton, 1896; Feuerbach, 2006; Figiel, 1991; Mushet, 1804; Verhoeven, 1987, 2001; Voysey, 1832).

  This ancient Indian method of steelmaking, with the most likely origin in Tamil Nadu, was described in great detail by Richard Burton in The Book of the Sword:

  About a pound weight of malleable iron, made from magnetic ore, is placed, minutely broken and moistened, in a crucible of refractory clay, together with finely chopped pieces of wood Cassia auriculata. It is packed without flux. The open pots are then covered with the green leaves of the Asclepias gigantea or the Convolvulus lanifolius, and the tops are coated over with wet clay, which is sun-dried to hardness. Charcoal will not do as a substitute for the green twigs. Some two dozen of these cupels or crucibles are disposed archways at the bottom of a furnace, whose blast is managed with bellows of bullock’s hide. The fuel is composed mostly of charcoal and of sun-dried brattis or cow-chips. After two or three hours' smelting the cooled crucibles are broken up, when the regulus appears in the shape and size of half an egg (Burton, 1884, p. 111).

  After removal from the crucibles, small ingots were shaped into bars for trading, and although a single heat yielded just enough steel for two sword blades, wootz production in some Indian regions (Lahore, Amritsar, Agra, Jaipur, Mysore, Malabar, Golconda) was carried out on an almost industrial scale so that it could supply a steady stream of metal for exports to Persia and the Turkish Empire. Metallurgically, wootz is correctly classified as a hypereutectoid ferrocarbon alloy with spheroidized carbides and a carbon content of 1.2–1.8%. Relatively high carbon and phosphorus contents make wootz steel brittle in soft condition, but well-made swords are ductile and not prone to breaking in battle, although they may bend on impact (Perttula, 2004). Studies of actual sword blades confirmed the material’s superplasticity and high impact hardness, although the latter quality was inferior compared to modern steel: the most often cited examination of Damascene swords showed Brinell hardness of 171–264, compared to 313–473 for Solingen steel (Zschokke, 1924).

  In order to attain its fine grain and plasticity, wootz steel must be forged in a narrow range of 850–950°C, well below the white heat of 1200°C that would make the metal brittle (Biswas, 1994). About 50 cycles of forging may be needed to form the final blade shape from a wootz ingot. Modern metallurgical analyses and reconstructed wootz steel production demonstrated that the metal’s characteristic band formation resulted from microsegregation of small amounts of carbide-forming trace elements, above all vanadium, molybdenum, and manganese (Verhoeven, Pendray, & Dauksch, 1998; Verhoeven, 2001). Forging results in tiny (6–9 μm in diameter) particles of cementite (iron carbide, Fe3C) concentrated in clustered bands spaced 30–70 μm apart, and after etching the surface with acid they show as delicate, irregularly wavy white lines within the dark steel matrix. Ladder-like undulations and rose patterns displayed in some swords were created by cutting grooves and drilling holes into the surface of a blade and then forging it into its final form.

  Grazzi et al. (2011) used a noninvasive analysis (thermal neutron diffraction) to compare ancient and historic steels from Japan, India, and Europe, and their findings confirmed the known compositional differences caused by specific production processes. European metal (from a crossbow, arrow, pole arm, and gun) required the least amount of fuel to smelt and is dominated by ferrite (92.8–97.8%) with no observed cementite or martensite; very expensive, high-carbon (>1% C) Indian objects (sword, shield, and knives) contained 70–85% ferrite and 9–27% cementite (but no martensite); and the composition of Japanese swords gave intermediate values (and they occupied a similar intermediate position in terms of fuel demand): much like the European objects they are dominated by ferrite (roughly 91–96%) but with a significant presence of cementite (about 1.5–4.5%) and traces of martensite (0.3–0.9%).

  In contrast to European and Indian practices, traditional Chinese metallurgists, able to produce carbon-rich liquid iron, faced the opposite challenge in steel production: they had to lower the high carbon content through decarburization. Ever since the eventual worldwide adoption of blast furnaces producing liquid iron, all modern steelmaking processes (first Bessemer converters, then open-hearth furnaces, now basic oxygen furnaces) have been relatively rapid and highly effective forms of that process. The two methods of Chinese decarburization were blowing oxidizing blasts of cold air over cast iron (known as the hundred refinings) and what Needham (1964) called co-fusion, the immersion of wrought iron in molten cast iron, practiced at least since the middle of the first millennium of the common era.

  We do not know how this method was eventually mastered in Europe (by transfer from China or by independent discovery) but Biringuccio’s (1540, p. 69) description of the immersion and subsequent cooling makes it clear that this was one of the most admirable feats of traditional ferrous metallurgy:

  This bath is called “the art of iron” by the masters of this art … they keep in it this melted material with a hot fire for four or six hours, often stirring it up … When they find that it has arrived at the desired point of perfection they take out the lumps … cut each one in six or eight small pieces. Then they return them to the same bath to heat again … at last, when these pieces are very hot, they are taken out and made into bars … after this, while they are still hot … they are suddenly thrown into a current of water as cold as possible… In this way the steel takes on that hardness which is commonly called temper; and thus it is transformed into a material that scarcely resembles what it was before.

  Medium- or high-carbon steel was also produced directly in small quantities in parts of sub-Saharan Africa and in Sri Lanka. The simplest ancient technique that could directly produce good-quality medium-carbon steel was practiced in East Africa since the first centuries of the common era. Skilled steelmakers built circular, cone-shaped mud furnaces up to 2 m tall and fueled them with charcoal laid over a pit of charred grass. Schmidt and Avery (1978) described a re-enactment of this traditional procedure during which eight men operating goatskin bellows connected to ceramic tuyères were able to raise the temperature above the melting point of iron.

  Africa’s carbon-containing blooms were made possible either by deep insertion of long tuyères or by building tall furnaces whose height maximized naturally induced draft (Avery & Schmidt, 1979; Schmidt & Childs, 1995). Long tuyères extending deep into the bowl resulted in preheating of the blast air and hence in higher temperatures (1300–1500°C) than could be achieved in European blast furnaces relying on cold blast. In Sri Lanka high-carbon steel was produced by building the furnaces on the western sides of hills and ridges exposed to strong seasonal monsoonal winds. Experiments demonstrated that the strong natural draft created by these flows helped to produce heterogeneous but sufficiently extensive carburization to indicate that the desired product of this high-wind smelting was a metal that can be classed as high-carbon steel (Juleff, 1996, 2009).

  But throughout sub-Saharan Africa, the traditional artisanal production remained small scale and localized, and iron artifacts remained relatively rare before the European imports became available. Some Sri Lankan metal was exported during the early medieval era to make Middle Eastern swords, but in that society, too, daily artifacts continued to be made of wood and clay and iron objects did not become common until the nineteenth century. Iron use was much more common in parts of ancient Europe, but finished objects made of high-carbon steel (1.5–2.1% C) are rare before the seventh century CE. For example, Godfrey and van Nie (2004) analyzed a Germanic ultrahigh carbon (about 2%) steel punch of the Late Roman Iron Age in the Eastern Netherlands that was most likely produced in solid state from thin strips of metal that were carburized separately and then welded.

  Iron objects in Western Europe did not become more common until the eleventh century, when many plows got iron shares (or at least iron colters), and by the thirteenth century metal tools and parts for construction and shipbuilding were used more frequently (Crossley, 1981). But small iron manufactures, so ubiquitous in later eras and so unremarkable today, were still in short supply: individual metal cutlery was absent as “people served themselves from the common plate … everyone drank from a single cup … knives and spoons were passed from person to person” (Flandrin, 1989, pp. 265–266)—and there were no small table forks.

  This began to change during the seventeenth century when iron objects and components also became a more prominent part of European proto-industrialization. Biringuccio’s book, published at the beginning of the early modern era, contains a paragraph-long list of iron products that required specific smithing expertise, ranging from such massive items as anchors, anvils, and guns and such agricultural implements as plowshares, spades, and hoes to “more genteel irons such as knives, daggers, swords” as well as personal armor, gouges, drills, locks, and keys and “many more … things that are made or can be made of iron” (Biringuccio, 1540, p. 370). Their number was to grow rapidly during the eighteenth and the early nineteenth century as it became easier to produce iron in larger quantities and with lower costs: by 1850, although steel was still expensive and relatively rare, iron was ascendant.

 
