Figure 3.9 Firth of Forth cantilevered steel bridge. Corbis.
In the United States cast and wrought iron had been used for shorter bridges since the 1840s, and the use of Bessemer steel began during the late 1870s. In 1879 a 270-m-long bridge at Glasgow, Missouri, spanned the Missouri River. The famous Brooklyn Bridge was to be built with wrought iron, but as inexpensive steel became available the design was changed, and the structure (built between 1869 and 1883) has steel cables (19 strands per cable, 300 steel wires per strand, total mass of 3500 t) and steel floor beams (mass of 5000 t). In total 9500 t of steel were used to build the bridge, whose main span across the East River is nearly 480 m (Feuerstein, 1998).
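A quick arithmetic check of these cable specifications, as a minimal sketch in Python (only the figures cited above are used; the mass of the remaining steel components is simply inferred by subtraction):

```python
# Back-of-the-envelope check of the Brooklyn Bridge steel figures cited above.
strands_per_cable = 19
wires_per_strand = 300
cable_steel_t = 3500       # total mass of the steel cables, tonnes
floor_beam_steel_t = 5000  # total mass of the steel floor beams, tonnes
total_steel_t = 9500       # all steel used in the bridge, tonnes

wires_per_cable = strands_per_cable * wires_per_strand
other_steel_t = total_steel_t - cable_steel_t - floor_beam_steel_t

print(f"wires per cable: {wires_per_cable}")            # 5700
print(f"steel in other components: {other_steel_t} t")  # 1000 t
```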
The use of iron in building construction predates the use of steel: cast iron (excellent in compression) formed supporting columns, and wrought iron beams were used in some smaller buildings during the 1850s, but only the availability of inexpensive high-tensile steel made it possible to build a new class of structures with the load-carrying frame of steel columns and beams and with no load-bearing walls. Steel skeletons did away with the need for massive foundations to support load-bearing masonry walls and allowed the construction of ever higher skyscrapers, starting during the 1880s with (by today’s standards) modestly high buildings in Chicago and New York that used riveted I beams for columns and beams. William Le Baron Jenney (1832–1907) began the race in 1885 with his 10-story (42 m tall) Home Insurance Building in Chicago (demolished in 1931 and replaced by the Field Building). This was the first structure with a load-carrying frame of steel columns and beams (Turak, 1986).
Structural steel used for the building added up to only a third of the mass of an equally tall masonry building, and its use made it possible to design larger windows and to create more floor space. But buildings beyond four to five floors would have been impractical without mechanical elevators: the first (steam-powered) solution came in 1857, when an Otis elevator was installed in a five-story New York department store, but elevators took off only during the late 1880s with the introduction of electricity-powered designs (for the first time in 1887 in Baltimore, followed by the first Otis installation in 1889 in New York), and their rapid adoption created more demand for steel wheels, cables, and cabins. By 1890 Manhattan’s World Building rose to 20 stories (94 m tall), and by 1908 the Singer Building (headquarters of the Singer Manufacturing Company) approached 200 m, with its 47 stories reaching 187 m. Other steel-based machines instrumental in the rise of the modern skyscraper were construction cranes.
The second generation of skyscrapers took advantage of new H beams produced by a universal beam mill whose design (enabling the beams to be rolled in a single piece directly from ingots) was patented by Henry Grey (1849–1913) in 1897 and whose first US installation came in 1907 at Bethlehem Steel’s new mill in Saucon, Pennsylvania (Hogan, 1971). The higher strength of these beams opened the way for new high-rise designs, with the pre–WW I record held by the Woolworth Building, designed by Cass Gilbert (1859–1934) and completed in 1913 at 233 Broadway (Fenske, 2008). The 60-story structure is 241.4 m tall, and it embodied all the essential components of a modern skyscraper: steel skeleton, concrete foundation topped by steel beams, steel bracing to minimize swaying in Manhattan’s strong winds, and high-speed elevators. The Woolworth Building remained the world’s tallest until 1930 (when it was surpassed by the Chrysler Building and by the Bank of Manhattan, and in 1931 by the Empire State Building), and a century after its construction it remains one of the city’s officially designated landmarks (since 1983).
Inexpensive steel transformed construction in an even more ubiquitous way, by making it possible to reinforce concrete, the most common building material of modern civilization (Shaeffer, 1992). Firing of limestone and clay at high temperatures vitrifies the alumina and silica materials and produces a glassy clinker whose grinding yields cement suitable for making high-quality concrete; the process was patented in 1824 by Joseph Aspdin (1778–1855), an English bricklayer (Shaeffer, 1992). He named the material Portland cement because its color resembled the limestone from the Isle of Portland in the English Channel. Concrete, produced by combining cement, aggregate (sand, gravel), and water, is a material that is strong under compression (excellent for columns) and weak in tension (poor for beams), but the latter disadvantage can be eliminated by reinforcing it with iron: concrete forms a solid bond with the metal and, moreover, hydraulic cement protects iron from corrosion (Wight & MacGregor, 2011).
This composite material behaves as a monolith and makes it possible to use concrete not only for beams but also for virtually any shape, as attested by such modern structures as the Sydney Opera House with its six large shell-like sails or the sail-like Burj al-Arab hotel in Dubai with its two reinforced concrete wings. Unlike Darby’s smelting with coke or Bessemer’s production of steel, reinforced concrete does not have a single identifiable commercial beginning: it eventually became the most ubiquitous building material of modern civilization, but its beginnings were artisanal. Reinforcement of stone and brick structures with iron has a long, and almost universally damaging, history (Drougas, 2009). Iron clamps and ties used to strengthen large buildings—be it Athens’ Parthenon (Zambas, 1992) or Madrid’s Palacio Real (González et al., 2004)—corrode in place, and the resulting iron hydroxides cause a large increase in volume that exerts pressure and leads to fissures and disaggregation of the stressed stone.
Reinforced Concrete
The first projects with reinforced concrete date to the 1850s—William Wilkinson’s wires in coffered ceilings in 1854, François Coignet’s (1814–1888) lighthouse in Port Said—and in the 1860s they were followed by work on concrete beams containing metal netting, and in the 1870s by patents granted to William Ward, Thaddeus Hyatt, and Joseph Monier (1823–1906), a Parisian gardener whose effort began with designing bigger planters (Newby, 2001). Projects incorporating these innovations began to appear during the 1880s, when Adolf Gustav Wayss (1851–1917) brought Monier’s reinforcements to Germany and Austria (in 1885), and when Ernest Ransome (1852–1917) in the United States and François Hennebique (1842–1921) in France patented their reinforced designs for industrial buildings. The 15-story Ingalls Building in Cincinnati, completed in 1903 with Ransome’s system, became the world’s first skyscraper built with reinforced concrete.
Prestressing—stretching the reinforcing bars while the freshly poured concrete (usually in precast form) is still wet and releasing the tension once the reinforced concrete hardens—was first patented by P.H. Jackson in California in 1872 and in Germany by Carl Doehring in 1888. But early attempts to make it work failed due to the loss of the low prestress caused by shrinkage and creep of concrete, and these problems were eventually overcome (by the late 1920s) thanks to the innovations of Eugène Freyssinet (1879–1962), a French builder of concrete bridges. The advantages of prestressing are obvious: it puts the reinforced pieces into compression, saving materials (70% less steel and up to 40% less concrete for the same loading capacity as a simply reinforced segment) and making it possible to build shell-like structures.
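Those savings are easy to illustrate; here is a minimal sketch in Python, assuming a hypothetical conventionally reinforced baseline beam (only the 70% and 40% reductions come from the text above):

```python
# Illustrative material savings from prestressing: the cited reductions
# (70% less steel, up to 40% less concrete for the same loading capacity)
# applied to an assumed, purely hypothetical baseline beam.
reinforced = {"steel_kg": 100.0, "concrete_m3": 1.0}  # assumed baseline quantities

prestressed = {
    "steel_kg": reinforced["steel_kg"] * (1 - 0.70),        # 70% less steel
    "concrete_m3": reinforced["concrete_m3"] * (1 - 0.40),  # up to 40% less concrete
}

print(prestressed)  # {'steel_kg': 30.0, 'concrete_m3': 0.6}
```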
Chicago’s Monadnock Building, completed in 1891 as the tallest high-rise with load-bearing masonry walls, marked the practical limit of traditional construction, and before the century’s end the quality of cement improved, and its price declined, due to the adoption of modern rotary kilns that sinter limestone and aluminosilicate clays at high temperature. Reinforced concrete became a common material for piles, foundations, chimneys, and industrial structures. In a surprisingly little-known effort, Thomas A. Edison (1847–1931), who in 1899 established his Edison Portland Cement Company in New Village, New Jersey, spent a great deal of time and a substantial amount of money to perfect a system of cast-iron molds that would make it possible for a contractor to pour a concrete house in a day.
The patent application was filed in 1908, and Edison advertised that his concrete houses would eventually cost just $1,200, compared to $4,000 for an average house of those years. Later a few of these cast-in-place concrete houses were built in New Jersey, Pennsylvania, Virginia, and Ohio (Steiger, 1999). Concrete was used to cast not only walls, floors, roofs, and stairways but also baths, laundry tubs, fireplaces, basements resembling grottoes, and even picture frames. A few of these houses—including the prototype at 303 North Mountain Avenue in Montclair (New Jersey)—still stand, massive testaments to a failed experiment (Beckett, 2012).
In contrast, Parisian apartment designs by Auguste Perret (1874–1954) and bridge designs by Robert Maillart (1872–1940) showed that reinforced concrete structures can be elegant and have lasting appeal. In particular, Maillart’s Swiss bridges—the first one built in 1901 across the Inn at Zuoz, the second, a three-hinged arch across the Rhine at Tavanasa, in 1905—have become classic examples of structural elegance (Maillart, 1935). I will close this chapter on pre–WW I iron and steel by noting another unusual application of reinforced concrete in construction: its use in cubist buildings, most notably in Josef Gočár’s (1880–1945) House of the Black Madonna in Prague’s Old Town, designed in 1911 (Bulasová et al., 2014).
And before I begin to describe the technical innovations of the past 100 years I should note an important managerial advance of the pre–1914 period that had its origins in the American steel industry. The optimization of labor use and the quest for higher productivity began during the fall of 1880 at the Midvale Steel Company in Pennsylvania, where Frederick Winslow Taylor (1856–1915), a young metallurgist from an affluent Philadelphia Quaker family, embarked on more than a quarter century of experiments whose original goal was to quantify all key variables in steel cutting. Taylor eventually reduced these studies to a set of calculations yielding an optimal path for a given cutting task and generalized his findings in his famous book, The Principles of Scientific Management (Taylor, 1911).
Taylor’s quest for optimized production was often criticized as yet another tool of labor exploitation, but a careful reading of his conclusions absolves him: he was against setting any excessive quotas. He reminded managers that their combined knowledge was inferior to that of the workmen they supervised, and he called for their intimate cooperation with workers. Ironically, Taylor’s application of his own rules led to his firing from Bethlehem Steel in 1901, but his principles eventually formed the foundation for labor optimization in modern industries worldwide.
Chapter 4
A Century of Advances, 1914–2014
Changing Leadership in the Iron and Steel Industry
Abstract
Expansion of pig iron smelting and steel production and the accompanying technical advances in the iron and steel industry in the 100 years since 1914 took place in seven distinct waves. The first short period was the rise of production caused by WW I demand for armaments and munitions, with new record outputs reached in Germany, the United Kingdom, and the United States. Then came the postwar fluctuations of the first half of the 1920s, followed by a gradual resumption of steady growth during the second half of the decade. The third wave, a downward one, was brought by the Great Depression, with pig iron output in major producing countries shrinking by as much as 75% compared to the precrisis peaks. Then yet another brief period of war-driven demand (1940–1945) revived the output as the United States, the USSR, Germany, and Japan reached new steel production records.
Keywords
Expansion and advancement in the iron and steel industry; America’s postwar retreat; Japanese dominance in steelmaking; Chinese dominance in steelmaking; electrostatic precipitators; new blast furnace designs
Expansion of pig iron smelting and steel production and the accompanying technical advances in the iron and steel industry in the 100 years since 1914 took place in seven distinct waves. The first short period was the rise of production caused by WW I demand for armaments and munitions, with new record outputs reached in Germany, the United Kingdom, and the United States. Then came the postwar fluctuations of the first half of the 1920s, followed by a gradual resumption of steady growth during the second half of the decade. The third wave, a downward one, was brought by the Great Depression, with pig iron output in major producing countries shrinking by as much as 75% compared to the precrisis peaks. Then yet another brief period of war-driven demand (1940–1945) revived the output as the United States, the USSR, Germany, and Japan reached new steel production records.
All of these periods had one thing in common: while there was some growth of unit capacities and gradual improvement in processing efficiencies, there were very few fundamental technical advances as the combination of blast furnace smelting, open-hearth furnaces, ingot casting, and ingot reheating and rolling in steel mills dominated the production. Contrary to the frequent assumption of accelerated innovation spurred by war demands, this was also true during WW I and WW II, when the rapid expansion of steel production dictated the reliance on well-proven methods that could be scaled up fast rather than on experimentation with new processes.
All of that changed during the postwar reconstruction as many advances began to transform the industry and as Germany and Japan, two countries whose iron and steel industries were severely damaged by the war, came to the forefront of technical innovation in ferrous metallurgy and processing, and as the USSR continued its mass-scale Stalinist industrialization (started in the late 1920s and set back by the war) based on heavy industrial production. This fifth period lasted until 1973, when OPEC’s unexpected quintupling of oil prices led to a pronounced, worldwide economic slowdown. The sixth period, the two decades between 1973 and 1993, was characterized by fluctuating or declining outputs in Europe, the USSR, and North America and by only slow growth of iron and steel production in China. As a result, the global pig iron output remained essentially constant (512 Mt in 1973, 516 Mt in 1993), and the worldwide steel output in 1993 was less than 5% above the 1973 level.
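The near-stagnation of that sixth period can be quantified with a one-line calculation (a minimal sketch using only the two pig iron figures cited above):

```python
# Implied average annual growth of global pig iron output between 1973 and
# 1993, computed from the outputs cited above (512 Mt and 516 Mt).
output_1973, output_1993 = 512.0, 516.0  # Mt
years = 1993 - 1973

annual_growth = (output_1993 / output_1973) ** (1 / years) - 1
print(f"average annual growth: {annual_growth:.3%}")  # ~0.039% per year
```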
The final period has seen the truly soaring rise of Chinese steel production set against slow, or no, growth in the rest of the world, with China quadrupling its share of the global output, from 12% in 1993 to 48% in 2013 (WSA, 2014). As in so many other cases of Chinese modernization, the country’s new dominance has been based on well-established techniques, and it is more notable for its stress on quantity than on quality. That rapid expansion carried the global steel output past the 1 Gt mark by 2004, but by 2014 it was slowing down: with the iron and steel industry in a senescent stage in the Western world and Japan and in a mature stage in China and South Korea, India is left as the only major producer with a strong growth potential, although it is highly unlikely (and, in fact, undesirable) that in the near future India will replicate China’s achievements of the recent past.
Shifts in steel production during the past 100 years are an excellent indicator of changing national and company fortunes and of the uneven progress of the global economy. At the century’s beginning, on April 1, 1901, Charles M. Schwab, J. Pierpont Morgan (1837–1913), and Andrew Carnegie formed the US Steel Corporation, the world’s first company to be capitalized at more than 1 billion dollars (Apelt, 2001). The company produced nearly 30% of the world’s steel, and the US steel mills contributed 36% of the global output. At the beginning of WW I the three leading steel producers were the United States, Germany, and the United Kingdom; at the beginning of WW II they were the United States, Germany, and the USSR.
In 1945, with the Japanese and German economies in ruins, the United States accounted for more than 70% of the global steel output, but a decade later expanding Japanese and Soviet steelmaking began to reduce the American share. In 1970 four of the top 10 steel companies were Japanese and three were American; in 1975 the United States was still the second largest steelmaker (16% of the total) behind the USSR, and during the late 1970s it still had three of its steel companies among the world’s top seven, with US Steel, Bethlehem Steel, and National Steel placed fourth to sixth. But the 1980s were the beginning of a clear secular retreat as American steel companies, once the world’s preeminent producers and innovation leaders, fell further down the ladder of the largest enterprises.
By 1990, with the United States producing less than 12% of the world’s steel, US Steel was the only American company among the world’s top 10 steel companies (at number five), and in 1991 USX, its parent company, was removed from the list of the Dow 30 industrial stocks to make way for Disney: the displacement of steel by the heirs of Mickey Mouse and operators of fairy-tale lands was an unavoidably symbolic confirmation of America’s deep deindustrialization. In 1997 Bethlehem Steel, the industry’s last holdout in the Dow industrials, was replaced by Johnson & Johnson (Bagsarian, 2000). Low and (between 1995 and 2000) declining steel prices and a large excess of foreign steelmaking capacity (reaching about 250 Mt by 2000, or nearly twice the US annual steel consumption) accelerated the retreat, and between 1998 and 2000 more than half a dozen US steelmakers filed for bankruptcy (Smil, 2006).
The iron and steel industry’s employment was cut by more than 70%: it was still above 500,000 in 1975 and 425,000 in 1980, but the total fell below 200,000 in 1991 and was just 151,000 in the year 2000, only some 20,000 higher than in the aluminum industry! By the century’s end, America’s steel production actually supplied a slightly larger share of the global total than in 1990 (12% vs. 11.6%), but there was no American company among the top 10 steelmakers. And by 2013, when American steel accounted for just 5.2% of the worldwide output, six of the 10 largest steelmakers were in China, the country that produced roughly half of the world’s steel, and the two largest US companies, US Steel and Nucor, ranked thirteenth and fourteenth, respectively.
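A quick check of the employment decline cited above (a minimal sketch; the values are the round figures given in the text, and since 1975 employment was above 500,000 the actual cut exceeds the computed share):

```python
# US iron and steel industry employment, using the round figures cited above.
employment = {1975: 500_000, 1980: 425_000, 1991: 200_000, 2000: 151_000}

decline = 1 - employment[2000] / employment[1975]
print(f"decline, 1975-2000: {decline:.1%}")  # 69.8%; with 1975 employment
# above 500,000, the actual reduction was more than 70%, as stated in the text.
```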