
Creating the Twentieth Century


by Vaclav Smil


  Understanding of energy conversions progressed rapidly during the 1840s and 1850s with the brilliant deductions of Robert Mayer (1814–1878) and James Prescott Joule (1818–1889). While Mayer worked as a physician on a Dutch ship bound for Java, he noted that sailors had brighter venous blood in the tropics. He correctly explained this fact by pointing out that less energy, and hence less oxygen, was needed for basal metabolism in warmer climates. Mayer saw muscles as heat engines energized by the oxidation of blood, and this view led to the establishment of the mechanical equivalent of heat and hence to a formulation of the law of the conservation of energy (Mayer 1851), later known as the first law of thermodynamics. A more accurate quantification of the equivalence of work and heat came independently from Joule’s work: in his very first attempt, he was able to come within less than 1% of the actual value (Joule 1850).
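
  A quick check in present-day units shows how close Joule came (the conversion factors below are modern ones, not those available to Joule, and his 1850 result is quoted in its commonly cited rounded form):

    Joule (1850): about 772.7 ft·lbf to heat 1 lb of water by 1°F, roughly 4.16 J per calorie
    modern value: about 778.2 ft·lbf per Btu, or 4.186 J per calorie
    relative difference: (778.2 − 772.7)/778.2 ≈ 0.007, that is, about 0.7%, indeed within 1%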

  Soon afterward, William Thomson (Lord Kelvin, 1824–1907) described a universal tendency to the dissipation of mechanical energy (Thomson 1852). Rudolf Clausius (1822–1888) formalized this insight by concluding that the energy content of the universe is fixed and that its conversions result in an inevitable loss of heat to lower energy areas: energy seeks uniform distribution, and the entropy of the universe tends to a maximum (Clausius 1867). This second law of thermodynamics—the universal tendency toward disorder and heat death—became perhaps the most influential, as well as a much misunderstood, cosmic generalization. And although its formulation did not end the futile attempts to build perpetuum mobile machines, it made clear why that quest would never succeed.
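
  In the modern notation that later textbooks settled on (not Clausius’s own symbols), his two conclusions can be compressed into a pair of statements:

    dS ≥ δQ/T (with equality only for reversible changes)
    for an isolated system, and hence for the universe as a whole: ΔS ≥ 0

  The first law fixes the total amount of energy; the second dictates the one-way degradation of its useful, work-producing share, which is why a perpetuum mobile of either kind cannot work.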

  There is perhaps no better illustration of the link between the new theoretical understanding and astonishing practical results than Charles Parsons’s invention and commercialization of the steam turbine, the most powerful commonly used prime mover of the 20th (and certainly at least of the first half of the 21st) century (figure 1.5; for details, see chapter 2). Parsons, whose father was an astronomer and a former president of the Royal Society, received mathematical training at Trinity College in Dublin and at Cambridge, joined an engineering firm as a junior partner, and proceeded to build the first model steam turbine because thermodynamics told him it could be done. He prefaced his Rede lecture describing his great invention by noting “that the work was initially commenced because calculation showed that, from the known data, a successful steam turbine ought to be capable of construction. The practical development of this engine was thus commenced chiefly on the basis of the data of physicists…” (Parsons 1911:1).

  FIGURE 1.5. Longitudinal cross section through the casing of a 1-MW Parsons steam turbine designed in 1899. Reproduced from Ewing (1911).

  Pre-1870 gains in chemical understanding were, comparatively, even greater as the science started from a lower base. Brilliant chemists of the late 18th century—Antoine Lavoisier, Carl Wilhelm Scheele, Joseph Priestley—began to systematize the fragmentary understanding of elements and compounds, but there was no unifying framework for their efforts. Early 19th-century physics at least had a solid grasp of mechanics, but large parts of chemistry still had the feel of alchemy. Then came a stream of revolutionary chemical concepts. First, in 1810, John Dalton put atomic theory on a quantitative basis, and in 1828 Friedrich Wöhler (1800–1882) prepared the first organic compound when he synthesized urea. Beginning in the 1820s, Justus von Liebig (1803–1873; figure 1.6) established standard practices of organic analysis and attributed the generation of CO2 and water to food oxidation, thus providing a fundamentally correct view of heterotrophic metabolism (Liebig 1840).

  FIGURE 1.6. What Justus von Liebig (1803–1873) helped to do so admirably for chemistry, other early 19th-century scientists did in their disciplines: made them the foundations of a new knowledge economy. Photo from author’s collection.

  After 1860, Friedrich August Kekulé (1829–1896) and his successors made sense of the atomic structure of organic compounds. In 1869 Dimitrii Mendeleev (1834–1907) published his magisterial survey of chemistry and placed all known, as well as yet unknown, elements in their places in his periodic table. This achievement remains one of the fundamental pillars of the modern understanding of the universe (Mendeleev 1891). These theoretical advances were accompanied by the emergence of chemical engineering, initially led by the British alkali industry, and basic research in organic synthetic chemistry brought impressively rapid development of the coal-tar industry. Its first success was the synthesis of alizarin, a natural dye derived traditionally from the root of the madder plant (Rubia tinctorum).

  Badische Anilin- & Soda-Fabrik (BASF) synthesized the dye by using the method invented by Heinrich Caro (1834–1910), the company’s leading researcher; his method was nearly identical to the process proposed in England by William Henry Perkin (1838–1907). Synthetic alizarin has been used ever since to dye wool and also to stain microscopic specimens. German and British patents for the process were filed less than 24 hours apart, on June 25 and 26, 1869 (Brock 1992). This was not as close a contest as the patenting of the telephone by Alexander Graham Bell and Elisha Gray, whose claims were filed just a few hours apart. These two instances illustrate the intensity and competitiveness of the era’s quest for innovation.

  Remarkable interdisciplinary synergies combining new scientific understanding, systematic experiments, and aggressive commercialization can be illustrated by developments as diverse as the birth of the electric era and the synthesis of ammonia from its elements. Edison’s lightbulb was not a product (as some caricatures of Edison’s accomplishments would have it) of intuitive tinkering by an untutored inventor. Incandescent electric lights could not have been designed and produced without combining deep familiarity with state-of-the-art research in the field, mathematical and physical insights, a punishing research program supported by generous funding from industrialists, a determined sales pitch to potential users, rapid commercialization of patentable techniques, and continuous adoption of the latest research advances.

  Fritz Haber’s (1868–1934) discovery of ammonia synthesis was the culmination of years of research effort based on decades of previously unsuccessful experiments, including those done by some of the most famous chemists of that time (among them two future Nobel Prize winners, Wilhelm Ostwald and Walther Nernst). Haber’s success required familiarity with the newly invented process of air liquefaction, willingness to push the boundaries of high-pressure synthesis, and determination, through collaboration with BASF, not to leave the process as just another laboratory curiosity but to make it the basis of commercial production (Stoltzenberg 1994; Smil 2001).

  And Carl Bosch (1874–1940), who led BASF’s development of ammonia synthesis, made his critical decision as a metallurgist and not as a chemist. When, at the crucial management meeting in March 1909, the head of BASF’s laboratories heard that the proposed process would require pressures of at least 100 atmospheres, he was horrified. But Bosch remained confident: “I believe it can go. I know exactly the capacities of steel industry. It should be risked” (Holdermann 1954:69). And it was: Bosch’s confidence challenged the German steelmakers to produce reaction vessels of unprecedented size able to operate at previously unheard-of pressures—and less than five years later these devices were in operation at the world’s first ammonia plant.
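
  The stoichiometry explains why such extreme pressures were unavoidable (what follows is a standard textbook summary, not BASF’s own design figures):

    N2(g) + 3 H2(g) ⇌ 2 NH3(g), ΔH° ≈ −92 kJ per mole of N2 converted

  Four volumes of gas combine into two, so by Le Chatelier’s principle high pressure pushes the equilibrium toward ammonia; and because the reaction is exothermic, the high temperatures needed for a workable reaction rate cut the equilibrium yield, a loss that only still higher pressures (and an effective catalyst) could offset. Hence the heavy-walled vessels that Bosch demanded from the steelmakers.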

  The Age of Synergy

  At this point I must anticipate, and confront, those skeptics and critics who would insist on asking—despite all of the arguments and examples given so far in support of the Age of Synergy—how justified is my decision to single out this era, and how accurate is my timing of it? As De Vries (1994:249) noted, history is full of “elaborate, ideal constructions that give structure and coherence to our historical narratives and define the significant research questions.” And while all of these historiographical landmarks—be it the Renaissance or the Industrial Revolution—are based on events that indubitably took place, these concepts tend to acquire a life of their own, and their interpretations tend to shift with new insights. And so the concept of the Renaissance, the era seen as the opening chapter of modern history, came to be considered by some historians little more than an administrative convenience, “a kind of blanket under which we huddle together” (Bouwsma 1979:3).

  Even more germane for any attempt to delimit the Age of Synergy is the fact that some historians have questioned the very concept of its necessary precursor, the Industrial Revolution. Its dominant interpretation as an era of broad economic and social change (Mokyr 1990; Landes 1969; Ashton 1948) has been challenged by views that see it as a much more restricted, localized phenomenon that brought significant technical changes only to a few industries (cotton, ironmaking) and left the rest of the economy in premodern stagnation until the 1850s: Watt’s steam engines notwithstanding, by the middle of the 19th century the British economy was still largely traditional (Crafts and Harley 1992).

  Or as Musson (1978:141) put it, “the typical British worker in the mid-nineteenth century was not a machine-operator in a factory but still a traditional craftsman or labourer or domestic servant.” At the extreme, Cameron (1982) argued that the change was so small relative to the entire economy that the very title of the Industrial Revolution is a misnomer, and Fores (1981) went even further by labeling the entire notion of a British industrial revolution a myth. Temin (1997) favors a compromise, seeing the Industrial Revolution’s technical progress spread widely but unevenly. I am confident that the concept of the Age of Synergy is not a mental construct vulnerable to a devastating criticism but an almost inevitable identification of a remarkable reality.

  The fact that the era’s epoch-making contributions were not always recognized as such by contemporary opinion is not at all surprising. Given the unprecedented nature of the period’s advances, many commentators simply did not have the requisite scientific and technical understanding to appreciate the reach and the transforming impact of new developments. And so during the early 1880s most people thought that electricity would merely substitute faint lightbulbs for similarly weak gas lights: after all, the first electric lights were explicitly designed to match the luminosity of gas jets and gave off only about 200 lumens, or an equivalent of 16 candles (compared to at least 1,200 lm for today’s 100 W incandescent bulb). And even one of the era’s eminent innovators and a pioneer of the electric industry did not think that gas illumination was doomed.
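
  The equivalence quoted above follows directly from the definition of the lumen (assuming, for the sake of a round number, an isotropic source, which early carbon-filament bulbs only roughly approximated):

    1 candela radiating into the full sphere of 4π steradians gives 4π ≈ 12.57 lm
    16 candles ≈ 16 cd × 12.57 lm/cd ≈ 200 lm
    a 100 W bulb at ≥ 1,200 lm is therefore at least six times as bright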

  William Siemens (see figure 4.4) reaffirmed in a public lecture on November 15, 1882 (i.e., after the first two Edison plants began producing electricity), his long-standing conviction that gas lighting “is susceptible of great improvement, and is likely to hold its own for the ordinary lighting up of our streets and dwellings” (Siemens 1882:69). A decade later, during the 1890s, people thought that gasoline-fueled motor cars were just horseless carriages whose greatest benefit might be to rid the cities of objectionable manure. And many people also hoped that cars would ease the congestion caused by slow-moving, and often uncontrollable, horse-drawn vehicles (figure 1.7).

  And given the fact that such fundamental technical shifts as the widespread adoption of new prime movers invariably have long lead times—for example, by 1925 England still derived 90% of its primary power from steam engines (Hiltpold 1934)—it is not surprising that even H. G. Wells, the era’s most famous futurist, maintained in his first nonfictional prediction that if the 19th century needed a symbol, that symbol would almost inevitably be a railway steam engine (Wells 1902a). But in retrospect it is obvious that by 1902 steam engines were a symbol of a rapidly receding past. Their future demise was already irrevocably decided, as it was only a matter of time before the three powerful, versatile, and more energy-efficient prime movers that were invented during the 1880s—steam turbines, gasoline-powered internal combustion engines, and electric motors—would completely displace wasteful steam engines.

  FIGURE 1.7. A noontime traffic scene at London Bridge as portrayed in The Illustrated London News, November 16, 1872.

  I also readily concede that, as is so often the case with historical periodizations, other, and always somewhat arbitrary, bracketings of the era that created the 20th century are possible and readily defensible. This is inevitable given the fact that technical advances always have some antecedents and that many claimed firsts are not very meaningful. Patenting dates also make dubious markers, as there have been many cases when years, even more than a decade, elapsed between the filing and the eventual grant. And too many published accounts do not specify the actual meanings of such claims as “was invented” or “was introduced.” Consequently, different sources will often list different dates for their undefined milestones, and entire chains of these events may then be interpreted to last only a few months or many years, or even decades.

  Petroski (1993) offers an excellent example of a dating conundrum by tracing the patenting and the eventual adoption of the zipper, a minor but ubiquitous artifact that is now produced at a rate of hundreds of millions of units every year. U.S. Patent 504,038 for a clasp fastener was granted to Whitcomb L. Judson of Chicago in August 1893, putting the invention of this now universally used device squarely within the Age of Synergy. But the examiner overlooked the fact that a very similar idea had been patented in 1851 by Elias Howe, Jr., the inventor of the sewing machine. And it took about 20 years for Judson’s idea of the slide fastener to reach its mature form, when Gideon Sundback patented his “new and improved” version in 1913, and about 30 years before that design became commercially successful. In this case there is a span of some seven decades between the first, impractical invention and the widespread acceptance of a perfected design. Moreover, there is no doubt that patents are an imperfect measure of invention, as some important innovations were not patented and as many organizational and managerial advances are not patentable.

  Turning once again to the Industrial Revolution, we see that its British, or more generally its Western, phase has been dated as liberally as 1660–1918 and as restrictively as 1783–1802. The first span was favored by Lilley (1966), who divided the era into the early (1660–1815) and the mature (1815–1918) period; the second one was the time of England’s economic take-off as defined, with a specious accuracy, by Rostow (1971). Other dates can be chosen, most of them variants on the 1760–1840 span that was originally suggested by Toynbee (1884). Ashton (1948) opted for 1760–1830; Beales (1928) preferred 1750–1850, but he also argued for no terminal date. Leaving aside the appropriateness of the term “revolution” for what was an inherently gradual process, it is obvious that because technical and economic take-offs began at different times in different places, there can be no indisputable dating even if the determination were limited to European civilization and its overseas outposts.

  This is also true about the period that some historians have labeled the second Industrial Revolution. This clustering of innovations was recently dated to between 1870 and 1914 by Mokyr (1999) and to 1860–1900 by Gordon (2000). But in Musson’s (1978) definition the second Industrial Revolution was most evident between 1914 and 1939, and King (1930) was certain that, at least in the United States, it started in 1922. And in 1956, Leo Brandt and Carlo Schmid described to the German Social Democratic Party Congress the principal features, and consequences, of what they perceived as the just unfolding second Industrial Revolution that was bringing reliance on nuclear energy (Brandt and Schmid 1956). Four decades later, Donovan (1997) used the term to describe the recent advances brought by the use of the Internet during the 1990s, and e-Manufacturing Networks (2001) called John T. Parsons—who patented the first numerically controlled machine tool—“father of the Second Industrial Revolution.”

  Moreover, some high-tech enamorati are now reserving the term only for the ascent of future nanomachines that not only will cruise through our veins but also will eventually be able to build complex self-replicating molecular structures: according to them, the second Industrial Revolution is just about to take place. Consequently, I am against using this constantly morphing and ever-advancing term. In contrast, constraining the singular Age of Synergy can be done with greater confidence. My choice of the two generations between the late 1860s and the beginning of WWI is bracketed at the beginning by the introduction of the first practical designs of dynamos and open-hearth steel-making furnaces (1866–1867), the first patenting of the sulfite pulping process (1867), the introduction of dynamite (1866–1868), and the definitive formulation of the second law of thermodynamics (1867). Also in 1867 or 1868 the United States, the indisputable overall leader of the great pre-WWI technical saltation, became the world’s largest economy by surpassing the British gross domestic product (Maddison 1995).

  Late in 1866 and in the early part of 1867, several engineers concluded independently that powerful dynamos could be built without permanent magnets by using electromagnets, and this idea was publicly presented for the first time in January 1867. These new dynamos, after additional improvements during the 1870s, were essential for launching the electric era during the 1880s. The open-hearth furnace was another fundamental innovation that made its appearance at that time: in 1866 William Siemens and Emile Martin agreed to share the patent rights for its improved design; the first units became operational soon afterward, and the furnaces eventually became the dominant producers of steel during most of the 20th century. Tilghman’s chemical wood pulping process (patented in 1867) opened the way for mass production of inexpensive paper. And by 1867 Alfred Nobel was ready to produce his dynamite, a new, powerful explosive that proved to be another epoch-making innovation, in both the destructive and the constructive sense.

  The formulation of the second law of thermodynamics in 1867 was such an epoch-making achievement that more than two decades later Nature editorialized (quite correctly, as we can see in retrospect) that the theory “seems to have outrun not only our present experimental powers, but almost any conceivable extension which they may hereafter undergo” (Anonymous 1889a:2). The year 1867, the beginning of my preferred time span of the Age of Synergy, also saw the design of the first practical typewriter and the publication of Marx’s Das Kapital. Both of these accomplishments—an ingenious machine that greatly facilitated information transfer, and a muddled but extraordinarily influential piece of ideological writing—had an enormous impact on the 20th century, which was suffused with typewritten information and which experienced the prolonged rise and the sudden fall of Marx-inspired Communism.

 
