by Hasok Chang
In chapter 2 I discussed how Victor Regnault solved the problem of the choice of thermometric fluids by subjecting each candidate to a stringent test of physical consistency (comparability). At the end of "Minimalism against Duhemian Holism," I also alluded to the type of situations in which that strategy would not work. We have now come face to face with such a situation, in post-Wedgwood pyrometry. Wedgwood tested his instrument for comparability, and it passed. In other people's hands, too, the Wedgwood pyrometer seems to have passed the test of comparability, as long as standard clay pieces were used. Although that last qualification would be sufficient to discredit the Wedgwood pyrometer if we applied the same kind of rigor as seen in Regnault's work, every other pyrometer would also have failed such stringent tests of comparability, until the late nineteenth century. Even platinum pyrometers, the most controllable of all, yielded quite divergent results when employed by Guyton and by Daniell. On the whole, the quality and amount of available data were clearly not sufficient to allow each pyrometer to pass rigorous tests of comparability until much later.
This returns us to the question of how it was possible to reject the Wedgwood pyrometer as incorrect, when each of the alternative pyrometers was just about as poor as Wedgwood's instrument. In "Ganging Up on Wedgwood," we have seen how the Wedgwood pyrometer was rejected after it was shown that various other pyrometers disagreed with it and agreed much more with each other (the situation can be seen at a glance in table 3.2). To the systematic epistemologist, this will seem like a shoddy solution. First of all, if there is any justification at all involved in this process, it is entirely circular: the platinum pyrometer is good because it agrees with the ice calorimeter, and the ice calorimeter is good because it agrees with the platinum pyrometer; and so on. Second, relying on the convergence of various shaky methods seems like a poor solution that was accepted only because there was no single reliable method available. One good standard would have provided an operational definition of temperature in the pyrometric range, and there would have been no need to prop up poor standards against each other. These are cogent points, at least on the surface. However, I will argue that the state of post-Wedgwood pyrometry does embody an epistemic strategy of development that has positive virtues.
In basic epistemological terms, relying on the convergence of various standards amounts to the adoption of coherentism after a recognized failure of foundationalism. I will discuss the relative merits of foundationalist and coherentist theories of justification more carefully in chapter 5, but what I have in mind at this point is the use of coherence as a guide for a dynamic process of concept formation and knowledge building, rather than strict justification. A very suggestive metaphor was given by Otto Neurath, a leader of the Vienna Circle and the strongest advocate of the Unity of Science movement: "We are like sailors who have to rebuild their ship on the open sea, without ever being able to dismantle it in dry-dock and reconstruct it from the best components."62
As often noted, there is some affinity between Neurath's metaphor and W. V. O. Quine's later coherentist metaphor of the stretchable fabric: "The totality of our so-called knowledge or beliefs … is a man-made fabric which impinges on experience only along the edges. Or, to change the figure, total science is like a field of force whose boundary conditions are experience" (Quine [1953] 1961, 42). But there is one difference that is very important for our purposes. In Quine's metaphor, it does not really matter what shape the fabric takes; one presumes it will not rip. In contrast, when we are sailing in Neurath's leaky boat, we will drown if we do not actively do something about it, and do it right. In other words, Neurath's metaphor embodies a clear value judgment about the current state of knowledge, namely that it is imperfect, and also a firm belief that it can be improved. Neurath's metaphor has a progressivist moral, which is not so central to Quine's.
62. Neurath [1932/33] 1983, 92. For further discussions of Neurath's philosophy, and particularly the significance of "Neurath's boat," see Cartwright et al. 1996, 139 and 89ff.
Post-Wedgwood pyrometry was a very leaky boat. And if I may take some liberties with Neurath's metaphor, we must recognize that even such a leaky boat was already a considerable achievement, since there was no boat at all earlier. Investigators like Clément and Desormes were like shipwrecked sailors pulling together a few planks floating by, to form a makeshift lifeboat (however unrealistic that possibility might be in a real shipwreck). Guyton got on that boat bringing the plank of platinum pyrometry, which fitted well enough. They also picked up the plank of Wedgwood pyrometry, admired it for its various pleasing qualities, but reluctantly let it float away in the end, since it could not be made to fit. They did have the choice of floating around hanging on to the Wedgwood plank waiting for other planks that were compatible with it, but they decided to stay on the boat that they already had, leaky as it was. It is difficult to fault such prudence.
Metaphors aside, what exactly were the positive merits of this process, which I will call the "mutual grounding" of measurement standards? First of all, it is an excellent strategy of managing uncertainty. In the absence of a measurement standard that is demonstrably better than others, it is only sensible to give basically equal status to all initially plausible candidates. But a standard that departs excessively from most others needs to be excluded, just by way of pragmatics rather than by any absolute judgment of incorrectness. In metrological extension, we are apt to find just the sort of uncertainty that calls for mutual grounding. In the new domain the pre-existing meaning is not likely to be full and precise enough to dictate an unambiguous choice of a measurement standard: there is probably no sufficient basis of sensation; few general theories can cover unknown domains confidently; and any extensions of phenomenological laws from known domains face the problem of induction. All in all, there will probably be many alternative extensions with an underdetermined choice between them.63
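To make the logic of this agreement-based exclusion concrete, the following minimal sketch (in Python, with entirely hypothetical numbers and names; it illustrates the reasoning, not any historical procedure or actual data) flags the one candidate standard whose readings depart most from the rest, in the spirit of the convergence displayed in table 3.2.

    # A bare-bones sketch of "mutual grounding" as agreement-based exclusion.
    # All readings below are hypothetical placeholders, not historical data.

    def mean(values):
        return sum(values) / len(values)

    def most_discordant_standard(readings, tolerance):
        """Return the standard whose reading departs most from the average of
        the others, if that departure exceeds the tolerance; otherwise None."""
        worst_name, worst_gap = None, 0.0
        for name, value in readings.items():
            others = [v for n, v in readings.items() if n != name]
            gap = abs(value - mean(others))
            if gap > worst_gap:
                worst_name, worst_gap = name, gap
        return worst_name if worst_gap > tolerance else None

    # Hypothetical readings (in degrees centigrade) of one high-temperature
    # fixed point, by four candidate standards:
    readings = {
        "platinum pyrometer": 1000,
        "ice calorimeter": 980,
        "air pyrometer": 1010,
        "Wedgwood clay pyrometer": 4700,
    }
    print(most_discordant_standard(readings, tolerance=100))
    # -> "Wedgwood clay pyrometer"

The point of the sketch is only that the exclusion is comparative and pragmatic: nothing certifies the surviving standards as correct, except their agreement with one another.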
Mutual grounding is not a defeatist compromise, but a dynamic strategy of development. First of all, it allows new standards to come into the convergent nest; the lack of absolute commitment to any of the standards involved also means that some of them can be ejected with relative ease if further study reveals them to be inconsistent with others. Throughout the process decisions are taken in the direction of increasing the degree of coherence within the whole set of mutually grounded standards. The aspect of the coherence that is immediately sought is a numerical convergence of measurement outcomes, but there are also other possible aspects to consider, such as the relations between operational procedures, or shared theoretical justification.
63. In Chang 1995a, I discussed the case of energy measurement in quantum physics in terms of mutual grounding. In that case, the uncertainty was brought on by theoretical upheaval. In nineteenth-century physics standards for energy measurement in various macroscopic domains had been firmly established, and their extension into microscopic domains was considered unproblematic on the basis of Newtonian mechanics and classical electromagnetic theory, which were assumed to have universal validity. With the advent of quantum mechanics, the validity of classical theories was denied in the microscopic domains, and suddenly the old measurement standards lost their unified theoretical justification. However, the ensuing uncertainty was dealt with quite smoothly by the mutual grounding of two major existing measurement methods and one new one.
The strategy of mutual grounding begins by accepting underdetermination, by not forcing a choice between equally valid options. In the context of choosing measurement standards, accepting plurality is bound to mean accepting imprecision, which we can actually afford to do in quite a few cases, with a promise of later tightening. If we let underdetermination be, multiple standards can be given the opportunity to develop and prove their virtues, theoretically or experimentally. Continuing with observations using multiple standards helps us collect a wide range of phenomena together under the rubric of one concept. That is the best way of increasing the possibility that we will notice previously unsuspected connections, some of which may serve as a basis of further development. A rich and loose concept can guide us effectively in starting up the inquiry, which can then double back on itself and define the concept more rigorously. This, too, is an iterative process of development, as I will discuss in more detail in chapter 5.
4. Theory, Measurement, and Absolute Temperature
Narrative: The Quest for the Theoretical Meaning of Temperature
Abstract: The precise measurement of temperature became possible by the middle of the 19th century. However, there were no theories of heat to direct the practice of thermometry in a useful way. This chapter discusses the difficulty of making a productive and convincing connection between thermometry and the theory of heat, and how such a connection was eventually made.
Keywords: temperature, thermometry, theory of heat
Although we have thus a strict principle for constructing a definite system for the estimation of temperature, yet as reference is essentially made to a specific body as the standard thermometric substance … we can only regard, in strictness, the scale actually adopted as an arbitrary series of numbered points of reference sufficiently close for the requirements of practical thermometry.
William Thomson, "On an Absolute Thermometric Scale," 1848
A theoretically inclined reader may well be feeling disturbed by now to note that so much work on the measurement of temperature seems to have been carried out without any precise theoretical definition of temperature or heat. As seen in the last three chapters, by the middle of the nineteenth century temperature became measurable in a coherent and precise manner over a broad range, but all of that was achieved without much theoretical understanding. It is not that there was a complete lack of relevant theories—there had always been theories about the nature of heat, since ancient times. But until the late nineteenth century no theories of heat were successful in directing the practice of thermometry in a useful way. We have seen something of that theoretical failure in chapter 2. The discussion in this chapter will show why it was so difficult to make a productive and convincing connection between thermometry and the theory of heat, and how such a connection was eventually made. In order to stay within the broad time period covered in this book, I will limit my discussion to theoretical developments leading up to classical thermodynamics. Statistical mechanics does not enter the narrative because it did not connect with thermometry in meaningful ways in the time period under consideration.
Temperature, Heat, and Cold
Practical thermometry achieved a good deal of reliability and precision before people could say with any confidence what it was that thermometers measured. A curious fact in the history of meteorology gives us a glimpse into that situation. The common attribution of the centigrade thermometer to the Swedish astronomer Anders Celsius (1701-1744) is correct enough, but his scale had the boiling point of water as 0° and the freezing point as 100°. In fact, Celsius was not alone in adopting such an "upside-down" thermometric scale. We have already come across the use of such a scale in "Can Mercury Be Frozen?" and "Can Mercury Tell Us Its Own Freezing Point?" in chapter 3, in the mercury thermometer designed by the French astronomer Joseph-Nicolas Delisle (1688-1768) in St. Petersburg (the Delisle scale can be seen in the Adams thermometer in fig. 1.1). In England the "Royal Society Thermometer" had its zero point at "extream heat" (around 90°F or 32°C) and increasing numbers going down the tube.1 These "upside-down" scales were in serious scientific use up to the middle of the eighteenth century, as shown emblematically in figure 4.1.2
Why certain early pioneers of thermometry created and used such "upside-down" thermometers remains a matter for speculation. There seems to be no surviving record of the principles behind the calibration of the Royal Society thermometer, and Delisle's (1738) published account of his thermometers only concentrates on the concrete procedures of calibration. There is no clear agreement among historians about Celsius's motivations either. Olof Beckman's view is that "Celsius and many other scientists were used to both direct and reversed scales, and simply did not care too much" about the direction.3 My own hypothesis is that those who designed upside-down thermometers may have been thinking more in terms of measuring the degrees of cold than degrees of heat. If that sounds strange, that is only because we now have a metaphysical belief that cold is simply the absence of heat, not a real positive quality or entity in its own right. Although the existence of the upside-down temperature scales does not prove that their makers were trying to measure degrees of cold rather than heat, at least it reveals a lack of a sufficiently strong metaphysical commitment against the positive reality of cold.
Figure 4.1. George Martine's comparison of fifteen thermometric scales. The ninth (Delisle) and eleventh (Royal Society) are of the "upside-down" type. This figure was attached at the beginning of Martine [1740] 1772, chart facing p. 37. Courtesy of the British Library.
1. For more detail on the Celsius, Delisle, and Royal Society scales, see: Middleton 1966, 58-62, 87-89, 98-101; Van Swinden 1778, 102-106, 115-116, 221-238; and Beckman 1998. An original Delisle thermometer sent to Celsius by Delisle in 1737 is still preserved at Uppsala University; see Beckman 1998, 18-19, for a photo and description. The National Maritime Museum in Greenwich holds three Royal Society thermometers (ref. no. MT/Th.5, MT/BM.29, MT/BM.28), and also a late nineteenth-century thermometer graduated on the Delisle scale (ref. no. MT/Th.17(iv)).
2. For instance, Celsius's original scale was adopted in the meteorological reports from Uppsala, made at the observatory that Celsius himself had founded, for some time in the late 1740s. From 1750 we find the scale inverted into the modern centigrade scale. The Royal Society thermometer provided the chief British standard in the early eighteenth century, and it was sent out to agents in various countries who reported their meteorological observations to the Royal Society, which were summarized for regular reports in the Philosophical Transactions of the Royal Society of London. The use of the Royal Society scale is in evidence at least from 1733 to 1738. Delisle's scale was recognized widely and remained quite popular for some time, especially in Russia.
3. Private communication, 28 February 2001; I thank Professor Beckman for his advice.
Indeed, as we have seen in "Can Mercury Be Frozen?" and "Can Mercury Tell Us Its Own Freezing Point?" in chapter 3, in seventeenth- and eighteenth-century discussions of low-temperature phenomena, people freely spoke of the "degrees of cold" as well as "degrees of heat."4 The practical convenience of the Delisle scale in low-temperature work is obvious, as it gives higher numbers when more cooling is achieved. If we look back about a century before Celsius, we find that Father Marin Mersenne (1588-1648), that diplomat among scholars and master of "mitigated skepticism," had already devised a thermometer to accommodate all tastes, with one sequence of numbers going up and another sequence going down.5 Similarly, the alcohol thermometer devised by the French physicist Guillaume Amontons (1663-1738) had a double scale, one series of numbers explicitly marked "degrees of cold" and the other "degrees of heat" (see fig. 4.2).
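For readers who want the arithmetic of these inverted scales spelled out, here is a small illustrative sketch in Python. The original Celsius convention (boiling point of water at 0°, freezing point at 100°) is taken from the text above; the Delisle figures rest on the commonly cited convention of 0° at the boiling point and 150° at the freezing point of water, which is an added assumption, since the chapter does not specify the size of a Delisle degree.

    # Converting modern centigrade readings into the "upside-down" scales
    # discussed above. The inverted Celsius convention (0 at boiling, 100 at
    # freezing) is stated in the text; the Delisle convention used here
    # (0 at boiling, 150 at freezing, i.e. 1.5 Delisle degrees per centigrade
    # degree) is the commonly cited one, assumed for illustration.

    def to_original_celsius(t_centigrade):
        """Celsius's own scale: boiling water = 0 degrees, freezing water = 100."""
        return 100.0 - t_centigrade

    def to_delisle(t_centigrade):
        """Delisle scale, on the usual convention: 0 at boiling, 150 at freezing."""
        return 1.5 * (100.0 - t_centigrade)

    # Boiling water, freezing water, and (roughly) the freezing point of mercury:
    for t in (100.0, 0.0, -38.8):
        print(t, to_original_celsius(t), to_delisle(t))
    # Both inverted scales assign larger numbers to colder states, which is
    # why the Delisle scale was convenient in low-temperature work.

The Royal Society scale is left out of the sketch because the text records only its zero point ("extream heat," around 90°F), not the size of its degrees.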
The history of cold is worth examining in some more detail, since it is apt to shake us out of theoretical complacency regarding the question of what thermometers measure. There have been a number of perfectly capable philosophers and scientists through the ages who regarded cold as just as real as heat, starting with Aristotle, who took cold and hot as opposite qualities on an equal footing, as two of the four fundamental qualities in the terrestrial world. The mechanical philosophers of the seventeenth century were not united in their reactions to this aspect of Aristotelianism. Although many of them subscribed to theories that understood heat as motion and cold as the lack of it, the mechanical philosophy did not rule out giving equal ontological status to heat and cold. In the carefully considered view of Francis Bacon (1561-1626), heat was a particular type of expansive motion and cold was a similar type of contractive motion; therefore, the two had equal ontological status. Robert Boyle (1627-1691) wanted to rule out the positive reality of cold, but had to admit his inability to do so in any conclusive way after honest and exhaustive considerations. The French atomist Pierre Gassendi (1592-1655) had a more complex mechanical theory, in which "calorific atoms" caused heat by agitating the particles of ordinary matter; Gassendi also postulated "frigorific atoms," whose angular shapes and sluggish motion made them suited for clogging the pores of bodies and damping down the motions of atoms.6
4. For example, Bolton (1900, 42-43) quotes Boyle as saying in 1665: "The common instruments show us no more than the relative coldness of the air, but leave us in the dark as to the positive degree thereof. …" Similarly, the records of the Royal Society (Birch [1756-57] 1968, 1:364-5) state that at the meeting of 30 December 1663 a motion was made "to make a standard of cold," upon which Hooke suggested "the degree of cold, which freezes distilled water." Even Henry Cavendish, who had quite a modern-sounding theory of heat, did not hesitate to title his article of 1783: "Observations on Mr. Hutchins's Experiments for Determining the Degree of Cold at which Quicksilver Freezes."