Heat flows from hot bodies to cold ones but not, evidently, the other way around. Entropie strebt einem Maximum zu, the German physicist Rudolf Clausius declared in 1865, coining a new word: “Entropy strives toward a maximum.” That maximum is achieved when heat spreads itself around as evenly and uniformly as possible. Put a couple of ice cubes in a summer drink. Heat flows from the liquid to the cold ice, so that the ice melts and the drink becomes cooler. In the process, entropy increases. If the ice cubes grew bigger while the cold drink around them began to boil, entropy would be decreasing, and that’s what the second law forbids.
Clausius and others propounded the second law before the nature of heat was truly understood. They took the law to be rigid and exact, as laws of physics are supposed to be. Heat always flows from hot to cold. Entropy can only increase.
The realization that heat is nothing but the motion of atoms seemed at first to clarify the second law. If a collection of fast-moving, therefore hot, atoms is mixed with a collection of slow-moving, therefore cold, atoms, it’s not hard to understand that random bashing about among the atoms would tend to slow down the fast ones and speed up the slow ones until all, on average, are moving at the same speed. The temperature would then be the same everywhere, and entropy would have been duly maximized. The opposite process—the fast atoms getting faster, taking energy from the slower ones, which become slower still—doesn’t seem plausible.
In 1872, the prickly, irascible Austrian physicist Ludwig Boltzmann proved a difficult mathematical theorem saying exactly this. He found a way to define entropy as a statistical measure of the motion of a collection of atoms, and showed that collisions among atoms would push entropy toward its maximum value. It’s from Boltzmann that we get the idea of entropy having to do with order or disorder. If, in some container of gas, all the fast atoms hung about one end while all the slow ones stayed at the other, that would be a state with an unusual degree of arrangement or orderliness. It would have low entropy. Let all the atoms mix, collide, and share their energy as equably as possible, and they attain a state of maximum entropy. The atoms are then at maximum disorder, in the sense that they are as randomly arranged as possible. Ignorance of what the atoms are up to is spread uniformly.
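Boltzmann’s statistical measure can be written compactly in the form it is usually quoted today—a form whose notation came later, from Max Planck, and which is not spelled out in this account:

\[ S = k \log W \]

Here W counts the number of distinct microscopic arrangements of the atoms compatible with the gas’s overall condition, and k is the constant now named for Boltzmann. A segregated state, fast atoms at one end and slow ones at the other, can be realized in comparatively few ways; a thoroughly mixed state can be realized in astronomically many, and so has the greater entropy. That is all “disorder” means here.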
But something about Boltzmann’s theorem didn’t seem right. The increase of entropy represents directionality—a process that always goes one way, never the other. Yet Newton’s laws, governing the movements of atoms, are thoroughly evenhanded with respect to time. A set of atomic motions, if played in a time-reversed manner, will still obey Newton’s laws. Mechanics contains no intrinsic distinction between past and future, whereas in Boltzmann’s theorem, elaborately derived from mechanics, that distinction mysteriously appears.
Not too many years after Boltzmann had proved his theorem, the French mathematician Henri Poincaré proved a theorem of his own that seemed to contradict Boltzmann. Applied to a set of atoms constituting a gas, Poincaré’s theorem said that every possible arrangement of atoms, corresponding to states with entropy high, low, and in between, must occur sooner or later, in the fullness of time. In which case, it would seem, entropy can and must decrease as well as increase.
Perplexities such as these led some physicists to an extreme viewpoint: atoms cannot be real, they said, since they lead to theoretical paradox. In some quarters this conclusion was warmly received. In the German-speaking world especially there had arisen a so-called positivist philosophy of science, whose adherents argued that atoms were illegitimate in the first place. Science, they said, should deal in what is visible and tangible, in what experimenters can directly observe and measure. That means atoms are at best speculative, and reasoning based on them is strictly hypothetical. Atoms are not, the positivists sternly maintained, the factual, trustworthy ingredients from which real science should be made.
Tortuous attempts to resolve the apparent contradiction between Boltzmann’s and Poincaré’s theorems only made the positivists happier. The gist of it is that Boltzmann’s theorem—because of certain assumptions he had been obliged to make in order to untangle the fearsome mathematics he had gotten himself into—is not exactly true. Generally, it’s far more likely that orderly arrangements of atoms will become disorderly than vice versa—but the latter is not completely ruled out.
With this qualification, physicists realized that their kinetic theory of heat was telling them something rather unexpected and subtle. It is not absolutely certain, they saw, that entropy must always increase, that heat must always flow from hot to cold. There is a chance, depending on the way atoms happen to crash around, that a bit of heat could move from a cool place to a hotter one, so that entropy would for a moment decrease. Probability comes irrevocably into the picture. Most of the time, everything will happen in the expected way. Collisions among atoms will almost always tend to increase disorder, and therefore increase entropy. But the reverse is not impossible, only unlikely.
This dubious and slippery conclusion outraged the positivists further. If a law of physics were to mean anything, they said, it must surely be definitive. To say that heat would most likely flow from hot to cold, but that it had some chance, no matter how tiny, of going the other way, was to make a mockery of scientific thinking. Still further reason to disbelieve in the fiction called atoms.
It became an urgent matter for the pro-atom physicists to bolster their case in a way that positivists would accept. In 1896 Boltzmann himself, in a reply to one of his critics, hit upon a straightforward and easy argument in favor of atoms. “The observed motions of very small particles in a gas,” he wrote, “may be due to the circumstance that the pressure exerted on their surfaces by the gas is sometimes a little greater, sometimes a little smaller.” In other words, because a gas is made of atoms, and because these atoms dance around in an erratic way, a small particle within the gas will be jostled unpredictably back and forth. This is precisely what Gouy, following Fathers Thirion and Delsaulx, had already said, but Boltzmann evidently knew nothing of their work. He was the first physicist of profound mathematical ability to hit upon the idea that Brownian motion provides direct visual evidence not only of the atomic nature of matter but of the randomness inherent in atomic motions.
This throwaway remark by Boltzmann caught no one’s attention, and indeed has barely been noticed by historians of science ever since. His casual manner suggests that he didn’t find the suggestion either novel or particularly important. Like Thirion, Delsaulx, and Gouy, he took it as unremarkable that molecular movement explained Brownian motion. Unlike those previous authors, who made vague references to the “law of large numbers,” Boltzmann had sufficient mastery of statistical theory that he could have attempted to calculate the expected magnitude of Brownian motion in terms of the underlying movements of atoms.
But he didn’t make the effort. Maxwell had failed to heed what Brownian motion was telling physicists. Now Boltzmann got the message but, perhaps thinking the point perfectly obvious, didn’t pursue it.
Another decade passed before the tale of Brownian motion reached its momentous conclusion, and it is at this point in our story that we first encounter the piercing intellect of Albert Einstein. In 1905 Einstein was a smart, trim twenty-six-year-old, working at the patent office in Bern because he had been unable to land an academic position. He had published a few papers but was far from well known in the physics community. That was about to change.
An admirer of Boltzmann’s dense and, frankly, long-winded monographs, Einstein had become fascinated by statistical questions in physics and by the attendant controversy over the existence of atoms. At some point he too realized that a suitably small particle immersed in a liquid would bounce about because of molecular collisions—exactly as Boltzmann had said, though it appears that Einstein, like everyone else, failed to notice this obscure remark of his predecessor. In any case, Einstein dug deeper. He wondered if the motion of a particle large enough to see in a microscope might constitute a direct and quantitative test of the atomic hypothesis—exactly what the positivists demanded but said was impossible. And so he decided to calculate the answer.
It was not a straightforward piece of reasoning. Gouy had realized that a Brownian particle ought to have, on average, the same energy of motion as the molecules of the liquid in which it was suspended. Those molecules, being many times less massive, would zip around very fast, while the Brownian particle would blunder about much more slowly. There should be a simple relationship between the average speed of the molecules and the average speed of the suspended particle. But the erratic nature of Brownian motion made it hard to define a particle’s average speed in a meaningful way, and an experimenter of the late nineteenth century was in no position to measure or record precisely how such a particle zigged and zagged.
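Gouy’s reasoning, restated here in modern notation rather than in his own words, rests on the equipartition of kinetic energy: in thermal equilibrium the suspended particle, of mass M, carries on average the same energy of motion as a single molecule, of mass m,

\[ \tfrac{1}{2} M \langle V^2 \rangle = \tfrac{1}{2} m \langle v^2 \rangle, \qquad \text{so that} \qquad \sqrt{\langle V^2 \rangle} = \sqrt{m/M}\,\sqrt{\langle v^2 \rangle}. \]

For a grain roughly a micron across, M is on the order of ten billion times the mass of a water molecule, so the grain’s typical thermal speed is smaller by a factor of about a hundred thousand. The difficulty is that this “typical speed” is exactly the quantity a jagged, ceaselessly redirected Brownian path does not let an observer read off.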
Ingeniously, Einstein took a different tack. He found a way to calculate not how fast a suspended particle would move but how far its hither-and-thither motions would cause it to drift in some period of time. For example, one could draw a small circle around the starting position of some particle and ask how long it should take, on average, to reach the circumference. In this way Einstein derived a theoretical result that could be put to practical scrutiny. Finally, almost eighty years after Robert Brown had given a scientific account of the motion of small particles suspended in fluid, Einstein provided the first quantitative treatment of its true cause. His clever analysis constituted one of the four historic papers he published in his annus mirabilis of 1905; in the others he propounded his special theory of relativity to what was then a mainly bewildered audience of physicists, and offered provocative notions about the true nature of light.
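The formula Einstein arrived at, given here in its familiar modern form, says that the mean-square displacement of the particle grows in proportion to the elapsed time:

\[ \langle x^2 \rangle = 2Dt, \qquad D = \frac{RT}{N_A}\,\frac{1}{6\pi\eta a}, \]

where \( \eta \) is the liquid’s viscosity, a the particle’s radius, T the temperature, R the gas constant, and \( N_A \) Avogadro’s number. Every quantity except \( N_A \) can be measured directly, so watching how far suspended particles wander in a known time amounts to counting the molecules in a mole of liquid—precisely the kind of quantitative test the positivists had said was out of reach.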
In a final exasperating irony, it turns out that Einstein, when he began his calculations, didn’t even know there was such a thing as Brownian motion. Only in the course of writing his paper did he discover that the phenomenon had been known to microscopists, botanists, and others for generations. In his introduction he cautioned that “it is possible that the motions to be discussed here are identical with the so-called ‘Brownian molecular motion’; however, the details I have been able to ascertain regarding the latter are so imprecise that I can form no judgment in the matter.”
Three years later, in 1908, the French physicist Jean Perrin performed a series of careful experiments to measure Brownian motion and compare the findings with Einstein’s theory. It all matched up, and Perrin’s work is often cited as the crucial, crushing evidence for the existence of atoms. For most physicists this came as no surprise but rather was pleasant confirmation of what they had long believed. Even the most die-hard positivist opponents of atomism, with one or two exceptions, now had to give way.
From this point on, atoms were undeniably real. At the same time statistical thinking was cemented firmly into place as an essential part of physical theorizing. The two were inextricably tied together. Those who had been espousing kinetic theory for years took satisfaction in this development: any useful account of atoms necessarily involves statistical reasoning. The chancy nature of the second law of thermodynamics—entropy almost always rises—was here to stay.
Even so, determinism survived—or seemed to. To Einstein, certainly, the appeal of statistical reasoning was precisely that it allowed the physicist to make quantitative statements about the behavior of crowds of atoms, even while the motion of individual atoms remained beyond the observer’s ken. What mattered was only that those motions followed strict and unerring rules. Nature, at bottom, remained intrinsically deterministic. The problem is that scientific observers cannot gather all the information they would need to fulfill Laplace’s ideal of total knowledge leading to perfect predictability.
Without altogether appreciating what had happened, physicists had subtly revised their estimation of what a theory meant. Until this time, a theory was a set of rules that accounted for some set of facts. Between theory and experiment there existed a direct, exact two-way correspondence. But that was no longer quite the case. Theory now contained elements that the physicists were sure existed in reality, but which they couldn’t get at experimentally. For the theorist, atoms had definite existence, and had definite positions and speeds. For the experimenter, atoms existed only inferentially, and could be described only statistically. A gap had opened up between what a theory said was the full and correct picture of the physical world and what an experiment could in practice reveal of that world.
What was lost, then, was not the underlying ideal of a deterministic physical world but the Laplacian hope for perfectibility in the scientific accounting of that world. The universe unfolds imperturbably according to its inner design. Scientists could legitimately hope to understand that design fully. What they could no longer attain, it seemed, was complete knowledge of how that design was realized. They could know the blueprint, but not the shape and color of every brick.
One commentator who glimpsed this difficulty was the historian Henry Adams, whose idiosyncratic autobiography, The Education of Henry Adams, depicts a man of old-fashioned classical wisdom, a scholar of politics and culture and religion, struggling to stay on his feet in a world increasingly driven by science and technology. It wasn’t that he was opposed to science, rather that he found its grandiosity and reach forbidding and more than a little alarming.
Adams heard of the advance of statistical reasoning in physics and found it perplexing in a way that most scientists did not care to think about. Science aimed for completion and perfection, of course, but now, as Adams loftily put it, “the scientific synthesis commonly called Unity was the scientific analysis commonly called Multiplicity.” It seemed to him, in his somewhat overheated way, that kinetic theory was but a step away, philosophically, from chaos and anarchy. What was the meaning of the quest for unity and synthesis in science if the power of prediction would from now on only ever be approximate?
Adams quizzed his scientifically and philosophically minded friends, but, he lamented, “here everybody flatly refused help.” Perhaps they couldn’t grasp what he was getting at. Adams had a fondness for enigmatic, obscure oratory. To scientists it only seemed that their statistical theories actually gave them a greater grasp of the universe and an increased power of prediction. They understood more now than they had before, and would understand still more in the future. Any losses seemed conceptual, metaphysical, philosophical—and therefore of no scientific account.
Chapter 3
AN ENIGMA, A SUBJECT OF PROFOUND ASTONISHMENT
It would be more accurate to say, by the first decade of the twentieth century, that science had amassed a surfeit of atoms, all doing distinct jobs and with no clear kinship between them. Of some standing were the chemists’ atoms, the indivisible units of matter that participated in reactions and joined together to form molecules. Not quite as venerable were the physicists’ kinetic atoms, those prototypical billiard balls that in their random crashing around gave substance to the laws of heat. Between these two atoms, from a theoretical perspective, there was essentially no point of contact. And in 1896 a new task had been piled onto the already overburdened atom.
Henri Becquerel’s discovery of radioactivity offers eloquent testimony to the power of serendipity. On the first day of January 1896, a German physicist by the name of Wilhelm Röntgen sent to his colleagues across Europe details of an astonishing observation. To prove his point, he included a photograph of his wife’s hand—or rather, an eerie likeness of the bones of her hand, with flesh discernible as a faint halo and with the unmistakable shadow of her wedding ring loosely orbiting the skeletal third finger. This was the world’s first X-ray image, and it set off a sensation not only among scientists but also in the newspapers, which raced to print pictures of bones, nails accidentally embedded in limbs, and internal skeletal deformities of one kind or another.
Röntgen’s discovery was itself purely accidental. He had noticed a strange glow on a phosphorescent screen near an electrical discharge tube in his laboratory and, investigating further, had seen a bony shadow spring into visibility when he placed his hand between tube and screen. Physicists, it turned out, had been making X-rays for years without knowing it. Once the news was out, labs around the world set out to explore these unseen penetrating rays. In time they were shown to be a kind of electromagnetic radiation of much shorter wavelength than visible or ultraviolet light.
Becquerel, seeing X-ray images at a meeting of the French Academy of Sciences in Paris early in 1896, followed a hunch. He was the son and grandson of distinguished Parisian physicists, all graduates of the École Polytechnique, all members of the Académie des Sciences, and all, one after the other, occupants of the chair of physics at the Muséum d’Histoire Naturelle. Henri’s son Jean, in due course, would follow the same path. The various Becquerels investigated electricity, chemistry, and sunlight, among other things, but one particular interest had become a family tradition. They all studied fluorescence, the phenomenon by which certain minerals, after exposure to strong sunlight, are then seen to emit a faint luminosity of their own when taken into the dark. Henri’s father had established himself as an expert especially on the fluorescence of uranium-bearing minerals.
Hearing about X-rays, Henri Becquerel wondered if this curious new emanation had any connection to the fluorescence he knew so much about. His first experiments seemed to confirm that suspicion. He took a variety of fluorescent minerals, including potassium uranyl sulfate (a special favorite of his father’s), placed them on photographic plates wrapped tightly in thick black paper, and set the samples in bright sunlight to activate their fluorescence. Developing the plates after a few hours, he found that the one beneath the uranium-containing mineral had been fogged by some emanation that had penetrated the opaque paper. This rock, he concluded, activated by sunlight, was giving off X-rays.