How Big is Big and How Small is Small

by Timothy Paul Smith

Figure 5.1 Reversible and nonreversible collisions on a pool table. Top: Newton’s laws of motion describe balls colliding with each other, a process which is time reversible. Bottom: statistical mechanics describes systems that evolve from order to disorder and which are not time reversible.

  Now imagine that we could take every ball and turn it around and have each one roll back over the path it had just traced out. A ball that was headed north would now be rolling south; a ball that skidded east would now be skidding west. It would start out looking chaotic, but each ball would be following Newton’s laws. As time continued the balls would collect at the center of the table, form a triangle, and kick the cue ball out. Newton’s laws allow this, but thermodynamics does not. If you saw a video of balls rolling into a formation you would know that someone had just reversed the video.

  Here is a second example. Take six pennies and a die and start with all the pennies heads up (see Figure 5.2). This is a highly ordered state. Now roll the die and when a 2 turns up turn the second penny over. Now keep rolling the die and turning pennies. At the beginning you had six heads, then five, then four and then five (you just rolled a second 2). After a while all the order is lost and you usually have three heads and three tails, but four heads is not unheard of. In the language of Boltzmann, all heads (or all tails) has the lowest entropy, because there is only one way (or microstate) of having this arrangement. The situation with three heads and three tails has the highest entropy because there are 20 ways of having this arrangement.
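
  A reader who wants to watch the order fade can run this toy model on a computer. Here is a minimal Python sketch; the number of rolls is an arbitrary choice:

    import random

    def penny_die_model(n_pennies=6, n_rolls=60):
        """Start with all pennies heads up, then repeatedly roll a die and
        turn over the penny whose number comes up, recording the head count."""
        heads = [True] * n_pennies             # the ordered starting state
        counts = []
        for _ in range(n_rolls):
            i = random.randrange(n_pennies)    # the die picks one penny at random
            heads[i] = not heads[i]            # turn that penny over
            counts.append(sum(heads))          # how many heads remain
        return counts

    print(penny_die_model())   # typically settles to hovering around three heads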

  In this toy statistical mechanics model the pennies occasionally become more ordered and entropy decreases. But this occurrence is in conflict with the second law. Boltzmann’s statistical mechanics actually predicts that sometimes entropy will decrease. So which is right, statistical mechanics or thermodynamics?

  Figure 5.2 Order as shown with a die and pennies. Order generally decreases, but occasionally it can increase and entropy decrease.

  Back in our toy experiment with pennies and a die, if I have three heads and three tails there is about a 3% chance that in three turns I will return to that orderly state of all heads. But this was only with six pennies. If instead I start with a dozen pennies with half heads and half tails it would take six turns to get to the maximally ordered state, but the probability is only about 0.024%, or 1 in 4000. On our pool table, with fifteen balls, the probability of returning to the initial ordered state is incredibly low because we are not just dealing with a two-state, heads-or-tails system, but instead we have continuous numbers to describe positions and velocities of each ball. In the real world, where the objects are molecules and the number of them is given by Avogadro’s number, the probability of returning to an ordered state is vanishingly small.
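
  Those percentages can be checked directly. Starting from half heads and half tails, the only way to reach all heads in the minimum number of turns is for the die to pick each tail-showing penny exactly once, in any order; for the dozen pennies this assumes a die that can select any one of the twelve. A short Python check:

    from math import factorial

    def prob_quickest_reorder(n_pennies, n_tails):
        """Chance that n_tails consecutive rolls hit each tail-showing penny exactly once."""
        return factorial(n_tails) / n_pennies ** n_tails

    print(prob_quickest_reorder(6, 3))     # about 0.028, the "roughly 3%" for six pennies
    print(prob_quickest_reorder(12, 6))    # about 0.00024, or 1 in roughly 4,000, for a dozen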

  We now understand that Boltzmann’s statistical mechanics is right and does form the theoretical basis of thermodynamics. In a small space, for a short time, entropy may decrease, but we will never notice it.

  This is where Boltzmann’s story should end. The Austrian mathematical physicist, with a growing beard and failing eyesight, should have been able to rest on his laurels. He had given us two equations and a constant that bear his name. First, the Maxwell–Boltzmann distribution, which told us how energy is distributed among molecules at a certain temperature. Second, Boltzmann’s entropy equation told us that entropy is proportional to the logarithm of the number of microstates of the same energy. Finally, he gave us Boltzmann’s constant, or kB as we often write it. Boltzmann’s constant is roughly the amount of energy you need to give to one molecule to raise its temperature by 1°C. Boltzmann’s insight was that the world was really made up of a huge number of atoms and molecules and to understand heat you needed to turn to a statistical perspective of that micro-world. Boltzmann locked heat, energy and entropy together at the atomic and molecular level.
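
  In symbols, Boltzmann’s entropy equation reads S = kB ln W, where W is the number of microstates of an arrangement: for the six pennies of Figure 5.2, W = 1 for all heads and W = 20 for three heads and three tails, which is exactly why the mixed arrangement carries the higher entropy.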

  Boltzmann once wrote:

  In the restaurant of the Nordwestbahnhof I consumed a leisurely meal of tender roast pork, cabbage and potatoes and drank a few glasses of beer. My memory for figures, otherwise tolerably accurate, always lets me down when I am counting beer glasses.

  From: A German Professor’s Journey Into Eldorado

  Boltzmann was waiting for a train in Vienna to start his trip to Berkeley in California; he also traveled to Cambridge where he and his ideas were well received. I think he should have been able to spend his later years miscounting beer glasses in Austria, but he had one staunch nemesis in his hometown.

  ***

  At this time Vienna was an intellectual hotbed, an academic carnival. This was the time when Sigmund Freud was opening our eyes to the mind and in Boltzmann’s own department Ernst Mach (1838–1916) was challenging the way we look at the world. Mach was one of the precursors of the logical positivists and the Vienna Circle of the 1920s. He called for science to confine itself to what was directly observable. This meant that Mach rejected “hypothetical” particles such as atoms and molecules since they could not be directly observed. Because of this he ridiculed Boltzmann’s work and claimed it had no role in a true scientific discussion.

  Mach’s point of view sounds so foreign to us that it is easy to dismiss him as an eccentric. How could any sensible thinker reject Dalton’s atom and the success of nearly a century of chemistry? But Mach was no lightweight and had solid physics and intellectual credentials. He was trained in Vienna under Joseph Stefan a few years before Boltzmann. He also went to Graz and then spent a number of years as a professor at Prague before being called back to Vienna. We still associate his name with the speed of sound and Einstein cited Mach’s principle as an important inspiration for general relativity. But Mach would not accept the results of Boltzmann and the two men became locked in philosophical combat. It may have been this conflict, in part, which led Boltzmann in 1906 to take his own life.

  One of the tragedies of his death was that in 1906 the tide was turning. Max Planck was describing light in terms of atom-like oscillators and Einstein was describing Brownian motion, the jittering of microscopic dust as the result of atomic and molecular collisions. At the time of his suicide the atomic evidence was mounting and the scientific world was ready to follow Boltzmann’s lead.

  ***

  Max Karl Ernst Ludwig Planck (1858–1947) was a very cautious and reluctant convert to Boltzmann’s ideas. He had been trained in thermodynamics and even wrote his thesis on the second law, but his was a classical view. In the 1890s Planck, then at the University of Berlin, became involved in the problem of blackbody radiation. This problem is built upon a curious observation. When you take a piece of metal and heat it until it is hot enough it will start to glow. If you have seen a blacksmith pulling a piece of iron out of a forge it is easy to picture the red or even white light the hot iron radiates. The color it glows in fact tells you the temperature of the metal. What is surprising is that the color is independent of the type of metal. Take a piece of steel and heat it to 900°C and it glows orange. Take a piece of copper that starts out looking very different from steel, heat it to 900°C and it too will glow with the exact same color. In fact, start with anything, including an ideal blackbody with no intrinsic color, and if it is 900°C it will glow orange. At 700°C it will glow red and at 1300°C yellowish white. Gustav Kirchhoff had pointed this out in about 1860 and suggested that this observation was a hint to something deeper in nature.

  In 1894 Max Planck was commissioned by the local power company to investigate this problem with the idea that the light bulb design could be optimized. Planck was not the first one to try to put an equation on the blackbody spectrum. In 1884, Joseph Stefan had published the Stefan–Boltzmann law, which described the energy radiated as a function of the temperature; in 1896 Wilhelm Wien published a description of the blackbody spectrum that was good, but not quite right. Also Wien’s law was descriptive, but lacked a firm theoretical foundation.
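
  For reference, the Stefan–Boltzmann law takes a simple form: the total power a blackbody radiates per unit area grows as the fourth power of its absolute temperature, P/A = σT⁴, with σ ≈ 5.7 × 10^−8 W/(m²·K⁴).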

  Planck went back to basics. He knew thermodynamics and he knew Maxwell’s equations of electromagnetic radiation, which form the foundational theory for light. He had the classical explanation of heat and light at his fingertips and he tried to weld them together, but it did not work and so Planck continued to search. He was also very aware of the work of Heinrich Hertz; Planck had been Hertz’s student in Berlin shortly after Hertz did his pioneering radio work. So Planck knew that radio waves and any other type of electromagnetic radiation, including light, can be produced when charges oscillate, like currents in an antenna. This raised the question: what were the oscillators in blackbody radiation? Maxwell’s equations described light as the oscillation of electric and magnetic fields. What Planck needed was to break fields into tiny pieces in the same way that matter was made of atoms. Planck called this piece of light a quantum, what we now refer to as a photon. His final result, an equation for the intensity of light,

  I(λ, T) = (2hc²/λ⁵) · 1/(e^(hc/(λkBT)) − 1)

was inspired by and dependent upon Boltzmann’s work (see Figure 5.3). The c is the speed of light, λ is the wavelength (think color), kB is Boltzmann’s constant and T is the temperature. There remains one mysterious constant: h. We now call this Planck’s constant.
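
  To get a feel for the formula, here is a minimal Python sketch, using standard textbook values for h, c and kB, that evaluates the spectrum at the blacksmith’s temperatures and locates the peak of each curve:

    import numpy as np

    h, c, kB = 6.626e-34, 2.998e8, 1.381e-23    # J*s, m/s, J/K

    def planck(lam, T):
        """Spectral radiance of a blackbody at wavelength lam (m) and temperature T (K)."""
        return (2 * h * c**2 / lam**5) / np.expm1(h * c / (lam * kB * T))

    lam = np.linspace(0.1e-6, 10e-6, 20000)     # 0.1 to 10 micrometres
    for T_celsius in (700, 900, 1300):
        T = T_celsius + 273                     # convert to kelvin
        peak = lam[np.argmax(planck(lam, T))]
        print(T_celsius, "deg C: peak near", round(peak * 1e6, 2), "micrometres")

  The peaks land in the infrared; as the caption of Figure 5.3 notes, the visible glow we see is set by the short-wavelength side of each curve and by how the eye weights different colors.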

  Figure 5.3 The distribution of light described by Planck’s law. The distribution of light from a hot object will shift towards blue as it gets hotter. The peaks of the curves do not match the colors seen at the blacksmith because the eye does not weight all colors of light the same.

  Experimental measurements were very good, much better than any theoretical prediction before Planck. They were also very extensive. Experimentalists could set the temperature and measure how much red, orange, or blue light there was. They could even measure the amount of infrared and ultraviolet. And then they could change the temperature and measure all of these intensities again. The experimentalists had dozens of numbers and Planck had one equation with two unknown constants: Planck’s (h) and Boltzmann’s (kB). With all that data you could use some of it to fit these numbers and the rest to confirm them. The match was perfect! Not only were the two constants well determined, but the amplitude and shape of the spectrum, the distribution of the colors of light, was just right.
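
  The original data are not reproduced here, but the logic of such a fit can be sketched in Python: manufacture a stand-in spectrum from textbook values of the two constants at a single assumed temperature, then let a least-squares routine recover them. The function name, the 1200 K temperature and the wavelength range are all arbitrary choices for this illustration.

    import numpy as np
    from scipy.optimize import curve_fit

    c = 2.998e8            # speed of light, m/s
    T = 1200.0             # assumed known temperature of the glowing body, in kelvin

    def planck_model(lam, a, b):
        """Planck's law with h = a * 1e-34 J*s and kB = b * 1e-23 J/K,
        so that both fit parameters are of order one."""
        h, kB = a * 1e-34, b * 1e-23
        return (2 * h * c**2 / lam**5) / np.expm1(h * c / (lam * kB * T))

    lam = np.linspace(0.5e-6, 5e-6, 60)             # wavelengths, m
    spectrum = planck_model(lam, 6.626, 1.381)      # stand-in "measurements"

    # Fit with relative weighting so every wavelength counts about equally
    (a_fit, b_fit), _ = curve_fit(planck_model, lam, spectrum,
                                  p0=(5.0, 1.0), sigma=spectrum)
    print("h  ~", a_fit, "x 1e-34 J*s")             # recovers ~6.626
    print("kB ~", b_fit, "x 1e-23 J/K")             # recovers ~1.381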

  The 1901 paper in which Planck first published his equation is often cited as the beginning of quantum mechanics because it postulated that the electromagnetic fields, which give us light, are made up of quanta or photons. The 1901 paper also contains the first appearance of h, which sets the most basic unit of energy in an oscillator. As a side note, sometimes Planck’s constant is written as ħ (“h-bar”). This is really the reduced Planck’s constant, and is related to Planck’s constant by a factor of 2π: h = 2πħ. Numerically:

  h = 6.6 × 10^−34 J·s

  where J stands for joules, the basic metric unit of energy, and s is seconds.

  Planck’s constant shows up in essentially every quantum mechanical equation, for example in Planck’s Relation:

  E = hf

  Here E is energy and f is the frequency of light. For instance, if I have a red laser pointer with wavelength λ = 630 nm = 6.3 × 10^−7 m, then the energy of one photon from that pointer is E ≈ 3.2 × 10^−19 J, which does not sound like a lot. But if this is a 1 mW laser, it is producing about 3 × 10^15 photons each second. I could also apply this to a kitchen microwave oven. A typical oven produces microwaves at 2.45 GHz (2.45 × 10^9 cycles per second) and applies about 700 W of energy to the food. That works out to only about 1.6 × 10^−24 J per photon, but with about 4 × 10^26 photons per second.
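
  Those numbers are easy to reproduce from E = hf; a quick check in Python, with the 1 mW and 700 W power figures taken from the text:

    h, c = 6.626e-34, 2.998e8       # Planck's constant (J*s) and the speed of light (m/s)

    E_red = h * c / 630e-9          # one 630 nm photon: about 3.2e-19 J
    print(E_red, 1e-3 / E_red)      # photon energy and photons per second from a 1 mW laser

    E_micro = h * 2.45e9            # one 2.45 GHz photon: about 1.6e-24 J
    print(E_micro, 700 / E_micro)   # photon energy and photons per second from a 700 W oven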

  ***

  There is a debate among people who study the history of science as to who was the first person to really understand what a quantum was and what quantization really meant. Some will point to Planck’s 1901 paper and say, “There it is. Planck must have understood what a quantum was to be able to derive his results.” In later years he would write that at the time he had seen quanta not as real and revolutionary, but rather as something that would solve a problem. I am reminded of Murray Gell-Mann, one of the originators of the quark hypothesis, who at one time referred to quarks as a convenient mathematical construct that was to be discarded after the desired results were obtained. Gell-Mann eventually saw quarks as real, and Planck eventually embraced the reality of photons and quanta.

  If it was not Planck who first understood the significance of quanta, who was it? Most readers of science history agree that by 1905 Einstein appreciated that they were real. Whereas Planck used h to describe one effect—blackbody radiation—Einstein successfully applied it to an independent problem, the photoelectric effect. In truth the acceptance of quanta and quantum mechanics went through fits and starts and really was not of a form we would now recognize until the 1920s. But it really was revolutionary and one should be slow and cautious when embracing radical ideas. Planck was telling us that Maxwell’s electromagnetic equations, a true Victorian tour de force, were not the last words on light. The world on the microscopic, or even sub-microscopic level was a bit different than what we were used to, and a bit stranger.

  ***

  Planck’s constant, h = 6.6 × 10^−34 J·s, looks like a conversion factor between energy and frequency of oscillation, but it is deeper than that. Planck’s equation works because there are distinct, integer numbers of photons. The fields are not continuous at the smallest scale. That really is at the heart of the revolution and h is just the signature of these discrete bits. h, with its 10^−34, sure looks like a small number, but that is in part because of the units we have chosen to measure energy and time in. If we had picked electron-volts (eV), a unit useful in atomic physics, we would find that h = 4.1 × 10^−15 eV·s, still a small number, but not quite so frightening. If we also chose the lifetime of an exotic nuclear particle, the delta (tΔ = 10^−20 s), as our timescale, then we would find that h = 4.1 × 10^5 eV·tΔ, or 410,000. So the size of Planck’s constant is somewhat artificial. I would, however, like to compare it to a meter, or the size of an atom. In that case it is still very small.
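
  The unit juggling above is a one-line exercise, assuming the usual conversion 1 eV = 1.6 × 10^−19 J:

    h_Js = 6.6e-34                   # Planck's constant in J*s
    eV = 1.602e-19                   # one electron-volt expressed in joules
    h_eVs = h_Js / eV                # about 4.1e-15 eV*s
    t_delta = 1e-20                  # the delta lifetime used as a timescale, in seconds
    print(h_eVs, h_eVs / t_delta)    # about 4.1e-15 eV*s, and about 4.1e5 in eV per delta lifetime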

  At about the same time as Planck introduced his constant he also proposed a set of natural units, a set of length, time, mass/energy, charge and temperature that were not based on macroscopic standards like the meter, but rather on intrinsic constants of nature. The Planck length is defined as

  ℓP = √(ħG/c³) ≈ 1.6 × 10^−35 m

and is illustrated in Figure 5.4. The length itself is defined in terms of the reduced Planck’s constant (ħ), Newton’s gravitational constant (G) and the speed of light, all universal constants.
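
  Plugging in textbook values for the three constants confirms the size quoted above:

    import math

    hbar = 1.055e-34    # reduced Planck's constant, J*s
    G = 6.674e-11       # Newton's gravitational constant, m^3/(kg*s^2)
    c = 2.998e8         # speed of light, m/s

    print(math.sqrt(hbar * G / c**3))   # about 1.6e-35 m, some twenty orders of magnitude below a proton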

  Figure 5.4 The Planck length. The Planck length is much smaller than everything else. It is 20 orders of magnitude smaller than a proton. Our biggest accelerator probes structures halfway in size between humans and the Planck length.

  The significance of the Planck length is not clear even now, but there are reasons to think it may be the ultimate distance standard at small scales. Nothing can interact with a particle smaller than a Planck length, as we will see in Chapter 11.

  At the beginning of this book I said that the smallest scale we could probe, the present experimental limit, was a bit less than 10^−18 m. These measurements involve the energy of our largest accelerators. The Planck length is over a quadrillion times smaller, so I do not expect direct experiments on the Planck scale in the near future. Only time will tell us what is the smallest, most tiny, final iota of nature.

  6

  The Sand Reckoner

  In the middle of the poem “The Walrus and the Carpenter,” Lewis Carroll poses a nice little arithmetical problem. The Walrus and the Carpenter are walking along a seaside beach, gazing at the sand.

  “If seven maids with seven mops

  Swept it for half a year,

  Do you suppose,” the Walrus said,

  “That they could get it clear?”

  “I doubt it,” said the Carpenter,

  And shed a bitter tear.

  Through the Looking-Glass

  Lewis Carroll

  Who is right? Is it the optimistic Walrus who suspects that with a bit of backbone and elbow grease the beach could be cleaned up and made into a tidy place? Or is the Carpenter right, that dour soul who totes a bag full of tools? Is the task too much? It is not very difficult to imagine that Carroll, in his other persona as a lecturer of mathematics at Christ Church college in Oxford, may have done the calculation himself. We, who have already calculated the number of cups in the ocean, should not shy away from the question, “can the maids clean the beach?”

  However, right from the first line I am struck with a problem. Can you really “sweep” with a mop? And if we assume we can, how much? If you were sweeping a floor you would expect less than a cup of sand in a minute. But if you swept a beach? I am going to be brazen here (we are talking about Lewis Carroll) and just postulate that they could sweep up between 1 and 10 l of sand a minute. Admittedly, the 10-l estimate might only happen if the maids traded in their mops for shovels. I do not think this too outrageous a suggestion, especially if the maids are enterprising and really trying to clean the beach in half a year.

  Seven maids working for half a year could mean 8 hours a day, five days a week, which, with a few holidays, adds up to about 1,000 hours per maid, or 7,000 maid-hours, or 420,000 maid-minutes. Alternatively it could mean seven maids continuously laboring away, or perhaps a platoon of maids working in shifts, twenty-four hours a day, seven days a week, for 182.5 days. That maximal labor is then 30,660 maid-hours or 1,839,600 maid-minutes. So at the low end of our estimate we expect that the maids could clear away 420,000 l or 420 m³ of sand. At the other extreme (maids in shifts with shovel-like mops) they may remove 18,396,000 l or 18,396 m³ of sand. This is a wide range of solutions, which we now need to compare to the beach itself.
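
  The bookkeeping is simple enough to put in a few lines of Python; the sweeping rates of 1 and 10 litres per maid-minute are the guesses made above:

    maid_minutes_low  = 7 * 1000 * 60          # seven maids, about 1,000 working hours each
    maid_minutes_high = 7 * 24 * 182.5 * 60    # seven maids around the clock for half a year

    for litres_per_minute in (1, 10):          # the guessed sweeping rates
        low  = litres_per_minute * maid_minutes_low  / 1000   # cubic metres of sand
        high = litres_per_minute * maid_minutes_high / 1000
        print(litres_per_minute, "l/min:", low, "to", high, "cubic metres")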

  Lewis Carroll gives us only a few hints as to the amount of sand there is on this beach. Later in the poem he tells us

  The Walrus and the Carpenter

  Walked on a mile or so,

  And then they rested on a rock

  Conveniently low:

  So the beach is at least a mile long. Also, the presence of a rock may indicate that the sand is relatively shallow. If we use our maximal estimate of sand, 18,396 m³, then it could be distributed as a strip 1,600 m long (1 mile), 1 m deep and 11 or 12 m wide. If, however, we use our minimal estimate of 420 m³, that could cover a beach 1,600 m long, 10 cm deep and 2–3 m wide. To me this sounds like a pretty thin beach. Most beaches seem to me to be bigger than either of these estimates, and so I suspect that the Carpenter is right. However, there are a few beaches where the Walrus’s maids might just tidy things up a bit.
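
  The widths follow from dividing each volume of sand by a mile of beach and the assumed depth:

    mile = 1600.0                                           # beach length in metres
    for volume, depth in ((18396.0, 1.0), (420.0, 0.1)):    # cubic metres and metres, as above
        print(volume, "m^3 at", depth, "m deep:", round(volume / (mile * depth), 1), "m wide")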

 
