by Isaac Asimov
When a solid is heated to a point where the to-and-fro trembling is strong enough to break the bonds that hold neighboring molecules together, the solid melts and becomes a liquid. The stronger the bond between neighboring molecules in a solid, the more heat is needed to make the molecules vibrate violently enough to break the bonds. Hence, the substance has a higher melting point.
In the liquid state, the molecules can move freely past one another. When the liquid is heated further, the movements of the molecules finally become sufficiently energetic to set them free of the body of the liquid altogether, and then the liquid boils. Again, the boiling point is higher where the intermolecular forces are stronger.
In converting a solid to a liquid, all of the energy of heat goes into breaking the intermolecular bonds. Thus, the heat absorbed by melting ice does not raise the ice’s temperature. The same is true of a liquid being boiled.
Now we can distinguish between heat and temperature easily. Heat is the total energy contained in the molecular motions of a given quantity of matter. Temperature represents the average energy of motion per molecule in that matter. Thus, a quart of water at 60° C contains twice as much heat as a pint of water at 60° C (twice as many molecules are vibrating), but the quart and the pint have the same temperature, for the average energy of molecular motion is the same in each case.
There is energy in the very structure of a chemical compound—that is, in the bonding forces that hold an atom or molecule to its neighbor. If these bonds are broken and rearranged into new bonds involving less energy, the excess of energy will make its appearance as heat or light or both. Sometimes the energy is released so quickly as to result in an explosion.
It is possible to calculate the chemical energy contained in any substance and show what the amount of heat released in any reaction must be. For instance, the burning of coal involves breaking the bonds between carbon atoms in the coal and the bonds between the oxygen molecules’ atoms, with which the carbon recombines. Now the energy of the bonds in the new compound (carbon dioxide) is less than that of the bonds in the original substances that formed it. This difference, which can be measured, is released as heat and light.
In 1876, the American physicist Josiah Willard Gibbs worked out the theory of chemical thermodynamics in such detail that this branch of science was brought from virtual nonexistence to complete maturity at one stroke.
The long paper in which Gibbs described his reasoning was far above the heads of others in America and was published in the Transactions of the Connecticut Academy of Arts and Sciences only after considerable hesitation. Even afterward, its close-knit mathematical argument and the retiring nature of Gibbs himself combined to keep the subject under a bushel basket until Ostwald discovered the work in 1883, translated the paper into German, and proclaimed the importance of Gibbs to the world.
As an example of the importance of Gibbs’s work, his equations demonstrated the simple, but rigorous, rules governing the equilibrium between different substances existing simultaneously in more than one phase (that is, in both solid form and in solution, in two immiscible liquids and a vapor, and so on). This phase rule is the breath of life to metallurgy and to many other branches of chemistry.
Mass to Energy
With the discovery of radioactivity in 1896 (see chapter 6), a totally new question about energy arose at once. The radioactive substances uranium and thorium were giving off particles with astonishing energies. Moreover, Marie Curie found that radium was continually emitting heat in substantial quantities: an ounce of radium gave off 4,000 calories per hour, and would do so hour after hour, week after week, decade after decade. The most energetic chemical reaction known could not produce 1 millionth of the energy liberated by radium. Was the law of conservation of energy being broken?
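To put that heat output in modern units, a quick back-of-the-envelope conversion (assuming the thermochemical calorie of 4.184 joules, a conversion factor not given in the text):

```python
# Convert radium's quoted heat output into watts.
# The "4,000 calories per hour per ounce" figure is taken from the text;
# the calorie-to-joule factor is an assumed standard value.
CAL_TO_J = 4.184          # joules per thermochemical calorie
cal_per_hour = 4000       # heat given off by one ounce of radium

joules_per_hour = cal_per_hour * CAL_TO_J
watts = joules_per_hour / 3600   # one hour = 3,600 seconds

print(f"{watts:.2f} W per ounce")  # a steady few watts, decade after decade
```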
And no less surprising was the fact that this production of energy, unlike chemical reactions, did not depend on temperature: it went on just as well at the very low temperature of liquid hydrogen as it did at ordinary temperatures!
Quite plainly an altogether new kind of energy, very different from chemical, was involved here. Fortunately physicists did not have to wait long for the answer. Once again, it was supplied by Einstein, in his Special Theory of Relativity. Einstein’s mathematical treatment of energy showed that mass can be considered a form of energy—a very concentrated form, for a very small quantity of mass would be converted into an immense quantity of energy.
Einstein’s equation relating mass and energy is now one of the most famous equations in the world. It is:
e = mc²
Here e represents energy (in ergs), m represents mass (in grams) and c represents the speed of light (in centimeters per second). Other units of measurement can be used but would not change the nature of the result.
Since light travels at 30 billion centimeters per second, the value of c² is 900 billion billion; or, in other words, the conversion of 1 gram of mass into energy will produce 900 billion billion ergs. The erg is a small unit of energy not translatable into any common terms, but we can get an idea of what this number means when we know that the energy in 1 gram of mass is sufficient to keep a 1,000-watt electric-light bulb running for 2,850 years. Or, to put it another way, the complete conversion of 1 gram of mass into energy would yield as much as the burning of 2,000 tons of gasoline.
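The bulb figure can be checked directly. A minimal sketch in CGS units, using the standard relation 1 joule = 10⁷ ergs (a conversion not spelled out in the text):

```python
# e = m * c**2 for one gram, in CGS units (ergs, grams, cm/s).
c = 3e10                      # speed of light, centimeters per second
m = 1.0                       # mass, grams

ergs = m * c**2               # 9e20 ergs = 900 billion billion ergs
joules = ergs * 1e-7          # 1 joule = 10**7 ergs

# A 1,000-watt bulb consumes 1,000 joules every second.
seconds = joules / 1000
years = seconds / (365.25 * 24 * 3600)
print(f"{ergs:.1e} ergs, about {years:.0f} years of a 1,000-watt bulb")
```

The result lands within a few years of the 2,850 quoted above.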
Einstein’s equation destroyed one of the sacred conservation laws of science. Lavoisier’s law of conservation of mass had stated that matter can be neither created nor destroyed. Actually, every energy-releasing chemical reaction changes a small amount of mass into energy: the products, if they could be weighed with utter precision, would not quite equal the original matter. But the mass lost in ordinary chemical reactions is so small that no technique available to the chemists of the nineteenth century could conceivably have detected it. Physicists, however, were now dealing with a completely different phenomenon, the nuclear reaction of radioactivity rather than the chemical reaction of burning coal. Nuclear reactions release so much energy that the loss of mass is large enough to be measured.
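The scale of the chemical effect is easy to estimate. Assuming that burning a kilogram of coal releases roughly 3×10⁷ joules (a figure not given in the text), the mass converted is:

```python
c = 3e8          # speed of light, meters per second
energy = 3e7     # assumed heat of combustion of 1 kg of coal, joules

# Rearranging e = m * c**2: the mass that vanishes is e / c**2.
delta_m = energy / c**2        # mass lost, kilograms

print(f"{delta_m:.1e} kg")     # a fraction of a microgram per kilogram burned
```

A few ten-billionths of a kilogram, which is indeed far below anything a nineteenth-century balance could detect.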
By postulating the interchange of mass and energy, Einstein merged the laws of conservation of energy and of mass into one law—the conservation of mass-energy. The first law of thermodynamics not only still stood: it was more unassailable than ever.
The conversion of mass to energy was confirmed experimentally by Aston through his mass spectrograph, which could measure the mass of atomic nuclei very precisely by the amount of their deflection by a magnetic field. What Aston did with an improved instrument in 1925 was to show that the masses of the various nuclei are not exact multiples of the masses of the neutrons and protons that compose them.
Let us consider the masses of these neutrons and protons for a moment. For a century, the masses of atoms and subatomic particles had generally been measured on the basis of allowing the atomic weight of oxygen to be exactly 16.00000 (see chapter 6). In 1929, however, Giauque showed that oxygen consists of three isotopes—oxygen 16, oxygen 17, and oxygen 18—and that the atomic weight of oxygen is the weighted average of the masses of these three isotopes.
To be sure, oxygen 16 is by far the most common of the three, making up 99.759 percent of all oxygen atoms. Thus, if oxygen has the over-all atomic weight of 16.00000, the oxygen-16 isotope must have a mass of slightly less than 16. (The masses of the small quantities of the heavier oxygen 17 and oxygen 18 bring the average up to 16.) Chemists, for a generation after the discovery, did not let this disturb them, but kept the old basis for what came to be called chemical atomic weights.
Physicists, however, reacted otherwise. They preferred to set the mass of the oxygen-16 isotope at exactly 16.0000 and determine all other masses on that basis. On this basis, the physical atomic weights could be set up. On the oxygen-16 equals 16 standard, the atomic weight of oxygen itself, with its traces of heavier isotopes, is 16.0044. In general the physical atomic weights of all elements would be 0.027 percent higher than their chemical atomic weight counterparts.
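The 0.027 percent figure follows directly from the two standards quoted above:

```python
chemical = 16.0000    # oxygen's atomic weight on the chemical scale
physical = 16.0044    # oxygen's atomic weight on the oxygen-16 = 16 scale

# Percentage by which the physical scale exceeds the chemical scale.
percent = (physical - chemical) / chemical * 100
print(f"{percent:.4f} percent")   # close to the quoted 0.027 percent
```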
In 1961, physicists and chemists reached a compromise and agreed to determine atomic weights on the basis of allowing the carbon-12 isotope to have a mass of exactly 12.0000, thus basing the atomic weights on a characteristic mass number and making them as fundamental as possible. In addition, this base made the atomic weights almost exactly what they were under the old system. Thus, on the carbon-12 equals 12 standard, the atomic weight of oxygen is 15.9994.
Well, then, let us start with a carbon-12 atom, with its mass equal to 12.000000. Its nucleus contains six protons and six neutrons. From mass-spectrographic measurements, it becomes evident that, on the carbon-12 equals 12 standard, the mass of a proton is 1.007825 and that of a neutron is 1.008665. Six protons, then, should have a mass of 6.046950; and six neutrons, 6.051990. Together, the twelve nucleons should have a mass of 12.098940. But the mass of the carbon-12 atom is 12.000000. What has happened to the missing 0.098940?
This disappearing mass is the mass defect. The mass defect divided by the mass number gives the mass defect per nucleon, or the packing fraction. The mass has not really disappeared but has been converted into energy, in accordance with Einstein’s equation, so that the mass defect is also the binding energy of the nucleus. To break a nucleus down into individual protons and neutrons would require the input of an amount of energy equal to the binding energy, since an amount of mass equivalent to that energy would have to be formed.
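Both quantities can be checked numerically. A minimal sketch, using the proton and neutron masses quoted above and assuming the standard conversion of roughly 931.5 MeV per mass unit (a factor not given in the text):

```python
proton, neutron = 1.007825, 1.008665   # masses on the carbon-12 = 12 scale

# Mass defect: twelve free nucleons minus the measured carbon-12 mass.
defect = 6 * proton + 6 * neutron - 12.0

MEV_PER_U = 931.5            # assumed: energy equivalent of one mass unit
binding = defect * MEV_PER_U # total binding energy of the nucleus
per_nucleon = binding / 12   # binding energy per nucleon

print(f"defect = {defect:.6f}, binding ~ {binding:.1f} MeV, "
      f"{per_nucleon:.2f} MeV per nucleon")
```

The per-nucleon value, between 7 and 8 MeV, is near the peak of the curve Aston mapped out, which is why carbon sits comfortably in the stable part of the periodic table.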
Aston determined the packing fraction of many nuclei, and he found it to increase rather quickly from hydrogen up to elements in the neighborhood of iron and then to decrease, rather slowly, for the rest of the periodic table. In other words, the binding energy per nucleon is highest in the middle of the periodic table. Thus, conversion of an element at either end of the table into one nearer the middle should release energy.
Take uranium 238 as an example. This nucleus breaks down by a series of decays to lead 206. In the process, it emits eight alpha particles. (It also gives off beta particles, but these are so light they can be ignored.) Now the mass of lead 206 is 205.9745 and that of eight alpha particles totals 32.0208. Altogether these products add up to a mass of 237.9953. But the mass of uranium 238, from which they came, is 238.0506. The difference, or loss of mass, is 0.0553. That loss of mass is just enough to account for the energy released when uranium breaks down.
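The bookkeeping for that decay chain can be verified with the masses quoted above (the alpha-particle mass of 4.0026 is implied by the stated total for eight of them):

```python
uranium_238 = 238.0506
lead_206 = 205.9745
alpha = 4.0026            # mass of one alpha particle (helium-4 nucleus)

# Uranium 238 decays to lead 206 plus eight alpha particles.
products = lead_206 + 8 * alpha
loss = uranium_238 - products

print(f"products = {products:.4f}, mass lost = {loss:.4f}")
```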
When uranium breaks down to still smaller atoms, as it does in fission, a great deal more energy is released. And when hydrogen is converted to helium, as it is in stars, there is an even larger fractional loss of mass and a correspondingly richer development of energy.
Physicists began to look upon the mass-energy equivalence as a very reliable bookkeeping. For instance, when the positron was discovered in 1932, its mutual annihilation with an electron was found to produce a pair of gamma rays whose energy was just equivalent to the mass of the two particles. Furthermore, as Blackett was first to point out, mass could be created out of appropriate amounts of energy. A gamma ray of the proper energy, under certain circumstances, would disappear and give rise to an electron-positron pair, created out of pure energy. Larger amounts of energy, supplied by cosmic particles or by particles fired out of proton synchrotrons (see chapter 7), would bring about the creation of more massive particles, such as mesons and antiprotons.
It is no wonder that when the bookkeeping did not balance, as in the emission of beta particles of less than the expected energy, physicists invented the neutrino to balance the energy account rather than tamper with Einstein’s equation (see chapter 7).
If any further proof of the conversion of mass to energy was needed, nuclear bombs provided the final clincher.
Particles and Waves
In the 1920s, dualism reigned supreme in physics. Planck had shown radiation to be particlelike as well as wavelike. Einstein had shown that mass and energy are two sides of the same coin; and that space and time are inseparable. Physicists began to look for other dualisms.
In 1923, the French physicist Louis Victor de Broglie was able to show that, just as radiation has the characteristics of particles, so the particles of matter, such as electrons, should display the characteristics of waves. The waves associated with these particles, he predicted, would have a wavelength inversely related to the mass times the velocity (that is, the momentum) of the particle. The wavelength associated with electrons of moderate speed, de Broglie calculated, ought to be in the X-ray region.
In 1927, even this surprising prediction was borne out. Clinton Joseph Davisson and Lester Halbert Germer of the Bell Telephone Laboratories were bombarding metallic nickel with electrons. As the result of a laboratory accident, which had made it necessary to heat the nickel for a long time, the metal was in the form of large crystals, which were ideal for diffraction purposes because the spacing between atoms in a crystal is comparable to the very short wavelengths of electrons. Sure enough, the electrons passing through those crystals behaved not as particles but as waves. The film behind the nickel showed interference patterns, alternate bands of fogging and clarity, just as it would have shown if X rays rather than electrons had gone through the nickel.
Interference patterns were the very thing that Young had used more than a century earlier to prove the wave nature of light. Now they proved the wave nature of electrons. From the measurements of the interference bands, the wavelength associated with the electron could be calculated, and it turned out to be 1.65 angstrom units, almost exactly what de Broglie had calculated it ought to be.
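De Broglie's relation, wavelength = h/(mv), can be worked through for this case. A sketch assuming the 54-volt accelerating potential of the Davisson–Germer experiment and standard values for the constants (details not given in the text):

```python
import math

h = 6.626e-34      # Planck's constant, joule-seconds
m = 9.109e-31      # electron rest mass, kilograms
e = 1.602e-19      # electron charge, coulombs
V = 54             # assumed accelerating potential, volts

# For a nonrelativistic electron accelerated through V volts,
# momentum = sqrt(2*m*e*V), so wavelength = h / sqrt(2*m*e*V).
wavelength = h / math.sqrt(2 * m * e * V)
angstroms = wavelength * 1e10

print(f"{angstroms:.2f} angstroms")   # close to the measured 1.65
```

The computed value differs from the measured 1.65 angstroms by about a percent, which is the agreement that made the case for de Broglie.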
In the same year, the British physicist George Paget Thomson, working independently and using different methods, also showed that electrons have wave properties.
De Broglie received the Nobel Prize in physics in 1929, and Davisson and Thomson shared the Nobel Prize in physics in 1937.
ELECTRON MICROSCOPY
This entirely unlooked-for discovery of a new kind of dualism was put to use almost at once in microscopy. Ordinary optical microscopes, as I have mentioned, cease to be useful at a certain point because there is a limit to the size of objects that light-waves can define sharply. As objects get smaller, they also get fuzzier, because the light-waves begin to pass around them—something first pointed out by the German physicist Ernst Karl Abbe in 1878. The cure, of course, is to try to find shorter wavelengths with which to resolve the smaller objects. Ordinary-light microscopes can distinguish two dots 1/5,000 millimeter apart, but ultraviolet microscopes can distinguish dots 1/10,000 millimeter apart. X rays would be better still, but there are no lenses for X rays. This problem can be solved, however, by using the waves associated with electrons, which have about the same wavelength as X rays but are easier to manipulate. (For one thing, a magnetic field can bend the electron rays, because the waves are associated with a charged particle.)
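Abbe's limit can be sketched with a short calculation: the smallest resolvable separation is roughly the wavelength divided by twice the numerical aperture of the lens. Assuming green light of 550 nanometers and a numerical aperture of 1.4 (illustrative values, not from the text):

```python
wavelength_nm = 550   # assumed: green light, in nanometers
NA = 1.4              # assumed numerical aperture of a good immersion lens

# Abbe diffraction limit: d ~ wavelength / (2 * NA).
d_nm = wavelength_nm / (2 * NA)
d_mm = d_nm * 1e-6    # convert nanometers to millimeters

print(f"~{d_nm:.0f} nm, about 1/{1/d_mm:.0f} of a millimeter")
```

The answer comes out near 1/5,000 millimeter, matching the figure quoted for ordinary-light microscopes; shrink the wavelength and the limit shrinks with it, which is the whole case for electron waves.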
Just as the eye can see an expanded image of an object if the light-rays involved are appropriately manipulated by lenses, so a photograph can register an expanded image of an object if electron waves are appropriately manipulated by magnetic fields. And, since the wavelengths associated with electrons are far smaller than those of ordinary light, the resolution obtainable with an electron microscope at high magnification is much greater than that available to an ordinary microscope (figure 8.5).
Figure 8.5. Diagram of electron microscope. The magnetic condenser directs the electrons in a parallel beam. The magnetic objective functions like a convex lens, producing an enlarged image, which is then further magnified by a magnetic projector. The image is projected on a fluorescent observation screen or a photographic plate.
A crude electron microscope capable of magnifying 400 times was made in Germany in 1932 by Ernst Ruska and Max Knoll, but the first really usable one was built in 1937 at the University of Toronto by James Hillier and Albert F. Prebus. Their instrument could magnify an object 7,000 times, whereas the best optical microscopes reach their limit with a magnification of about 2,000. By 1939, electron microscopes were commercially available; and eventually Hillier and others developed electron microscopes capable of magnifying up to 2,000,000 times.
Whereas an ordinary electron microscope focuses electrons on the target and has them pass through, another kind has a beam of electrons pass rapidly over the target, scanning it in much the way an electron beam scans the picture tube in a television set. Such a scanning electron microscope was suggested as early as 1938 by Knoll, but the first practical device of this sort was built by the British-American physicist Albert Victor Crewe about 1970. The scanning electron microscope is less damaging to the object being viewed, shows the object with a greater three-dimensional effect so that more information is obtained, and can even show the position of individual atoms of the larger varieties.
ELECTRONS AS WAVES
It ought not to be surprising that particle-wave dualism should work in reverse, so that phenomena ordinarily considered wavelike in nature should have particle characteristics as well. Planck and Einstein had already shown radiation to consist of quanta, which, in a fashion, are particles. In 1923, Compton, the physicist who was to demonstrate the particle nature of cosmic rays (see chapter 7), showed that such quanta possessed some down-to-earth particle qualities. He found that X rays, on being scattered by matter, lose energy and become longer in wavelength. This effect was just what might be expected of a radiation “particle” bouncing off a matter particle: the matter particle is pushed forward, gaining energy; and the X ray veers off, losing energy. This Compton effect helped establish the wave-particle dualism.
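The lengthening Compton measured follows a simple formula: the shift is (h / m·c)(1 − cos θ), where θ is the scattering angle. A sketch using standard constants (the formula and values are textbook physics, not quoted in the text):

```python
import math

h = 6.626e-34     # Planck's constant, joule-seconds
m_e = 9.109e-31   # electron rest mass, kilograms
c = 2.998e8       # speed of light, meters per second

def compton_shift(theta_degrees):
    """Increase in X-ray wavelength after scattering off an electron,
    in meters: (h / (m_e * c)) * (1 - cos(theta))."""
    theta = math.radians(theta_degrees)
    return (h / (m_e * c)) * (1 - math.cos(theta))

# At a 90-degree scattering angle the shift equals h / (m_e * c),
# the so-called Compton wavelength of the electron.
shift = compton_shift(90) * 1e10   # convert meters to angstroms
print(f"{shift:.4f} angstroms")
```

The shift is only about 0.024 angstrom, which is why the effect shows up for X rays but is utterly negligible for visible light, whose wavelengths are thousands of angstroms long.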