The Universe in Zero Words


by Dana Mackenzie


  In 1798, Napoleon Bonaparte launched a military campaign in Egypt, with the idea of making it into a French colony. In addition to his invasion force of 40,000 soldiers, Napoleon brought 167 scientists to study Egypt and catalog Egyptian culture.† Among the “savants” who came along was Fourier. This was apparently the first time Fourier met Napoleon, and the association would change his life. The military campaign was a disaster—the British navy destroyed the French fleet after Napoleon reached Egypt, stranding his massive army—but Fourier evidently made a good impression on the future emperor. After Fourier returned to France in 1801, Napoleon appointed him prefect of Isère, a province on the Italian border. Fourier was not entirely happy about this, as he would have preferred to stay in Paris at the École Polytechnique, but he proved to be a capable public servant. Napoleon’s defeat at Waterloo in 1815 ended Fourier’s political career, but it actually helped his scientific career. He moved back to Paris, where he became the secretary of the Academy of Sciences in 1822 and died in 1830.

  Above The heat equation has numerous practical uses, including weather forecasting.

  Like Abel and Galois after him, Fourier struggled to obtain recognition for his most important work, but for a different set of reasons. Beginning in 1802, he conducted experiments on the diffusion of heat in solid materials. He started with very simple cases—first a solid bar, then a ring—which could be treated as one-dimensional problems. At the same time, he developed a two-part mathematical theory of these objects, first setting up an equation (known as the heat equation) that expresses the conduction of heat inside the bar, and then solving it by a method that became known as Fourier series.

  THE HEAT EQUATION is an excellent example of what mathematicians in the nineteenth century did: it indicates exactly how the current temperature distribution affects the future temperature. Roughly, it says that heat will flow toward points that are cooler than the average temperature of their neighbors, and away from points that are warmer. Because this is a statement about rates of change, of course it is expressed in the language of calculus. Furthermore, it relates two different kinds of rates of change. The rate of change in temperature over time, written in the formula as ∂u/∂t, is determined by the spatial variation in temperature, represented by ∂²u/∂x², which reflects the difference between the temperature at the point x and the average of the temperatures at two equally spaced points to its left and right. The complete heat equation reads as follows:
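
  \[ \frac{\partial u}{\partial t} \;=\; C\,\frac{\partial^{2} u}{\partial x^{2}} \]

  (The equation is reproduced here in modern notation; C stands for a positive constant that measures how readily the material conducts heat.)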

  Such an equation is called a partial differential equation: “partial” because each term expresses part of the way that temperature varies (either in space or in time); “differential” because it involves derivatives. Partial differential equations would turn out to be crucial for modeling all sorts of physical processes, from heat conduction to fluid flow to the propagation of electric and magnetic fields. Every time you read a weather forecast, you are seeing the solution of several partial differential equations that describe the motion of heat and air and water in the atmosphere.

  Fourier’s work also illustrates the fact that mathematics, when applied to real-world problems, is a two-step procedure. First comes the modeling of the problem—translating your assumptions, or your empirical observations, into mathematical language. Fourier’s modeling of heat flow is beautiful, convincing, and far-reaching. The three-dimensional heat equation applies to everything from the inside of your coffee cup to the inside of a star to global climate change.

  The next step after modeling is to solve the equations of the model. It would seem that this would be the most routine part of the work—a solution is a solution, it is either correct or not—and yet this was exactly where Fourier ran into controversy.

  Fourier used a time-honored method of solving the equation: he guessed. In particular, because the temperature u in the bar is a function of both space (x) and time (t), he guessed that it was simply a product of two functions, one of them purely a function of time and the other one purely a function of space. It worked; the solution was a product of a sine wave (in space) and a decaying exponential function (in time). If your metal bar starts with a temperature distribution whose graph is a sine wave, its temperature will gradually cool down to zero (or whatever the ambient temperature is) at a rate that is inversely proportional to the square of the wavelength of the sine wave: the more tightly packed the waves, the faster they fade.
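
  In modern symbols (a paraphrase, not Fourier’s own notation), the product solution for a sine wave with spatial frequency k is

  \[ u(x,t) \;=\; e^{-Ck^{2}t}\,\sin kx , \]

  and substituting it into the heat equation shows that both sides come out to −Ck²·u, so the guess really does solve the equation; the decay rate Ck² grows with the square of the frequency k, which is why the tightly packed waves fade fastest.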

  But what if the initial temperature distribution of your metal bar isn’t a sine wave? For example, in his experiments Fourier put one end of the bar into a furnace, creating a temperature distribution with half of the bar hot and half cold. In physics lingo, this would be called a “square wave,” not a sine wave. But Fourier asserted that any temperature distribution could be written as a sum (not just a finite sum—an infinite sum, nowadays called a Fourier series) of sine waves.
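
  For the hot-and-cold bar, for instance, the series is a famous one. In modern notation, with the two temperatures scaled to +1 and −1, it reads

  \[ f(x) \;=\; \frac{4}{\pi}\left( \sin x + \frac{\sin 3x}{3} + \frac{\sin 5x}{5} + \cdots \right) , \]

  an infinite chorus of ever-faster sine waves with ever-smaller amplitudes.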

  Above Fourier’s solution for the heat equation involves sine waves whose amplitudes decrease over time, as shown here.

  Nowadays, with computers, we can draw beautiful pictures to illustrate Fourier’s idea of approximating arbitrary functions with trigonometric series. In particular, it is easy to see how a square wave emerges out of a chorus of wobbly approximations. But this precise point stuck in the throats of his colleagues, particularly his former teacher Joseph Louis Lagrange. It implied a sea change in mathematicians’ conception of what a function was.
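
  Here, for instance, is a minimal sketch (an illustration for modern readers, not anything Fourier wrote) of how such a picture can be drawn with Python’s numpy and matplotlib libraries: it plots partial sums of the series above against the square wave itself.

import numpy as np
import matplotlib.pyplot as plt

# The "square wave": +1 on half of each period, -1 on the other half.
x = np.linspace(-np.pi, np.pi, 2000)
square = np.sign(np.sin(x))

# Partial sums of the Fourier series (4/pi)(sin x + sin 3x/3 + sin 5x/5 + ...).
for terms in (1, 3, 10, 50):
    n = np.arange(1, 2 * terms, 2)                      # odd frequencies 1, 3, 5, ...
    approx = (4 / np.pi) * (np.sin(np.outer(n, x)) / n[:, None]).sum(axis=0)
    plt.plot(x, approx, label=f"{terms} sine wave(s)")

plt.plot(x, square, "k--", label="square wave")
plt.legend()
plt.show()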

  EVER SINCE EULER, functions had been seen as formulas: finite combinations of known functions such as polynomials, exponentials, trigonometric functions, and so forth. Or, following Newton, they had been expressed as power series, which are basically “infinite polynomials.” But Fourier series were much more versatile. They could represent functions with jumps and corners, which could not be expressed with simple arithmetic formulas. Fourier’s paper marked the beginning of a broader conception of a function, the input-output model that we use today. A function is simply a rule that assigns to any input value a unique output. The input and output values don’t even have to be real numbers, and the rule certainly does not have to be expressible as a formula. In Part Two, I said perhaps a bit cavalierly that classical mathematicians were not interested in quantities like heartbeats and stock prices. It would be more accurate to say that it wouldn’t even have occurred to them to think of such things as mathematical functions. Fourier’s insight opens the door to a vast range of physical and empirical processes, especially ones with jumps and discontinuities.

  Above A graph displaying the “square wave” and its approximation by finite sums of sine waves. The approximations can be made as close as desired to the original square wave.

  Lagrange’s objections did have some merit, though. Fourier said that you could break any function f down into a sum of sine waves, each with a different frequency n. There is a second function, written f̂ and read “f-hat,” that tells you how “strong” each frequency is. Fourier’s key point is that you can reconstruct the original f from “f-hat”: according to Fourier’s inversion formula, “f-hat-hat” is equal to f again. Fourier did not provide an adequate proof of this. In fact, it is not even true for functions with discontinuities. What kinds of functions do obey the Fourier inversion formula? The answer is not easy, and the problem provided a major stimulus in the nineteenth and twentieth centuries to the theory of functions, or “functional analysis.”
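
  In one standard modern form—for a function f defined on the interval from −π to π—the “hat” and the reconstruction read

  \[ \hat{f}(n) \;=\; \frac{1}{2\pi}\int_{-\pi}^{\pi} f(x)\,e^{-inx}\,dx , \qquad f(x) \;=\; \sum_{n=-\infty}^{\infty} \hat{f}(n)\,e^{inx} , \]

  and it is the second formula, the reconstruction, that can fail at the jumps when f has discontinuities.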

  Above Joseph Fourier (1768–1830).

  THE IMPORTANCE OF the Fourier series (and the “hat” concept, which is technically called a Fourier transform) goes far beyond the heat equation. Fourier transforms allow any time-varying signal to be decomposed into a spectrum of frequencies. Astronomers use this principle to determine what molecules are in distant stars. Radios use this principle to pick out a particular channel—it’s a matter of finding a particular frequency in a time-varying signal. Music synthesizers use Fourier series to simulate the sound of a violin or a flute, or to create a new sound that has never been heard before. In other words, they are tweaking “f-hat” in order to produce, hopefully, a better-sounding “f.” Fourier series and transforms are all around us; we just don’t know it.
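
  To see the “hat” in action on a signal, here is another minimal sketch (again a modern illustration, not from Fourier) that uses numpy’s fast Fourier transform to recover the two frequencies hidden in a simple test signal—essentially what a radio tuner does when it picks out a station.

import numpy as np

rate = 1000                                   # samples per second
t = np.arange(0, 1, 1 / rate)                 # one second of "signal"
signal = np.sin(2 * np.pi * 50 * t) + 0.3 * np.sin(2 * np.pi * 120 * t)

f_hat = np.fft.rfft(signal)                   # how strong each frequency is
freqs = np.fft.rfftfreq(len(signal), 1 / rate)

strongest = np.sort(freqs[np.argsort(np.abs(f_hat))[-2:]])
print(strongest)                              # -> [ 50. 120.]
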
  As for Fourier, he had to wait a long time to see his paper published. He presented it to the Institute of France in 1807, and it was rejected because of Lagrange’s objections. (Also, another academician named Jean-Baptiste Biot complained that Fourier should have given him more credit.) Fourier submitted a reworked version for a prize in 1811 and it won, but Lagrange still deemed it unsuitable for publication. Finally, in 1822, with Lagrange dead and Fourier now installed as secretary of the Academy, his treatise The Analytic Theory of Heat appeared, and it became one of the most widely read mathematics books of the nineteenth century.

  * * *

  †. In spite of his other failings, Napoleon was an admirer and supporter of science. He even has a minor theorem in Euclidean geometry, Napoleon’s theorem, named after him. It is unclear whether Napoleon actually proved it.

  18

  A God’s-Eye View of Light: Maxwell’s Equations

  While mathematics was experiencing revolutions in algebra, geometry, and the theory of functions, physics was undergoing its own revolution.

  At the beginning of the nineteenth century, the theories of mechanics and gravity were in pretty good shape. Newton had explained how planets orbit around the Sun. Euler, Laplace, and others had explained multiple-body interactions in the solar system, such as the precession of the equinoxes and the slow variations in Jupiter and Saturn’s orbits. Newton’s laws had explained how solid objects respond to mechanical forces, and Euler’s equations of hydrodynamics had done the same thing for fluids.

  However, three subjects in physics remained entirely mysterious to the scientific community: electricity, magnetism, and the nature of light. As of 1800, there was not the slightest bit of evidence that any of these three phenomena were related to the others. Yet by 1865, that had all changed and physicists had arrived at a theory that unifies all three subjects. Magnetic fields are produced by electric currents. Electric fields are generated by changing magnetic fields. And light is nothing more than a traveling electromagnetic wave—an intricately woven tapestry of vibrating magnetic fields and electric fields that cross one another like the warp and the weft of a piece of fabric.

  E and B represent the electric and magnetic fields in a vacuum, with no electric charges or currents present. The constant c is the speed of light. The symbol “∇ ·” (the divergence) represents the tendency for field lines to move apart. The symbol “∇ ×” (the curl) represents the tendency of the field lines to rotate. Collectively, the equations say that in the absence of electric charges, neither the electric field nor the magnetic field has any sources or sinks.
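
  In modern vector notation, the four vacuum equations described above are usually written

  \[ \nabla \cdot \mathbf{E} = 0, \qquad \nabla \cdot \mathbf{B} = 0, \qquad \nabla \times \mathbf{E} = -\frac{\partial \mathbf{B}}{\partial t}, \qquad \nabla \times \mathbf{B} = \frac{1}{c^{2}}\,\frac{\partial \mathbf{E}}{\partial t} . \]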

  In order to reach these conclusions, physicists first had to assimilate a number of startling experimental discoveries. Then they had to develop a new kind of physics, in which solid, tangible objects (like wheels, bars, pulleys, and levers—the stuff of mechanics) were replaced by intangible concepts such as electric and magnetic fields. Because common sense and everyday experience no longer apply to these intangible but real phenomena, physicists were forced to embrace mathematics in a deeper way than they ever had before. It was the only guide that worked when intuition and our senses failed.

  THE NATURE OF LIGHT had been debated as early as the 1600s, when Isaac Newton argued that it consisted of tiny corpuscles, while Robert Hooke insisted that it was made of waves. Newton’s enormous prestige pushed the wave theory into the background for a hundred years or so. But in the early 1800s, several experimental discoveries revived the debate. In 1801, Thomas Young discovered the interference of light waves. When a beam of light passes through two narrow, parallel slits, what we see on the other side is not two narrow bright bands, but a series of alternating dark and light bands with the brightest one right in the middle. This is easy to explain if you think of light as being like ripples of water in a tank, but not if you think about it as tiny particles of grapeshot.
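
  (In the modern textbook description, a bright band appears wherever the extra distance traveled from one slit is a whole number of wavelengths: d sin θ = mλ, where d is the spacing between the slits, λ is the wavelength, and m = 0, ±1, ±2, … counts the bands outward from the central one.)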

  Left Young’s Double Slit experiment.

  Also, as early as 1665, Francesco Grimaldi had observed an effect he called diffraction—the apparent bending of light around a corner. Again, this was hard to square with Newton’s laws. (Remember that particles in motion with no force acting on them are supposed to go in a straight line.) Refraction, the bending of light as it passes through a prism, was also easier to explain with the wave theory than the particle theory. In 1818, Augustin Fresnel successfully accounted for all three of these phenomena—interference, diffraction, and refraction—with a theory in which light consists of transverse waves.‡

  By the 1820s in France, and the 1830s in England (which was slower to shake off its hero-worship of Newton), the wave theory had gained the upper hand. But if light was a wave, what was the wave made of? It could not be a wave of air or any other fluid, because transverse waves don’t travel through fluids; they require a medium with elasticity, or the ability to “snap back” after being stretched. The great majority of physicists assumed that light traveled through some sort of “luminiferous aether,” but all efforts to detect this aether directly were in vain.

  Meanwhile, the mysteries of electricity and magnetism were also deepening. In 1799, Count Alessandro Volta of Italy had invented the battery, which for the first time made it possible for physicists to experiment with steady electric currents. In 1820, Hans Christian Ørsted noticed, while preparing for a lecture, that when he turned on an electric current in a wire, it deflected a nearby compass needle. This was the first indication that electricity and magnetism were related. This clue was followed in 1831 by Michael Faraday’s discovery of electromagnetic induction. Faraday showed that a changing electric current in one coil would induce a temporary electric current in another one. Likewise, moving a magnet close to a coil would temporarily induce a current. This was, then, a reciprocal effect to the one Ørsted had noticed. Magnetism could induce electricity, but only if the strength of the magnetism was changing.

  Above A coherent beam from a red Helium-Neon laser (632.8 nm) is used to illuminate two closely-spaced 25-micron-wide slits (double-slits).

  THE MAN WHO WOVE all of these confusing clues into a beautiful theory was James Clerk Maxwell, a Scottish physicist. For those who think that great discoveries are always made in a flash of inspiration—like William Rowan Hamilton’s discovery of quaternions—Maxwell provides compelling evidence to the contrary. He worked on electromagnetism for several years, gradually painting the beautiful canvas we now know as Maxwell’s equations.

  Maxwell’s first step, in 1855, was to take seriously Faraday’s description of the “lines of force” created by a magnet—lines that are easily seen if you sprinkle iron filings nearby. Faraday believed that the space around the magnet was filled with these “lines of force” even when no iron filings were present. Maxwell gave this invisible collection of curves a name—the magnetic field. He also postulated an electric field that conveys electric forces.

  In the twenty-first century, we are completely accustomed to the idea that we live surrounded by electric and magnetic fields. So it may take a conscious effort to imagine how radical the idea was in the 1850s. What is an electric field? You can’t see it. You can’t touch it. How can you tell that it’s there?

  Opposite Iron shavings are used to reveal magnetic field lines produced by two bar magnets.

  An additional roadblock to Maxwell’s theory of fields was, again, the legacy of Newton. In Newton’s theory of gravity, planets attract each other from a distance, with a force proportional to the inverse square of their distance. For a while, electricity and magnetism seemed to work in exactly the same way. Physicists subscribed to the idea of “action-at-a-distance” as an article of faith. But Faraday and Maxwell questioned this conviction. They said the force between two charges, or two magnets, results from the field between them. In Newton’s universe, empty space is empty. But in Maxwell’s universe, it is humming with electric and magnetic potential.

  Six years after his first paper, Maxwell added another stroke to his scientific painting. He envisioned electricity as an elastic force in the medium that electric and magnetic fields inhabit. It’s interesting to note that he had not yet abandoned the mechanical way of thinking in favor of the greater flexibility of mathematics. His second paper relies on an extremely complicated model, replete with spinning vortices to represent the magnetic fields and counterrotating “idle wheels” to represent the electric fields. All of this baroque machinery would be discarded in his third paper.

  Elastic forces, as noted above, are exactly what is necessary to transmit transverse waves. Not only that, there is a simple formula for the speed of waves in any elastic medium. Reasoning by analogy, Maxwell was led to a formula for the speed of an electromagnetic wave. At the time, he was spending the summer at his estate in Scotland, and he could not look up the necessary physical constants to plug into the equation. But when he got back to his office at King’s College in London in the fall of 1861, he computed the speed as 310,740,000 meters per second. By comparison, in 1849, a French physicist named Armand Fizeau had measured the speed of light at 314,850,000 meters per second! (The currently accepted value is 299,792,458 meters per second. In fact, since 1983 the meter has been defined as the distance light travels in 1/299,792,458 of a second, so the speed of light is now prescribed by definition and is no longer an experimental constant.) It could not be an accident, thought Maxwell, that the two constants were so close. In his paper announcing the result, he wrote in italics: “We can scarcely avoid the inference that light consists in the transverse undulations of the same medium which is the cause of electrical and magnetic phenomena.”
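
  In modern notation, the speed that falls out of the theory is

  \[ c \;=\; \frac{1}{\sqrt{\varepsilon_{0}\,\mu_{0}}} , \]

  where ε₀ and μ₀ are the measured electric and magnetic constants of empty space; plugging in the best values available to him is, in essence, the calculation Maxwell carried out in 1861.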

  BUT MAXWELL was not done. Having used a mechanical analogy to discover that electromagnetic waves and light waves are the same thing, he realized that he could forget about the vortices and the counterrotating gears, and derive the result solely from mathematics. What was left, by the time he wrote his third paper in 1865, was a simple set of four partial differential equations that relates the electric field (E) to the magnetic field (B) at any point in a vacuum.

 
