
Zero


by Charles Seife


  Adding infinite things to each other can yield bizarre and contradictory results. Sometimes, when the terms go to zero, the sum is finite, a nice, normal number like 2 or 53. Other times the sum goes off to infinity. And an infinite sum of zeros can equal anything at all—and everything at the same time. Something very bizarre was going on; nobody knew quite how to handle the infinite.

  Luckily the physical world made a little more sense than the mathematical one. Adding infinite things to each other seems to work out most of the time, so long as you are dealing with something in real life, like finding the volume of a barrel of wine. And 1612 was a banner year for wine.

  Johannes Kepler—the man who figured out that planets move in ellipses—spent that year gazing into wine barrels, since he realized that the methods that vintners and coopers used to estimate the size of barrels were extremely crude. To help the wine merchants out, Kepler chopped up the barrels—in his mind—into an infinite number of infinitely tiny pieces, and then added them back together again to yield their volumes. This may seem a backward way of going about measuring barrels, but it was a brilliant idea.

  To make the problem a bit simpler, let us consider a two-dimensional object rather than a three-dimensional one—a triangle. The triangle in Figure 23 has a height of 8 and a base of 8; since the area of a triangle is half the base times the height, the area is 32.

  Now imagine trying to estimate the size of the triangle by inscribing little rectangles inside the triangle. For a first try, we get an area of 16, quite short of the actual value of 32. The second try is a bit better; with three rectangles, we get a value of 24. Closer, but still not there yet. The third try gives us 28—closer still. As you can see, making smaller and smaller rectangles—whose widths, denoted by the symbol Δx, go to zero—makes the value closer and closer to 32, the true value for the area of the triangle. (The sum of these rectangles is equal to Σf(x)Δx where the Greek Σ represents the sum over an appropriate range and f(x) is the equation of the curve that the rectangles strike. In modern notation, as Δx goes to zero, we replace the Σ with a new symbol, ∫, and Δx with dx, turning the equation into ∫f(x) dx, which is the integral.)
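The shrinking-rectangle sum is easy to try for yourself. Below is a minimal Python sketch (my own illustration, not from the book) using left-edge inscribed rectangles under the line f(x) = x, the hypotenuse of a triangle with base 8 and height 8; the exact slicing in Figure 23 may differ, but the sums close in on 32 the same way.

```python
def f(x):
    return x  # hypotenuse of a triangle with base 8 and height 8

def riemann_sum(f, a, b, n):
    """Sum of n inscribed rectangles of width dx = (b - a)/n,
    each with its height taken at the rectangle's left edge."""
    dx = (b - a) / n
    return sum(f(a + i * dx) * dx for i in range(n))

for n in (2, 4, 8, 1000, 1_000_000):
    print(n, riemann_sum(f, 0, 8, n))  # 16.0, 24.0, 28.0, then values closing in on 32
```

As the widths Δx shrink toward zero, the count of rectangles runs off toward infinity, yet the total settles on the finite answer.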

  Figure 23: Estimating the area of a triangle

  In one of Kepler’s lesser-known works, Volume-Measurement of Barrels, he does this in three dimensions, slicing barrels into planes and summing the planes together. Kepler, at least, wasn’t afraid of a glaring problem: as ?x goes to zero, the sum becomes equivalent to adding an infinite number of zeros together—a result that makes no sense. Kepler ignored the problem; though adding infinite zeros together was gibberish from a logical point of view, the answer it yielded was the right one.

  Kepler was not the only prominent scientist who sliced objects infinitely thin. Galileo, too, pondered infinity and these infinitely small slices of area. These two ideas transcend our finite understanding, he wrote, “the former on account of their magnitude, the latter because of their smallness.” Yet despite the deep mystery of the infinite zeros, Galileo sensed their power. “Imagine what they are when combined,” he wondered. Galileo’s student Bonaventura Cavalieri would provide part of the answer.

  Instead of barrels, Cavalieri cut up geometric objects. To Cavalieri, every area, like that of the triangle, is made up of an infinite number of zero-width lines, and a volume is made up of an infinite number of zero-height planes. These indivisible lines and planes are like atoms of area and volume; they can’t be divided any further. Just as Kepler measured the volumes of barrels with his thin slices, Cavalieri added up an infinite number of these indivisible zeros to figure out the area or the volume of a geometric object.

  For geometers, Cavalieri’s statement was troublesome indeed; adding infinite zero-area lines could not yield a two-dimensional triangle, nor could infinite zero-volume planes add up to a three-dimensional structure. It was the same problem: infinite zeros make no logical sense. However, Cavalieri’s method always gave the right answer. Mathematicians ignored the logical and philosophical troubles with adding infinite zeros—especially since indivisibles or infinitesimals, as they came to be called, finally solved a long-standing puzzle: the problem of the tangent.

  A tangent is a line that just kisses a curve. For any point along a smooth curve that flows through space, there is a line that just grazes the curve, touching at exactly one point. This is the tangent, and mathematicians realized that it is extremely important in studying motion. For instance, imagine swinging a ball on a string around your head. It’s traveling in a circle. However, if you suddenly cut the string, the ball will fly off along that tangent line; in the same way, a baseball pitcher’s arm travels in an arc as he throws, but as soon as he lets go, the ball flies off on the tangent (Figure 24). As another example, if you want to find out where a ball will come to rest at the bottom of a hill, you look for a point where the tangent line is horizontal. The steepness of the tangent line—its slope—has some important properties in physics: for instance, if you’ve got a curve that represents the position of, say, a bicycle, then the slope of the tangent line to that curve at any given point tells you how fast that bicycle is going when it reaches that spot.

  Figure 24: Flying off at a tangent

  For this reason, several seventeenth-century mathematicians—like Evangelista Torricelli, René Descartes, the Frenchman Pierre de Fermat (famous for his last theorem), and the Englishman Isaac Barrow—created different methods for calculating the tangent line to any given point on a curve. However, like Cavalieri, all of them came up against the infinitesimal.

  To draw a tangent line at any given point, it’s best to make a guess. Choose another point nearby and connect the two. The line you get isn’t exactly the tangent line, but if the curve isn’t too bumpy, the two lines will be pretty close. As you reduce the distance between the points, the guess gets closer to the tangent line (Figure 25). When your points are zero distance away from each other, your approximation is perfect: you have found the tangent. Of course, there’s a problem.

  Figure 25: Approximating the tangent

  The most important property of a line is its slope, and to measure this, mathematicians look at how high a line rises in a certain amount of distance. As an example, imagine you are driving east on a hill; for every mile east you drive, you gain half a mile in altitude. The slope of the hill is simply the height—half a mile—over the horizontal distance you have driven—one mile. Mathematicians say that the slope of the hill is ½. The same thing is true for lines; to measure the slope of a line, you look at how much the line rises (which mathematicians denote by the symbol Δy) in a given horizontal distance (which is denoted by Δx). The slope of the line is Δy/Δx.

  When you try to calculate the slope of a tangent line, zero wrecks your approximation process. As your approximations of the tangent lines get better and better, the points on the curve you use to create the approximations get closer together. This means that the difference in height, Δy, goes to zero, as does the horizontal distance between the points, Δx. As your tangent approximations get better and better, Δy/Δx approaches 0/0. Zero divided by zero can equal any number in the universe. Does the slope of the tangent line have any meaning?
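This squeeze toward 0/0 can be watched numerically. The Python sketch below (an illustration of the idea, with function names of my own choosing) slides a second point toward x = 3 on the curve y = x² + x + 1; the secant slopes close in on 7, the value that 2x + 1 gives at that point.

```python
def f(x):
    return x**2 + x + 1

def secant_slope(f, x, dx):
    """Slope of the line through (x, f(x)) and (x + dx, f(x + dx));
    both the rise dy and the run dx shrink toward zero as the points merge."""
    dy = f(x + dx) - f(x)
    return dy / dx

# At x = 3 the true tangent slope is 2*3 + 1 = 7.
for dx in (1.0, 0.1, 0.001, 1e-8):
    print(dx, secant_slope(f, 3, dx))  # slopes approach 7; at dx = 0 we'd hit 0/0
```

Setting dx to exactly zero raises a division error, which is the whole difficulty: the perfect tangent lives precisely where the arithmetic breaks down.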

  Every time mathematicians tried to deal with the infinite or with zero, they encountered trouble with illogic. To figure out the volume of a barrel or the area under a parabola, mathematicians added infinite zeros together; to find out the tangent of a curve, they divided zero by itself. Zero and infinity made the simple acts of taking tangents and finding areas appear to be self-contradictory. These troubles would have ended as an interesting footnote but for one thing: these infinities and zeros are the key to understanding nature.

  Zero and the Mystical Calculus

  If we lift the veil and look underneath…we shall discover much emptiness, darkness, and confusion; nay, if I mistake not, direct impossibilities and contradictions…. They are neither finite quantities, nor quantities infinitely small, nor yet nothing. May we not call them the ghosts of departed quantities?

  —BISHOP BERKELEY, THE ANALYST

  The tangent problem and the area problem both ran afoul of the same difficulties with infinities and zeros. It’s no wonder, because the tangent problem and the area problem are actually the same thing. They are both aspects of calculus, a scientific tool far more powerful than anything ever seen before. The telescope, for instance, had given scientists the ability to find moons and stars that had never been observed before. Calculus, on the other hand, gave scientists a way to express the laws that govern the motion of the celestial bodies—and laws that would eventually tell scientists how those moons and stars had formed. Calculus was the very language of nature, yet its very fabric was infused with zeros and infinities that threatened to destroy the new tool.

  The first discoverer of calculus nearly died before he ever took a breath. Born prematurely on Christmas Day in 1642, Isaac Newton squirmed into the world, so small that he was able to fit into a quart pot. His father, a farmer, had died two months earlier.

  Despite a traumatic childhood* and a mother who wanted him to become a farmer, Newton enrolled in Cambridge in the 1660s—and flourished. Within a few years he developed a systematic method of solving the tangent problem; he could figure out the tangent to any smooth curve at any point. This process, the first half of calculus, is now known as differentiation; however, Newton’s method of differentiation doesn’t look very much like the one we use today.

  Newton’s style of differentiation was based upon fluxions—the flows—of mathematical expressions that he called fluents. As an example of Newton’s fluxions, take the equation

  y = x² + x + 1

  In this equation, the fluents are y and x; Newton supposed that y and x are changing, or flowing, as time progresses. Their rates of change—their fluxions—are denoted by ẏ and ẋ respectively.

  Newton’s method of differentiation was based on a notational trick: he let the fluxions change, but he only let them change infinitesimally. Essentially, he gave them no time to flow. In Newton’s notation, y would change in that instant to (y + ẏo) while x changes to (x + ẋo). (The letter o represented the amount of time that had passed; it was almost a zero, but not quite, as we shall see.) The equation then becomes

  y + ẏo = (x + ẋo)² + (x + ẋo) + 1

  Multiplying out the (x + ẋo)² term gives us

  y + ẏo = x² + 2ẋox + ẋ²o² + x + ẋo + 1

  Rearranging the terms yields

  y + ẏo = (x² + x + 1) + 2ẋox + ẋo + ẋ²o²

  Since y = x² + x + 1, we can subtract y from the left side of the equation and x² + x + 1 from the right side of the equation and leave the system unchanged. That leaves us with

  ẏo = 2ẋox + ẋo + ẋ²o²

  Now comes the dirty trick. Newton declared that since o was really, really small, (ẋo)² was even smaller: it vanished. In essence, it was zero, and could be ignored. That gives us

  ẏo = 2ẋox + ẋo

  which means that ẏo/ẋo = 2x + 1, which is the slope of the tangent line at any point x on the curve (Figure 26). The infinitesimal time period o drops right out of the equation, ẏo/ẋo becomes ẏ/ẋ, and o need never be thought of again.
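Because the curve is a simple parabola, the term Newton throws away can be isolated exactly. In this Python sketch (my own check, not from the book, using exact fractions so no rounding can hide anything), the quotient (f(x + o) − f(x))/o comes out to 2x + 1 plus a leftover that is precisely o—the piece Newton declared too small to matter.

```python
from fractions import Fraction

def f(x):
    return x * x + x + 1

x = Fraction(3)
for o in (Fraction(1, 10), Fraction(1, 1000), Fraction(1, 10**6)):
    quotient = (f(x + o) - f(x)) / o   # Newton's ratio of changes over the instant o
    leftover = quotient - (2 * x + 1)  # what remains after the "real" slope 2x + 1
    print(o, quotient, leftover)       # the leftover is exactly o itself
```

The leftover never quite vanishes while o is nonzero, and setting o to zero outright would mean dividing by zero—exactly the bind described above.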

  The method gave the right answer, but Newton’s vanishing act was very troubling. If, as Newton insisted, (ẋo)² and (ẋo)³ and higher powers of o were equal to zero, then ẋo itself must be equal to zero.* On the other hand, if ẋo was zero, then dividing by ẋo as we do toward the end is the same thing as dividing by zero—as is the very last step of getting rid of the o in the top and bottom of the ẏo/ẋo expression. Division by zero is forbidden by the logic of mathematics.

  Figure 26: To find the slope at a point on the parabola y = x² + x + 1, use the formula 2x + 1.

  Newton’s method of fluxions was very dubious. It relied upon an illegal mathematical operation, but it had one huge advantage. It worked. The method of fluxions not only solved the tangent problem, it also solved the area problem. Finding the area under a curve (or a line, which is a type of curve)—an operation we now call integration—is nothing more than the reverse of differentiation. Just as differentiating the curve y = x² + x + 1 gives you an equation for the slope of the tangent—y = 2x + 1—integrating the curve y = 2x + 1 gives you a formula for the area under the curve. This formula is y = x² + x + 1; the area underneath the curve between the boundaries x = a and x = b is simply (b² + b + 1) − (a² + a + 1) (Figure 27). (Technically, the formula is y = x² + x + c, where c is any constant you choose. The process of differentiation destroys information, so the process of integration doesn’t give you exactly the answer you are looking for unless you add another bit of information.)
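The claim that integration undoes differentiation can be spot-checked with thin rectangles. In the Python sketch below (illustrative names of my own, not from the text), we sum rectangles under y = 2x + 1 between a = 1 and b = 4 and compare against the formula (b² + b + 1) − (a² + a + 1) = 21 − 3 = 18.

```python
def antiderivative(x):
    return x**2 + x + 1      # integrates y = 2x + 1 (up to an arbitrary constant c)

def area_by_rectangles(a, b, n=100_000):
    """Approximate the area under y = 2x + 1 with n thin left-edge rectangles."""
    dx = (b - a) / n
    return sum((2 * (a + i * dx) + 1) * dx for i in range(n))

a, b = 1, 4
exact = antiderivative(b) - antiderivative(a)  # 21 - 3 = 18
print(exact, area_by_rectangles(a, b))         # the rectangle sum closes in on 18
```

The two routes—slicing into near-zero widths and running differentiation in reverse—land on the same number, which is exactly what makes the dubious method so hard to reject.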

  Calculus is the combination of these two tools, differentiation and integration, in one package. Though Newton broke some very important mathematical rules by toying with the powers of zero and infinity, calculus was so powerful that no mathematician could reject it.

  Nature speaks in equations. It is an odd coincidence. The rules of mathematics were built around counting sheep and surveying property, yet these very rules govern the way the universe works. Natural laws are described with equations, and equations, in a sense, are simply tools where you plug in numbers and get another number out. The ancients knew a few of these equation-laws, like the law of the lever, but with the beginning of the scientific revolution these equation-laws sprang up everywhere. Kepler’s third law described the time it takes for planets to complete an orbit: r³/t² = k for time t, distance r, and a constant k. In 1662, Robert Boyle showed that if you take a sealed container with a gas in it, squishing the container would increase the pressure inside: pressure p times volume v was always a constant—pv = k for a constant k. In 1676, Robert Hooke figured out that the force exerted by a spring, f, was a negative constant, –k, multiplied by the distance, x, that you’ve stretched it: f = –kx. These early equation-laws were extremely good at expressing simple relationships, but equations have a limitation—their constancy—which prevented them from being universal laws.

  Figure 27: To find the area under the line y = 2x + 1, use the formula x² + x + 1.

  As an example, let’s take the famous equation we all learned in high school: rate times time equals distance. It shows how far you get, x miles, when you run with a certain velocity, v miles per hour, for a time, t hours: vt = x; after all, miles per hour times hours equals miles. This equation is very useful when you are calculating how long it will take to get from New York to Chicago on a train that moves exactly 120 miles an hour. But how many things really move at a constant rate like a train in a math problem? Drop a ball, and it moves faster and faster; in this case, x = vt is quite simply wrong. For the case of a dropped ball, x = gt²/2, where g is the acceleration due to gravity. On the other hand, if you’ve got an increasing force on the ball, x might equal something like t³/3. Rate times time equals distance is not a universal law; it doesn’t apply under all conditions.

  Calculus allowed Newton to combine all these equations into one grand set of laws—laws that applied in all cases, under all conditions. For the first time, science could see the universal laws that underlie all of these little half laws. Even though mathematicians knew that calculus was deeply flawed—thanks to the mathematics of zero and infinity—they quickly embraced the new mathematical tools. For the truth is, nature doesn’t speak in ordinary equations. It speaks in differential equations, and calculus is the tool that you need to pose and solve these differential equations.

  Differential equations are not like the everyday equations that we are all familiar with. An everyday equation is like a machine; you feed numbers into the machine and out pops another number. A differential equation is also like a machine, but this time you feed equations into the machine and out pop new equations. Plug in an equation that describes the conditions of the problem (is the ball moving at a constant rate, or is a force acting on the ball?) and out pops the equation that encodes the answer that you seek (the ball moves in a straight line or in a parabola). One differential equation governs all of the uncountable numbers of equation-laws. And unlike the little equation-laws that sometimes hold and sometimes don’t, the differential equation is always true. It is a universal law. It is a glimpse at the machinery of nature.

  Newton’s calculus—his method of fluxions—did just this by tying together concepts like position, velocity, and acceleration. When Newton denoted position with the variable x, he realized that velocity is simply the fluxion—what modern mathematicians call the derivative—of x: ẋ. And acceleration is nothing more than the derivative of velocity, ẍ. Going from position to velocity to acceleration and back again is as simple as differentiating (adding another dot) or integrating (removing a dot). With that notation in hand, Newton was able to create a simple differential equation that describes the motion of all objects in the universe: F = mẍ, where F is the force on an object and m is its mass. (Actually, this is not quite a universal law, as the equation only holds when the mass of an object is constant. The more general version of Newton’s law is F = ṗ, where p is an object’s momentum. Of course, Newton’s equations were eventually refined further by Einstein.) If you’ve got an equation that tells you about the force that is being applied on an object, the differential equation reveals exactly how the object moves. For instance, if you have a ball in free fall, it moves in a parabola, while a frictionless spring wobbles back and forth forever, and a spring with friction slowly comes to rest (Figure 28). As different as these outcomes seem, they are all governed by the same differential equation.
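One way to watch motion "pop out" of a differential equation is to march it forward in tiny time steps. The Python sketch below (a numerical illustration under assumed values m = k = 1, not anything Newton wrote) feeds Hooke’s spring force into F = mẍ and follows the frictionless wobble for one full period, after which the spring is back near its starting stretch.

```python
import math

m, k = 1.0, 1.0          # assumed mass and spring constant (illustrative values)
x, v = 1.0, 0.0          # start stretched one unit, at rest
dt = 0.001               # a tiny time step, standing in for Newton's o
steps = int(2 * math.pi / dt)   # one period of this spring lasts 2*pi time units

for _ in range(steps):
    a = -k * x / m       # acceleration from F = m*a with the spring force F = -k*x
    v += a * dt          # velocity is the fluxion of position...
    x += v * dt          # ...so nudge the position along by the new velocity
print(x)                 # back near the starting stretch: x close to 1
```

Swap in a different force equation—constant gravity, or a spring with a friction term—and the same few update lines trace out the parabola or the dying wobble instead; that is the sense in which one differential equation governs many equation-laws.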

  Likewise, if you know the way an object moves—whether it be a toy ball or a giant planet—the differential equation can tell you what kind of force is being applied. (Newton’s triumph was taking the equation that described the force of gravity and figuring out the shapes of the planets’ orbits. People had suspected that the force was proportional to 1/r², and when ellipses popped right out of Newton’s differential equations, people began to believe that Newton was correct.) Despite the power of calculus, the key problem remained. Newton’s work was based on a very shaky foundation—dividing zero by itself. His rival’s work had the same flaw.

 
