Figure 28: Different motions, all governed by the same differential equation
In 1673 an esteemed German lawyer and philosopher visited London. His name was Gottfried Wilhelm Leibniz. He and Newton would tear the scientific world asunder, though neither would solve the problem of the zeros that suffused calculus.
Nobody knows whether the thirty-three-year-old Leibniz encountered Newton’s unpublished work during his trip to England. But between 1673 and 1676, when Leibniz next visited London, he, too, had developed calculus, although in a slightly different form.
Looking back, it appears that Leibniz formulated his version independently of Newton, though the matter is still being debated. The two had a correspondence in the 1670s, making it very difficult to establish how they influenced each other. However, though the two theories came up with the same answers, their notations—and their philosophies—were very different.
Newton disliked infinitesimals, the little os in his fluxion equations that sometimes acted like zeros and sometimes like nonzero numbers. In a sense these infinitesimals were infinitely small, smaller than any positive number you could name, yet still somehow greater than zero. To the mathematicians of the time, this was a ridiculous concept. Newton was embarrassed by the infinitesimals in his equations, and he swept them under the rug. The os in his calculations were only intermediaries, crutches that vanished miraculously by the end of the computation. On the other hand, Leibniz reveled in the infinitesimal. Where Newton wrote o Leibniz wrote dx—an infinitesimally tiny little piece of x. These infinitesimals survived unchanged throughout Leibniz’s calculations; indeed the derivative of y with respect to x was not the infinitesimal-free ratio of fluxions ẏ/ẋ but the ratio of infinitesimals dy/dx.
With Leibniz’s calculus, these dys and dxs can be manipulated just like ordinary numbers, which is why modern mathematicians and physicists usually use Leibniz’s notation rather than Newton’s. Leibniz’s calculus had the same power as Newton’s, and thanks to its notation, even a bit more. Nevertheless, underneath all the mathematics, Leibniz’s differentials still had the same forbidden 0/0 nature that plagued Newton’s fluxions. As long as this flaw remained, calculus would be based upon faith rather than logic. (In fact, faith was very much on Leibniz’s mind when he derived new mathematics, such as the binary numbers. Any number can be written as a string of zeros and ones; to Leibniz, this was the creation ex nihilo, the creation of the universe out of nothing more than God/1 and void/0. Leibniz even tried to get the Jesuits to use this knowledge to convert the Chinese to Christianity.)
It would be many years before mathematicians began to free calculus from its mystical underpinnings, for the mathematical world was busy fighting over who invented calculus.
There is little doubt that Newton came up with the idea first—in the 1660s—but he did not publish his work for 20 years. Newton was a magician, theologian, and alchemist as well as a scientist (for instance, he used biblical texts to conclude that the second coming of Christ would occur around 1948) and many of his views were heretical. As a result, he was secretive and reluctant to reveal his work. In the meantime, while Newton sat upon his discovery, Leibniz developed his own calculus. The two promptly accused each other of plagiarism, and the English mathematical community, which backed Newton, pulled away from the Continental mathematicians, who supported Leibniz. As a result, the English stuck to Newton’s fluxion notation rather than adopting Leibniz’s superior differential notation—cutting off their noses to spite their faces. English mathematicians fell far behind their Continental counterparts when it came to developing calculus.
A Frenchman, not an Englishman, would be remembered for taking the first nibble at the mysterious zeros and infinities that suffused calculus; mathematicians meet l’Hôpital’s rule when they first learn calculus. Oddly enough, it was not l’Hôpital who came up with the rule that bears his name.
Born in 1661, Guillaume-François-Antoine de l’Hôpital was a marquis—and was thus very wealthy. He had an early interest in mathematics, and though he spent some time in the army, becoming a cavalry captain, he soon turned back to his true love of math.
L’Hôpital bought himself the best teacher that money could buy: Johann Bernoulli, a Swiss mathematician and one of the early masters of Leibniz’s calculus of infinitesimals. In 1692, Bernoulli taught l’Hôpital calculus. L’Hôpital was so enthralled by the new mathematics that he persuaded Bernoulli to send him all Bernoulli’s new mathematical discoveries for the marquis to use as he desired, in return for cash. The result was a textbook. In 1696, l’Hôpital’s Analyse des infiniment petits became the first textbook on calculus and introduced much of Europe to the Leibnizian version. Not only did l’Hôpital explain the fundaments of calculus in his textbook, he also included some exciting new results. The most famous is known as l’Hôpital’s rule.
L’Hôpital’s rule took the first crack at the troubling 0/0 expressions that were popping up throughout calculus. The rule provided a way to figure out the true value of a mathematical function that goes to 0/0 at a point: the value of the fraction is equal to the derivative of the top expression divided by the derivative of the bottom expression. For instance, consider the expression x/(sin x) when x = 0; x is 0, as is sin x, so the expression becomes 0/0. Using l’Hôpital’s rule, we see that the expression goes to 1/(cos x), since 1 is the derivative of x and cos x is the derivative of sin x. Because cos x = 1 when x = 0, the whole expression equals 1/1 = 1. Clever manipulations could also bring l’Hôpital’s rule to bear on other odd expressions: ∞/∞, 0⁰, 0^∞, and ∞⁰.
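To see the rule at work numerically, here is a minimal Python sketch—my own check, not anything from the original text—that evaluates x/(sin x) and 1/(cos x) for values of x shrinking toward zero; both columns head toward 1, just as the rule predicts for this 0/0 form.

```python
import math

# Evaluate x/sin(x) and 1/cos(x) as x shrinks toward 0.
# Both approach 1, illustrating l'Hopital's rule for the 0/0 form.
for x in [0.1, 0.01, 0.001, 0.0001]:
    print(x, x / math.sin(x), 1 / math.cos(x))
```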
All of these expressions, but especially 0/0, could take on any value you desire them to have, depending on the functions you put in the numerator and denominator. This is why 0/0 is dubbed indeterminate. It was no longer a complete mystery; mathematicians could extract some information about 0/0 if they approached it very carefully. Zero was no longer an enemy to be avoided; it was an enigma to be studied.
Soon after l’Hôpital’s death in 1704, Bernoulli started implying that l’Hôpital had stolen his work. At the time the mathematical community rejected Bernoulli’s claims; not only had l’Hôpital proved himself an able mathematician, but Johann Bernoulli had a tarnished reputation. He had previously tried to claim credit for another mathematician’s proof. (The other mathematician happened to be his brother, Jakob.) In this case, though, Johann Bernoulli’s claim was justified. His correspondence with l’Hôpital backs his story. Alas for Bernoulli, the name for l’Hôpital’s rule stuck.
L’Hôpital’s rule was extremely important for resolving some of the difficulties with 0/0, but the underlying problem remained. Newton’s and Leibniz’s calculus depends upon dividing by zero—and on numbers that miraculously disappear when you square them. L’Hôpital’s rule examines 0/0 with tools that were built upon 0/0 to begin with. It is a circular argument. And as physicists and mathematicians all over the world were beginning to use calculus to explain nature, cries of protest emanated from the church.
In 1734, seven years after Newton’s death, an Irish bishop, George Berkeley, wrote a book entitled The Analyst, Or a Discourse Addressed to an Infidel Mathematician. (The mathematician in question was most likely Edmund Halley, always a supporter of Newton.) In The Analyst, Berkeley pounced on Newton’s (and Leibniz’s) dirty tricks with zeros.
Calling infinitesimals “ghosts of departed quantities,” Berkeley showed how making these infinitesimals disappear with impunity can lead to a contradiction. He concluded that “he who can digest a second or third fluxion, a second or third difference, need not, methinks, be squeamish about any point in divinity.”
Though mathematicians of the day sniped at Berkeley’s logic, the good bishop was entirely correct. In those days calculus was very different from other realms of mathematics. Every theorem in geometry had been rigorously proved; by taking a few rules from Euclid and proceeding, very carefully, step by step, a mathematician could show how a triangle’s angles sum to 180 degrees, or any other geometric fact. On the other hand, calculus was based on faith.
Nobody could explain how those infinitesimals disappeared when squared; they just accepted the fact because making them vanish at the right time gave the correct answer. Nobody worried about dividing by zero when conveniently ignoring the rules of mathematics explained everything from the fall of an apple to the orbits of the planets in the sky. Though it gave the right answer, using calculus was as much an act of faith as declaring a belief in God.
The End of Mysticism
A quantity is something or nothing; if it is something, it has not yet vanished; if it is nothing, it has literally vanished. The supposition that there is an intermediate state between these two is a chimera.
—JEAN LE ROND D’ALEMBERT
In the shadow of the French Revolution, the mystical was driven out of calculus.
Despite calculus’s shaky foundations, by the end of the eighteenth century, mathematicians all over Europe were having stunning successes with the new tool. Colin Maclaurin and Brook Taylor, perhaps the best British mathematicians in the era of isolation from the Continent, discovered how to use calculus to rewrite functions in a totally different form. For instance, after using some tricks in calculus, mathematicians realized that the function 1/(1 – x) can be written as
1 + x + x² + x³ + x⁴ + x⁵ + …
Though the two expressions look dramatically different, they are (with some caveats) exactly the same.
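As a rough illustration—my own sketch, not the book’s, with x = 0.5 chosen arbitrarily—the snippet below compares partial sums of 1 + x + x² + … with 1/(1 − x). The match holds for values of x between −1 and 1; outside that range the partial sums run away instead of settling down, which is one of those caveats.

```python
# Compare partial sums of 1 + x + x^2 + ... with 1/(1 - x).
# The agreement only holds when |x| < 1 -- one of the "caveats" in the text.
x = 0.5
partial = 0.0
for n in range(20):
    partial += x**n
print(partial, 1 / (1 - x))   # both are essentially 2.0
```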
Those caveats, which stem from the properties of zero and infinity, can become very important, however. The Swiss mathematician Leonhard Euler, inspired by calculus’s easy manipulation of zeros and infinities, used reasoning similar to Taylor’s and Maclaurin’s and “proved” that the sum
… + 1/x³ + 1/x² + 1/x + 1 + x + x² + x³ + …
equals zero. (To convince yourself that something fishy is going on, plug in the number 1 for x and see what happens.) Euler was an excellent mathematician—in fact, he was one of the most prolific and influential in history—but in this case the careless manipulation of zero and infinity led him astray.
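One common reconstruction of the reasoning—my gloss, not the book’s—is that the right half of the sum, 1 + x + x² + …, behaves like 1/(1 − x), while the left half, 1/x + 1/x² + …, behaves like 1/(x − 1); add the two formulas and you get zero. The catch is that no single value of x makes both halves settle down at once, as a few lines of Python suggest:

```python
# Partial sums of ... + 1/x^2 + 1/x + 1 + x + x^2 + ... for a sample x.
# Whichever x you pick, one half of the sum blows up, so the "total" of zero
# never corresponds to an actual convergent value.
x = 2.0
left = sum(x**-n for n in range(1, 30))   # 1/x + 1/x^2 + ...  settles (near 1 here)
right = sum(x**n for n in range(0, 30))   # 1 + x + x^2 + ...  grows without bound
print(left, right)
```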
It was a foundling who finally tamed the zeros and infinities in calculus and rid mathematics of its mysticism. In 1717 an infant was found on the steps of the church of Saint Jean Baptiste le Rond in Paris. In memory of that occasion, the child was named Jean Le Rond, and he eventually took the surname d’Alembert. Though he was raised by an impoverished working-class couple—his foster father was a glazier—it turns out that his birth father was a general and his mother was an aristocrat.
D’Alembert is best known for his collaboration on the famed Encyclopédie of human knowledge—a 20-year effort with coauthor Denis Diderot. But d’Alembert was more than an encyclopedist. It was d’Alembert who realized that it was important to consider the journey as well as the destination. He was the one who hatched the idea of limit and solved calculus’s problems with zeros.
Once again, let us consider the story of Achilles and the tortoise, which is an infinite sum of steps that get closer and closer to zero. Manipulating an infinite sum—whether it is in the Achilles problem or in finding the area underneath a curve or finding an alternate form for a mathematical function—caused mathematicians to come up with contradictory results.
D’Alembert realized that the Achilles problem vanishes if you consider the limit of the race. In our example on page 41, at every step the tortoise and Achilles get closer and closer to the two-foot mark. No step takes them farther away or even keeps them at the same distance; each moment brings them closer to that mark. Thus, the limit of that race—its ultimate destination—is at the two-foot mark. This is where Achilles passes the tortoise.
But how do you prove that two feet is actually the limit of the race? I ask you to challenge me. Give me a tiny distance, no matter how small, and I will tell you when both Achilles and the tortoise are less than that tiny distance away from the limit.
As an example, let’s say that you challenge me with a distance of one-thousandth of a foot. Well, a few calculations later, I would tell you that after the 11th step, Achilles is 977 millionths of a foot away from the two-foot mark, while the tortoise is half that distance away; I have met your challenge with 23 millionths of a foot to spare. What if you challenged me with a distance of one-billionth of a foot? After 31 steps, Achilles is 931 trillionths of a foot away from the target—69 trillionths closer than you needed—while the tortoise, again, is half that distance away. No matter how you challenge me, I can meet that challenge by telling you a time when Achilles is closer to the mark than you require. This shows that, indeed, Achilles is getting arbitrarily close to the two-foot mark as the race progresses: two feet is the limit of the race.
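The challenge game is easy to mechanize. The sketch below is my own, assuming the setup implied by the numbers above—after n steps Achilles falls short of the two-foot mark by ½ⁿ⁻¹ feet, and the tortoise by half that—and it reproduces the 11-step and 31-step answers in the text.

```python
# Given a challenge distance eps, find the first step after which
# Achilles is closer than eps to the two-foot mark.
# Assumes Achilles' shortfall after n steps is 1/2**(n-1) feet.
def meet_challenge(eps):
    n = 1
    while 2.0 ** -(n - 1) >= eps:
        n += 1
    return n, 2.0 ** -(n - 1)

print(meet_challenge(1e-3))   # (11, ~0.000977)  -- 977 millionths of a foot
print(meet_challenge(1e-9))   # (31, ~9.31e-10) -- 931 trillionths of a foot
```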
Now, instead of thinking of the race as a sum of infinite parts, think of it as a limit of finite sub-races. For instance, in the first race Achilles runs to the one-foot mark. Achilles has run
1
1 foot in all. In the next race Achilles does the first two parts—first running 1 foot, and then a half foot. In total, Achilles has run
1 + ½
1.5 feet in all. The third race takes him as far as
1 + ½ + ¼
1.75 feet, all told. Each of these sub-races is finite and well-defined; we never encounter an infinity.
What d’Alembert did informally—and what the Frenchman Augustin Cauchy, the Czech Bernhard Bolzano, and the German Karl Weierstrass would later formalize—was to rewrite the infinite sum
1 + ½ + ¼ + ⅛ + … + ½ⁿ + …
as the expression
limit (as n goes to ∞) of 1 + ½ + ¼ + ⅛ + … + ½ⁿ
It’s a very subtle change in notation, but it makes all the difference in the world.
When you have an infinity in an expression, or when you divide by zero, all the mathematical operations—even those as simple as addition, subtraction, multiplication, and division—go out the window. Nothing makes sense any longer. So when you deal with an infinite number of terms in a series, even the + sign doesn’t seem so straightforward. That is why the infinite sum of +1 and -1 we saw at the beginning of the chapter seems to equal 0 and 1 at the same time.
However, by putting this limit sign in front of a series, you separate the process from the goal. In this way you avoid manipulating infinities and zeros. Just as Achilles’ sub-races are each finite, each partial sum in a limit is finite. You can add them, divide them, square them; you can do whatever you want. The rules of mathematics still work, since everything is finite. Then, after all your manipulations are complete, you take the limit: you extrapolate and figure out where the expression is headed.
Sometimes that limit doesn’t exist. For instance, the infinite sum of +1 and -1 does not have a limit. The value of the partial sums flips back and forth between 1 and 0; it’s not really heading to a predictable destination. But with Achilles’ race, the partial sums go from 1 to 1.5 to 1.75 to 1.875 to 1.9375 and so forth; they get closer and closer to two. The sums have a destination—a limit.
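To make the contrast concrete, here is a small sketch—mine, not the book’s—that prints the first few partial sums of the two series side by side: the alternating +1/−1 series just flips between 1 and 0, while Achilles’ series creeps toward 2.

```python
# Partial sums of 1 - 1 + 1 - 1 + ...  versus  1 + 1/2 + 1/4 + ...
flip, creep = 0.0, 0.0
for n in range(8):
    flip += (-1) ** n     # +1, -1, +1, ...
    creep += 0.5 ** n     # 1, 1/2, 1/4, ...
    print(flip, creep)
# flip: 1, 0, 1, 0, ...        -- no destination, no limit
# creep: 1, 1.5, 1.75, 1.875…  -- heading to 2, the limit
```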
The same thing goes for taking the derivative. Instead of dividing by zero as Newton and Leibniz did, modern mathematicians divide by a number that they let approach zero. They do the division—perfectly legally, since there are no zeros—then they take the limit. The dirty tricks of making squared infinitesimals disappear and then dividing by zero to get a derivative were no longer necessary (see appendix C).
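A minimal sketch of that procedure—my own illustration, using f(x) = x² at x = 3, whose derivative is 6: form the ordinary quotient (f(x + h) − f(x))/h for smaller and smaller nonzero h, do only legal divisions, and read off where the results are headed.

```python
# Difference quotient for f(x) = x*x at x = 3: never divide by zero,
# just let h shrink and read off the limit (the derivative, 6).
def f(x):
    return x * x

x = 3.0
for h in [0.1, 0.01, 0.001, 0.0001]:
    print(h, (f(x + h) - f(x)) / h)   # 6.1, 6.01, 6.001, 6.0001, ...
```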
This logic may seem like splitting hairs, like an argument as mystical as Newton’s “ghosts,” but in reality it’s not. It satisfies the mathematician’s strict requirement of logical rigor. There is a very firm, consistent basis for the concept of limits. Indeed, you can even dispense with the “I challenge you” argument entirely, as there are other ways of defining a limit, such as calling it the convergence of two numbers, the lim sup and lim inf. (I have a wonderful proof of this, but alas, this book is too small to contain it.) Since limits are logically airtight, by defining a derivative in terms of limits, it becomes airtight as well—and puts calculus on a solid foundation.
No longer was it necessary to divide by zeros. Mysticism vanished from the realm of mathematics and logic ruled once more. The peace lasted until the Reign of Terror.
Chapter 6
Infinity’s Twin
[THE INFINITE NATURE OF ZERO]
God made the integers; all else is the work of man.
—LEOPOLD KRONECKER
Zero and infinity always looked suspiciously alike. Multiply zero by anything and you get zero. Multiply infinity by anything and you get infinity. Dividing a number by zero yields infinity; dividing a number by infinity yields zero. Adding zero to a number leaves the number unchanged. Adding a number to infinity leaves infinity unchanged.
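Floating-point arithmetic bakes some of these rules in. The aside below is my own illustration of IEEE-style infinity in Python, not anything from the book, and note that division by zero itself is simply left undefined here: Python raises an error rather than handing back infinity.

```python
import math

inf = math.inf
print(5 * inf)        # inf  -- multiply infinity by anything, get infinity
print(5 + inf)        # inf  -- add a number to infinity, infinity is unchanged
print(5 / inf)        # 0.0  -- divide a number by infinity, get zero
print(5 * 0, 5 + 0)   # 0 5  -- zero's side of the coin
# 5 / 0 raises ZeroDivisionError rather than returning infinity.
```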
These similarities had been obvious since the Renaissance, but mathematicians had to wait until the end of the French Revolution before they finally unraveled zero’s big secret.
Zero and infinity are two sides of the same coin—equal and opposite, yin and yang, equally powerful adversaries at either end of the realm of numbers. The troublesome nature of zero lies with the strange powers of the infinite, and it is possible to understand the infinite by studying zero. To learn this, mathematicians had to venture into the world of the imaginary, a bizarre world where circles are lines, lines are circles, and infinity and zero sit on opposite poles.
The Imaginary
…a fine and wonderful refuge of the divine spirit—almost an amphibian between being and non-being.
—GOTTFRIED WILHELM LEIBNIZ
Zero is not the only number that was rejected by mathematicians for centuries. Just as zero suffered from Greek prejudice, other numbers were ignored as well, numbers that made no geometric sense. One of these numbers, i, held the key to zero’s strange properties.
Algebra presented another way of looking at numbers, entirely divorced from the Greek geometric ideas. Instead of trying to measure the area inside a parabola as the Greeks did, early algebraists sought to find the solutions to equations that encode relationships between different numbers. For instance, the simple equation 4x - 12 = 0 describes how an unknown number x is related to 4, 12, and 0. The task of the algebra student is to figure out what number x is. In this case x is 3. Substitute 3 for x in the above equation and you will quickly see that the equation is satisfied; 3 is a solution for the equation 4x - 12 = 0. In other words, 3 is a zero or a root of the expression 4x - 12.
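The same check can be written in a line or two of Python—my illustration, not the book’s: solve 4x − 12 = 0 and confirm that the solution is a root, that is, that substituting it makes the expression zero.

```python
# Solve 4x - 12 = 0 and verify the root.
x = 12 / 4                 # x = 3
print(x, 4 * x - 12)       # 3.0 0.0 -- substituting the root gives zero
```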