Letters to a Young Mathematician


by Ian Stewart


  The military learned long ago that when soldiers cross a bridge, they should fall out of step. The synchronized impact of several hundred right feet can set up vibrations and do serious damage. I suspect this fact was known to the Romans. No one expected a similar kind of synchronization to arise when individuals walked across the bridge at random. But people on a bridge respond to the movement of the bridge, and they do so in a similar way and at roughly the same time. So when the bridge moved slightly—perhaps in response to a gust of wind—the people on it started to move in synchrony. The more closely the footsteps of the people became synchronized, the more the bridge moved, which in turn increased the synchronization of the footsteps. Soon the whole bridge was swaying from side to side.

  Physicists use mathematics to study what they amusingly call the real world. It is real, in a sense, but much of physics addresses rather artificial aspects of reality, such as a lone electron or a solar system with only one planet. Physicists are often scathing about proofs, partly out of fear, but also because experiment gives them a very effective way to check their assumptions and calculations. If an intuitively plausible idea leads to results that agree with experiment, there’s not much point in holding the entire subject up for ten, fifty, or three hundred years until someone finds a rigorous proof. I agree entirely. For example, there are calculations in quantum field theory that have no rigorous logical justification, but agree with experiment to an accuracy of nine decimal places. It would be silly to suggest that this agreement is an accident, and that no physical principle is involved.

  The mathematicians’ take on this, though, goes further. Given such impressive agreement, it is equally silly not to try to find out the deep logic that justifies the calculation. Such understanding should advance physics directly; even if it does not, it will surely advance mathematics. And mathematics often impinges on physics indirectly, probably on a different branch of physics; when it does, it’s a bonus.

  So that’s why proofs are necessary, Meg. Even for people who would prefer not to be bothered with them.

  9

  Can’t Computers Solve Everything?

  Dear Meg,

  Yes, computers can calculate much faster than humans, and much more accurately, which I gather has led some of your friends to question the value of your studies. There is a point of view that this makes mathematicians obsolete. I can reassure you that we’re not.

  Anyone who thinks computers can supplant mathematicians understands neither computing nor mathematics. It’s like thinking we don’t need biologists now that we have microscopes. Possibly the underlying misconception is that math is just arithmetic, and since computers can do arithmetic faster and more accurately than humans, why do we need the humans? The answer, of course, is that math is not just arithmetic.

  Microscopes made biology more interesting, not less, by opening up new ways to approach the subject. It’s the same with computers and mathematics. One very interesting and important thing computers have done for the mathematician is to make it possible to carry out experiments quickly and easily. These experiments test conjectures, occasionally reveal that what we were hoping to prove is wrong, and—with increasing frequency—perform gigantic calculations that allow us to prove theorems that would otherwise be out of reach. Sometimes the people who think math consists of big sums are right.

  Take, for example, the Goldbach conjecture. In 1742 Christian Goldbach, an amateur mathematician, wrote to Leonhard Euler that as far as he could verify, every even number greater than 2 is a sum of two primes. For instance, 8 = 3 + 5, 10 = 5 + 5, 100 = 3 + 97. Calculating by hand, Goldbach could test his conjecture on just a small list of numbers. On a modern computer you can quickly test billions of numbers: the current record is 2 × 10^17. Every time anyone has carried out such a test, they have found that Goldbach was right. However, no proof of his conjecture (which would turn it into a theorem) has yet been found.
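
  To give the flavor of the kind of test a computer makes easy, here is a small Python sketch of my own (the helper names primes_up_to and goldbach_witness are invented, and the limit is kept modest): it sieves the primes up to a bound and checks that every even number in range really is a sum of two primes.

```python
# A small-scale version of the computer experiments described above:
# check that every even number from 4 up to LIMIT is a sum of two primes.

def primes_up_to(n):
    """Sieve of Eratosthenes: return the set of primes <= n."""
    sieve = [True] * (n + 1)
    sieve[0:2] = [False, False]
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = [False] * len(sieve[p * p :: p])
    return {i for i, is_prime in enumerate(sieve) if is_prime}

def goldbach_witness(even_n, primes):
    """Return a pair of primes summing to even_n, or None if there is none."""
    for p in sorted(primes):
        if p > even_n:
            break
        if (even_n - p) in primes:
            return p, even_n - p
    return None

LIMIT = 10_000
primes = primes_up_to(LIMIT)
for n in range(4, LIMIT + 1, 2):
    assert goldbach_witness(n, primes) is not None, f"counterexample at {n}!"
print(f"Every even number from 4 to {LIMIT} is a sum of two primes.")
print("For example, 100 =", goldbach_witness(100, primes))   # gives (3, 97)
```

  No counterexample has ever appeared at any scale; but, as the rest of this letter argues, that is evidence, not proof.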

  Why worry? If a billion experiments agree with Goldbach, surely what he said must be true?

  The problem is that mathematicians use theorems to prove other theorems. A single false statement in principle ruins the whole of mathematics. (In practice, we would notice the falsehood, isolate the offending statement, and avoid using it.) For example, the number π is rather annoying, and it would be nice to get rid of it. We might decide that there’s no real harm in replacing π with 3 (as some say the Bible does, but only if you take an obscure passage extremely literally) or by 22/7. And if all you want to use π for is to calculate perimeters of circles and the like, a good enough approximation will do no harm.

  However, if you really think that π is equal to 3, the repercussions are much greater. A simple line of reasoning reveals an unintended consequence. If π = 3, then π − 3 = 0, and dividing both sides by π − 3 (which, thanks to Archimedes, we know is nonzero, so division is permitted), we get 1 = 0. Multiplying by any number we wish, we prove that all numbers are zero; it follows that any two numbers are the same. So when you go into your bank to draw $100 from your account, the teller can give you $1 and insist that this makes no difference; and in fact it doesn’t, because you can then walk into Neiman Marcus or a Rolls-Royce dealership and explain that your $1 equals a million. More interestingly, murderers should not be jailed, because killing one person is the same as killing none; on the other hand, someone who has never touched drugs in their life should be imprisoned, since possession of no cocaine is the same as possessing a million tons of it. And so on . . .
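
  Written out as displayed equations, that chain of deductions looks like this (a LaTeX sketch of the argument in the paragraph above; x stands for an arbitrary number):

```latex
% The deduction above, one inference per line.
\begin{align*}
\pi = 3 &\implies \pi - 3 = 0 \\
  &\implies \frac{\pi - 3}{\pi - 3} = \frac{0}{\pi - 3}
      \quad\text{(division permitted, since in reality } \pi \neq 3\text{)} \\
  &\implies 1 = 0 \\
  &\implies x \cdot 1 = x \cdot 0 \quad\text{for any number } x \\
  &\implies x = 0 \quad\text{for any number } x .
\end{align*}
```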

  Mathematical facts fit together and lead, via logic, to new facts. A deduction is only as strong as its weakest link. To be safe, all weak links must be removed. It would therefore be dangerous to verify Goldbach’s conjecture on a computer for numbers up to, say, twenty digits, and conclude that it must be true.

  Surely, you think, Ian’s being a bit pedantic. If a statement is true for such big numbers, it has to be true for any numbers, no?

  No. It doesn’t work like that. For a start, twenty digits, in the mathematical universe, is tiny. The great ocean of numbers stretches to infinity, and even a number with a billion digits is, in some contexts, small. A classic example occurs in prime number theory. Although there is no evident pattern to the sequence of primes, there are statistical regularities. Sometime before 1849, when he wrote a letter about the discovery, Carl Friedrich Gauss found a good approximate formula relating the number of primes less than some specified number to the “logarithmic integral” of that number. It was soon noticed that the approximation always seems to be slightly larger than the correct value. Again, computer experiments verified that property for billions of numbers.
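
  The experiment is easy to repeat on a small scale. The Python sketch below is my own illustration (Gauss’s approximation is taken here as the offset logarithmic integral, the integral of dt/ln t from 2 to x, computed by crude numerical integration; the function names are invented): at every size a home computer can reach, the approximation comes out slightly larger than the true count.

```python
# Compare the number of primes <= x with Gauss's logarithmic integral Li(x).
import math

def prime_count(x):
    """Number of primes <= x, by a simple sieve of Eratosthenes."""
    sieve = [True] * (x + 1)
    sieve[0:2] = [False, False]
    for p in range(2, int(x ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = [False] * len(sieve[p * p :: p])
    return sum(sieve)

def logarithmic_integral(x):
    """Li(x) = integral from 2 to x of dt / ln t, by the midpoint rule."""
    steps = max(100_000, x)              # plenty of subdivisions at these sizes
    h = (x - 2) / steps
    return sum(h / math.log(2 + (i + 0.5) * h) for i in range(steps))

for x in (1_000, 10_000, 100_000, 1_000_000):
    pi_x, li_x = prime_count(x), logarithmic_integral(x)
    print(f"x = {x:>9,}   primes <= x = {pi_x:>8,}   "
          f"Li(x) = {li_x:>11,.1f}   excess = {li_x - pi_x:8.1f}")

# The excess Li(x) - pi(x) is positive at every size we can reach directly,
# yet Littlewood proved that its sign changes infinitely often.
```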

  But the generalization is false. In 1914, John Littlewood proved that the correct value and the logarithmic integral approximation swap places infinitely often. But no one knew of a specific number for which the approximation was smaller than the correct value, until Littlewood’s student Samuel Skewes proved, in 1933, that such a number must have at most 10^10000000000000000000000000000000000 digits. That’s thirty-four zeros in the exponent. Moreover, his proof involved an assumption, a notorious unproved statement known as the Riemann hypothesis. In 1955 Skewes proved that if you do not assume the Riemann hypothesis, then those thirty-four zeros in the exponent must be increased to one thousand zeros. And this gigantic number, please recall, is not the one we seek: it is the number of digits that number possesses.

  Skewes’s number has since been reduced to 1.4 × 10^316, which is tiny by comparison.

  With numbers this large, the kind of experiment we can perform on a computer is totally irrelevant. And in number theory, that size is rather typical.

  It wouldn’t matter if all we were trying to do was approximate primes in terms of logarithmic integrals. But mathematics deduces new facts from old ones. And as we saw with π, if the old fact is actually wrong, consequent deductions can destroy the entire basis of mathematics. No matter how extensive the evidence of computer calculations may be, we still have to use the grey matter between our ears. Computers can be valuable assistants, but only when a lot of human thought has gone into setting up the computations. We have not yet been made obsolete by our own creations.

  10

  Mathematical Storytelling

  Dear Meg,

  In my last letter I told you why proofs are necessary. Now I turn to the other question I raised: what is a proof?

  The earliest recorded proofs, along with the notion that proofs are necessary, occur in Euclid. His Elements, written around 300 BC, laid out much of Greek geometry in a logical sequence. It begins with two types of fundamental assumptions, which Euclid calls axioms and common notions. Both are basically a list of assumptions that will be made. For instance, axiom 4 states that “all right angles are equal,” and common notion 2 states that “if equals be added to equals, the wholes are equal.” The main difference is that the axioms are about geometric things and the common notions are about equality. The modern approach lumps them all together as axioms.

  These assumptions are stated in order to provide a logical starting point. No attempt is made to prove them; they are the “rules of the game” for Euclidean geometry. You are free to disagree with the assumptions or invent new ones, if you wish; but if you do, you are playing a different game with different rules. All Euclid is trying to do is make the rules of his game explicit, so that the players know where they stand.

  This is the axiomatic method, which remains in use today. Later mathematicians observed gaps in Euclid’s logic, unstated assumptions that should really be included as axioms. For example, any line passing through the center of a circle must meet its circumference if the line is extended sufficiently far. Some tried to prove Euclid’s most complex axiom, that parallel lines neither converge nor diverge from each other. Eventually, Euclid’s good taste was demonstrated when it was realized that all such attempts are bound to fail. To muddy the philosophical waters, various deep difficulties in the axiomatic approach have appeared over the centuries, such as Gödel’s discovery that if mathematics is logically consistent, then we can never prove it to be so. We can live with Gödelian uncertainties if we have to, and we do have to.

  Textbooks of mathematical logic base their descriptions of “proof” on Euclid’s model. A proof, they tell us, is a finite sequence of logical deductions that begins with either axioms or previously proved results and leads to a conclusion, known as a theorem. Provided each step obeys the rules of logical inference—which can be found in textbooks on elementary logic—then the theorem is proved.
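
  Proof assistants make this textbook picture quite literal. The fragment below is purely my own illustration (Lean 4 syntax, with an invented theorem name): a complete formal proof in which each line is a single inference the checker accepts.

```lean
-- Hypotheses: p implies q, and q implies r.  Conclusion: p implies r.
-- Every step must be a rule of inference the proof checker already accepts.
theorem chain (p q r : Prop) (hpq : p → q) (hqr : q → r) : p → r := by
  intro hp              -- assume p
  exact hqr (hpq hp)    -- from p obtain q, then from q obtain r
```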

  If you dispute the axioms, you are also free to dispute the theorem. If you prefer alternative rules of inference, you are free to invent your own. The claim is only that with those rules of inference, acceptance of the stated axioms implies acceptance of the theorem. If you want to make π equal to 3, you have to accept that all numbers are equal. If you want different numbers to be different, you have to accept that π is not 3. What you can’t do is pick and mix, having π equal to 3 but zero different from one. It’s as simple and sensible as that.

  This definition of “proof” is all very well, but it is rather like defining a symphony as “a sequence of notes of varying pitch and duration, beginning with the first note and ending with the last.” Something is missing. Moreover, hardly anybody ever writes a proof the way the logic books describe. In 1999 I was musing on this discrepancy, having accepted an invitation to a conference in Abisko, Sweden, on “Stories of Science and the Science of Stories.” Abisko is north of the Arctic Circle, in Lapland, and a group of about thirty science fiction writers, popular science writers, journalists, and historians of science were going to spend a week there seeking common ground. Wondering what I was going to say to them, I suddenly realized what a proof really is.

  A proof is a story.

  It is a story told by mathematicians to mathematicians, expressed in their common language. It has a beginning (the hypotheses) and an end (the conclusion), and it falls apart instantly if there are any logical gaps. Anything routine or obvious can safely be omitted, because the audience knows about such things and wants the narrator to get on with the main story line. If you were reading a spy novel and the hero was dangling above a chasm on the end of a burning rope hanging from a helicopter, you would not want to read ten pages on the force of gravity and the physiological effects of a high-velocity impact. You’d want to find out how he saves himself. It is the same with proofs. “Don’t waste time solving the quadratic, I know how to do that. But tell me, why do its solutions determine the stability of the limit cycle?”

  In my paper (reprinted in Mission to Abisko) I said this: “If a proof is a story, then a memorable proof must tell a ripping yarn. What does that tell us about how to construct proofs? Not that we need a formal language in which every tiny detail can be checked algorithmically, but that the story line should come out clearly and strongly. It isn’t the syntax of the proof that needs improvement: it’s the semantics.” That is, the essence of a proof is not its “grammar” but its meaning.

  In that paper, I contrasted this admittedly vague and woolly notion with a much more formal one, the idea of a “structured proof,” advocated by the computer scientist Leslie Lamport. Structured proofs make explicit every step in the logic, be it deep or trivial, clever or obvious. Lamport makes a strong case in favor of structured proofs as a teaching aid, and there’s no doubt that they can be very effective in making sure that students really do understand details. Part of that case is an anecdote about a famous result, the Schröder–Bernstein theorem. Georg Cantor had found a way to count how many members a set has, even when that set is infinite, using a generalized type of number that he called a “transfinite number.” The Schröder–Bernstein theorem tells us that if two transfinite numbers are each less than or equal to the other, then they must actually be equal. Lamport was teaching a course based on the classic text General Topology by John Kelley, which includes a proof of the theorem, but when the extra details needed for the students were added, Kelley’s proof turned out to be wrong.

  Years later, Lamport could no longer locate the error. The proof seemed obviously correct. But five minutes spent writing down a structured proof revealed the mistake again.

  I was worried, because I’d put a proof of the Schröder–Bernstein theorem into one of my own texts. I looked up Kelley’s proof for myself but could not discover a mistake. So I e-mailed Lamport, who suggested that I write down a structured proof. Instead, I worked my way very systematically through Kelley’s argument, in effect creating a structured proof in an informal way, and eventually I spotted the error.

  There is a classic proof of the Schröder–Bernstein theorem that starts with two sets, corresponding to the two transfinite cardinals. Each set is split into three pieces, using a notion of “ancestor” invented purely for this particular proof, and the pieces are matched. In effect, this proof tells a story about the two sets and their component pieces. It’s not the most gripping of stories, but it has a clear plot and a memorable punch line. Fortunately, I had used the classic proof in my textbook, and not Kelley’s reworking of it. Because Kelley had told the wrong story. Attempting, I suspect, to simplify the classic version, he had overdone things and violated Einstein’s dictum: “as simple as possible, but not more so.”

  The presence of this mistake supports Lamport’s view about the value of structured proofs. But, to quote my paper, “There’s another interpretation, not contradictory but complementary, which is that Kelley told a good story badly. It’s rather as if he’d introduced the Three Musketeers as Pooh, Piglet, and Eeyore. Some parts of the story would [still] have made sense—their inseparable companionship, for instance—but others, such as the incessant swordplay, would not. . . .”

  If we think of a proof in the textbook sense, all proofs are on the same footing, just as all pieces of music look like tadpoles sitting on a wire fence when expressed in musical notation, unless you are an expert and can “hear” sheet music in your head. But when we think of a proof as a story, there are good stories and bad ones, gripping tales and boring ones, like stirring or insipid music. There is an aesthetic of proof, so that a really good story can be a thing of beauty.

  Paul Erdős had an unorthodox line on the beauty of proofs. Erdős was an eccentric and brilliant mathematician who collaborated with more people than anyone else on the planet; you can read his life story in Paul Hoffman’s The Man Who Loved Only Numbers. Anybody who coauthored a paper with him is said to have an “Erdős number” of one. Their collaborators have Erdős number two, and so on. It’s the mathematician’s version of the Oracle of Kevin Bacon, in which actors are linked to Bacon by their appearances in the same movies, or by their appearances with actors who’ve appeared with Bacon, and so on. My Erdős number is three. I never collaborated with Erdős, and I’m not on the list of people with Erdős number two, but one of my collaborators is.

  Anyway, Erdős reckoned that in Heaven, God had a book that contained all the best proofs. If Erdős was really impressed by a proof, he declared it to be from “The Book.” In his view, the mathematician’s job was to sneak a look over God’s shoulder and pass on the beauty of His creation to the rest of His creatures.

  Erdős’s deity’s Book is a book of stories. I ended my Abisko talk like this: “Psychologists now tell us that without emotional underpinnings, the rational part of our mind doesn’t work. It seems that we can only be rational about things if we have an emotional commitment to such a recently evolved technique as rationality . . . I don’t think I could get very emotional about a structured proof, however elegant. But when I can really feel the power of a mathematical story line, something happens in my mind that I can never forget . . . I’d rather we improved the storytelling of proofs, instead of dissecting them into bits that can be placed in stacks of file cards and sorted into order.”

 
