Letters to a Young Mathematician


by Ian Stewart


  More-serious ecological models arose in the 1920s, when the Italian mathematician Vito Volterra was trying to understand a curious effect that had been observed by Adriatic fishermen. During World War I, when the amount of fishing was reduced, the numbers of food fish didn’t seem to increase, but the population of sharks and rays did.

  Volterra wondered why a reduction in fishing benefited the predators more than it benefited the prey. To find out, he devised a mathematical model, based on the sizes of the shark and food-fish populations and how each affected the other. He discovered that instead of settling down to steady values, populations underwent repetitive cycles: large populations became smaller but then increased, over and over again. The shark population peaked sometime after the food-fish population did.

  You don’t need numbers to understand why. With a moderate number of sharks, the food fish can reproduce faster than they are eaten, so their population soars. This provides more food for the sharks, so their population also begins to climb; but they reproduce more slowly, so there is a delay. As the sharks increase in number, they eat more food fish, and eventually there are so many sharks that the food-fish population starts to decline. Now the food fish cannot support so many sharks, so the shark numbers also drop, again with a delay. With the shark population reduced, the food fish can once more increase . . . and so it goes.

  The math makes this story crystal clear (within the assumptions built into the model) and also lets us work out how the average population sizes behave over a complete cycle, something the verbal argument can’t handle. Volterra’s calculations showed that a reduced level of fishing decreases the average number of food fish over a cycle but increases the average number of sharks. Which is just what happened during World War I.
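  Volterra's model is usually written today as the Lotka–Volterra equations. As a rough illustration of his conclusion (the parameter values and the uniform per-capita fishing rate here are my own assumptions, chosen only for demonstration), one can integrate the equations numerically and compare time-averaged populations with and without fishing:

```python
# Illustrative sketch of a Lotka-Volterra predator-prey model with a
# fishing term. All parameter values are invented for demonstration.

def simulate(fishing, alpha=1.0, beta=0.5, gamma=0.5, delta=0.2,
             x0=4.0, y0=2.0, dt=0.01, t_end=200.0):
    """Integrate  dx/dt = (alpha - fishing)*x - beta*x*y   (food fish)
                  dy/dt = -(gamma + fishing)*y + delta*x*y (sharks)
    with classical RK4, and return the time-averaged populations."""
    def deriv(x, y):
        return ((alpha - fishing) * x - beta * x * y,
                -(gamma + fishing) * y + delta * x * y)

    x, y = x0, y0
    sum_x = sum_y = 0.0
    steps = int(t_end / dt)
    for _ in range(steps):
        k1 = deriv(x, y)
        k2 = deriv(x + dt / 2 * k1[0], y + dt / 2 * k1[1])
        k3 = deriv(x + dt / 2 * k2[0], y + dt / 2 * k2[1])
        k4 = deriv(x + dt * k3[0], y + dt * k3[1])
        x += dt / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0])
        y += dt / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1])
        sum_x += x
        sum_y += y
    return sum_x / steps, sum_y / steps

fish_war, shark_war = simulate(fishing=0.0)      # wartime: little fishing
fish_peace, shark_peace = simulate(fishing=0.2)  # peacetime fishing

print(f"average food fish: war {fish_war:.2f}, peace {fish_peace:.2f}")
print(f"average sharks:    war {shark_war:.2f}, peace {shark_peace:.2f}")
```

  Both populations cycle, yet the averages shift exactly as Volterra found: with less fishing, the average food-fish count goes down and the average shark count goes up.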

  All of the examples I’ve told you about so far involve “advanced” mathematics. But simple math can also be illuminating. I am reminded of one of the many stories mathematicians tell each other after all the nonmathematicians leave the room. A mathematician at a famous university went to look around the new auditorium, and when she got there, she found the dean of the faculty staring at the ceiling and muttering to himself, “. . . forty-five, forty-six, forty-seven . . .” Naturally she interrupted the count to find out what it was for. “I’m counting the lights,” said the dean. The mathematician looked up at the perfect rectangular array of lights and said, “That’s easy, there are . . . twelve that way, and . . . eight that way. Twelve eights are ninety-six.” “No, no,” said the dean impatiently. “I want the exact number.”

  Even when it comes to something as simple as counting, we mathematicians see the world differently from other folk.

  6

  How Mathematicians Think

  Dear Meg,

  I would say you’ve lucked out. If you’re hearing about people like Newton, Leibniz, Fourier, and others, it means your freshman calculus teacher has a sense of the history of his subject; and your question “How did they think of these things?” suggests that he’s teaching calculus not as a set of divine revelations (which is how it’s too often done) but as real problems that were solved by real people.

  But you’re right, too, that the answer “Well, they were geniuses” isn’t really adequate. Let me see if I can dig a little deeper. The general form of your question—which is a very important one—is “How do mathematicians think?”

  You might reasonably conclude from looking at textbooks that all mathematical thought is symbolic. The words are there to separate the symbols and explain what they signify; the core of the description is heavily symbolic. True, some areas of mathematics make use of pictures, but those are either rough guides to intuition or visual representations of the results of calculations.

  There is a wonderful book about mathematical creation, The Psychology of Invention in the Mathematical Field, by Jacques Hadamard. It was first published in 1945, and it’s still in print and extremely relevant today. I recommend you pick up a copy. Hadamard makes two main points. The first is that most mathematical thinking begins with vague visual images and is only later formalized with symbols. About ninety percent of mathematicians, he tells us, think that way. The other ten percent stick to symbols the entire time. The second is that ideas in mathematics seem to arise in three stages.

  First, it is necessary to carry out quite a lot of conscious work on a problem, trying to understand it, exploring ways to approach it, working through examples in the hope of finding some useful general features. Typically, this stage bogs down in a state of hopeless confusion, as the real difficulty of the problem emerges.

  At this point it helps to stop thinking about the problem and do something else: dig in the garden, write lecture notes, start work on another problem. This gives the subconscious mind a chance to mull over the original problem and try to sort out the confused mess that your conscious efforts have turned it into. If your subconscious is successful, even if all it manages is to get part way, it will “tap you on the shoulder” and alert you to its conclusions. This is the big “aha!” moment, when the little lightbulb over your head suddenly switches on.

  Finally, there is another conscious stage of writing everything down formally, checking the details, and organizing it so that you can publish it and other mathematicians can read it. The traditions of scientific publication (and of textbook writing) require that the “aha!” moment be concealed, and the discovery presented as a purely rational deduction from known premises.

  Henri Poincaré, probably my favorite among the great mathematicians, was unusually aware of his own thought processes and lectured about them to psychologists. He called the first stage “preparation,” the second “incubation followed by illumination,” and the third “verification.” He laid particular emphasis on the role of the subconscious, and it is worth quoting one famous section of his essay Mathematical Creation:

  For fifteen days I strove to prove that there could not be any functions like those I have since called Fuchsian functions. I was then very ignorant; every day I seated myself at my table, stayed an hour or two, tried a great number of combinations and reached no results.

  One evening, contrary to my custom, I drank black coffee and could not sleep. Ideas rose in crowds; I felt them collide until pairs interlocked, so to speak, making a stable combination. By the next morning I had established the existence of a class of Fuchsian functions, those which come from the hypergeometric series; I had only to write out the results, which took but a few hours.

  This was but one of several occasions on which Poincaré felt that he was “present at his own unconscious work.”

  A recent experience of my own also fits Poincaré’s three-stage model, though I did not have the feeling that I was observing my own subconscious. A few years ago, I was working with my long-term collaborator Marty Golubitsky on the dynamics of networks. By “network” I mean a set of dynamical systems that are “coupled together,” with some influencing the behavior of others. The systems themselves are the nodes of the network—think of them as blobs—and two nodes are joined by an arrow if one of them (at the tail end) influences the other (at the head end). For example, each node might be a nerve cell in some organism, and the arrows might be connections along which signals pass from one cell to another.

  Marty and I were particularly interested in two aspects of these networks: synchrony and phase relations. Two nodes are synchronous if the systems they represent do exactly the same thing at the same moment. A trotting dog synchronizes diagonally opposite legs: when the front left foot hits the ground, so does the back right. Phase relations are similar, but with a time lag. The dog’s front right foot (which is similarly synchronized with its back left foot) hits the ground half a cycle later than the front left foot. This is a half-period phase shift.
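  Synchrony of this kind is easy to see in miniature. The toy network below is not the one from our work—it is a generic two-node illustration I am inventing here, with a made-up node dynamic—but it shows the basic phenomenon: two identical systems, each coupled to the other, settle into doing exactly the same thing.

```python
# A minimal two-node network (an invented example, not the one from the
# text): identical nodes with dynamics f(x) = x - x**3, each coupled to
# the other. Strong enough coupling drives the node states together.

def run(c, x1=0.5, x2=1.5, dt=0.01, steps=2000):
    """Euler integration of dx_i/dt = f(x_i) + c*(x_j - x_i)."""
    f = lambda x: x - x**3
    for _ in range(steps):
        d1 = f(x1) + c * (x2 - x1)
        d2 = f(x2) + c * (x1 - x2)
        x1, x2 = x1 + dt * d1, x2 + dt * d2
    return x1, x2

a, b = run(c=2.0)
print(abs(a - b))  # the difference shrinks toward zero: synchrony
```

  Here the synchrony is forced by the perfect symmetry of the network—swapping the two nodes leaves it unchanged—which is exactly the intuition, described below, that Marcus’s example overturned.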

  We knew that synchrony and phase shifts are common in symmetric networks. In fact, we had worked out the only plausible symmetric network that could explain all of the standard gaits of four-legged animals. And we’d sort of assumed, because we couldn’t think of any other reason, that symmetry was also necessary for synchrony and phase shifts to occur.

  Then Marty’s postdoc Marcus Pivato invented a very curious network that had synchrony and phase shifts but no symmetry. It had sixteen nodes, which synchronized in clusters of four, and each cluster was separated from one of the others by a phase shift of one quarter of a period. The network was almost symmetric at first sight, but when you looked closely you could see that the apparent symmetry was imperfect.

  To us, Marcus’s example made absolutely no sense. But there was no question that his calculations were correct. We could check them, and we did, and they worked. But we were left with a nagging feeling that we didn’t really understand why they worked. They involved a kind of coincidence, which definitely happened, but “shouldn’t have.”

  While Marty and Marcus worked on other topics, I worried about Marcus’s example. I went to Poland for a conference and to give some lectures, and for the whole of that week I doodled networks on notepads. I doodled all the way from Warsaw to Krakow on the train, and two days later I doodled all the way back. I felt I was close to some kind of breakthrough, but I found it impossible to write down what it might be.

  Tired and fed up, I abandoned the topic, shoved the doodles into a filing cabinet, and occupied my time elsewhere. Then one morning I woke up with a strange feeling that I should dig out the file and take another look at the doodles. Within minutes I had noticed that all the doodles that did what I wanted had a common feature, one that I’d totally missed when I was doodling them. Not only that; all of the doodles that didn’t do what I wanted lacked that feature. At that moment I “knew” what the answer to the puzzle was, and I could even write it down symbolically. It was neat, tidy, and very simple.

  The trouble with that kind of knowledge, as my biologist friend Jack Cohen often says, is that it feels just as certain when you’re wrong. There is no substitute for proof. But now, because I knew what to prove and had a fair idea of why it was true, that final stage didn’t take very long. It was blindingly obvious how to prove that the feature that I had observed in my doodles was sufficient to make happen everything I thought should happen. Proving that it was also necessary was trickier, but not greatly so. There were several relatively obvious lines of attack, and the second or third worked.

  Problem solved.

  This description fits Poincaré’s scenario so perfectly that I worry that I have embroidered the tale and rearranged it to make it fit. But I’m pretty sure that it really did happen the way I’ve just told you.

  What was the key insight? I’ve just looked through my notes from the Warsaw–Krakow train, and they are full of networks whose nodes have been colored. Red, blue, green . . . At some stage I had decided to color the nodes so that synchronous nodes got the same color. Using the colors, I could spot hidden regularities in the networks, and those regularities were what made Marcus’s example work. The regularities weren’t symmetries, not in the technical sense used by mathematicians, but they had a similar effect.

  Why had I been coloring the networks? Because the colors made it easy to pick out the synchronous clusters. I had colored dozens of networks and never noticed what the colors were trying to tell me. The answer had been staring me in the face. But only when I stopped working on the problem did my subconscious have the freedom to sort it out.

  It took a week or two to turn this insight into formal mathematics. But the visual thinking—the colors—came first, and my subconscious had to grapple with the problem before I was consciously aware of the answer. Only then did I start to reason symbolically.

  There’s more to the tale. Once the formal system was sorted out, I noticed a deeper idea, which underlay the whole thing. The similarities between colored cells formed a natural algebraic structure. In our previous work on symmetric systems we had put a similar structure in from the very start, because all mathematicians know how to formalize symmetries. The concept concerned is called a group. But Marcus’s network has no symmetry, so groups won’t help. The natural algebraic structure that replaces the symmetry group in my colored diagrams is something less well known, called a “groupoid.”

  Pure mathematicians have been studying groupoids for years, for their own private reasons. Suddenly I realized that these esoteric structures are intimately connected with synchrony and phase shifts in networks of dynamical systems. It’s one of the best examples, among the topics that I’ve been involved with, of the mysterious process that turns pure math into applications.

  Once you understand a problem, many aspects of it suddenly become much simpler. As mathematicians the world over say, everything is either impossible or trivial. We immediately found lots of simpler examples than Marcus’s. The simplest has just two nodes and two arrows.

  Research is an ongoing activity, and I think we have to go further than Hadamard and Poincaré to understand the process of invention, or discovery, in math. Their three-stage description applies to a single “inventive step” or “advance in understanding.” Solving most research problems involves a whole series of such steps. In fact, any step may break down into a series of sub-steps, and those substeps may also break down in a similar manner. So instead of a single three-stage process, we get a complicated network of such processes. Hadamard and Poincaré described a basic tactic of mathematical thought, but research is more like a strategic battle. The mathematician’s strategy employs that tactic over and over again, on different levels and in different ways.

  How do you learn to become a strategist? You take a leaf from the generals’ book. Study the tactics and strategies of the great practitioners of the past and present. Observe, analyze, learn, and internalize. And one day, Meg—closer than you might think—other mathematicians will be learning from you.

  7

  How to Learn Math

  Dear Meg,

  By now, you’ve surely noticed that the quality of teaching in a college setting varies widely. This is because, for the most part, your professors and their teaching assistants are not hired, kept on, or promoted based on their ability to teach. They are there to do research, whereas teaching, while necessary and important for any number of reasons, is decidedly secondary for many of them. Many of your professors will be thrilling lecturers and devoted mentors; others, you’ll find, will be considerably less thrilling and devoted. You’ll have to find a way to succeed even with teachers whose greatest talents are not necessarily on display in the classroom.

  I once had a lecturer who, I was convinced, had discovered a way to make time stand still. My classmates disagreed with this thesis but felt that his sleep-inducing powers must surely have military uses.

  The vast amounts that have been written about teaching math might give the impression that all of the difficulties encountered by math students are caused by teachers, and it is always the teacher’s responsibility to sort out the student’s problems. This is, of course, one of the things teachers are paid to do, but there is some onus on the student as well. You need to understand how to learn.

  Like all teaching, math instruction is rather artificial. The world is complicated and messy, with lots of loose ends, and the teacher’s job is to impose order on the confusion, to convert a chaotic set of episodes into a coherent narrative. So your learning will be divided into specific modules, or courses, and each course will have a carefully specified syllabus and a text. In some settings, such as some American public schools, the syllabus will specify exactly which pages of the text, and which problems, are to be tackled on a given day. In other countries and at more advanced levels, the lecturer has more of a free hand to pick his or her own path through the material, and the lecture notes take the place of a textbook.

  Because the lectures progress through set topics, one step at a time, it is easy for students to think that this is how to learn the material. It is not a bad idea to work systematically through the book, but there are other tactics you can use if you get stuck.

  Many students believe that if you get stuck, you should stop. Go back, read the offending passage again; repeat until light dawns—either in your mind or outside the library window.

  This is almost always fatal. I always tell my students that the first thing to do is read on. Remember that you encountered a difficulty, don’t try to pretend that all is sweetness and light, but continue. Often the next sentence, or the next paragraph, will resolve your problem.

  Here is an example, from my book The Foundations of Mathematics, written with David Tall. On page 16, introducing the topic of real numbers, we remark that “the Greeks discovered that there exist lines whose lengths, in theory, cannot be measured exactly by a rational number.”

  One might easily grind to a halt here—what does “measured by” mean? It hasn’t been defined yet, and— oh help—it’s not in the index. And how did the Greeks discover this fact anyway? Am I supposed to know it from a previous course? From this course? Did I miss a lecture? The previous pages of the book offer no assistance, however many times you reread them. You could spend hours getting nowhere.

  So don’t. Read on. The next few sentences explain how Pythagoras’s theorem leads to a line whose length is the square root of two, and state that there is no rational number m/n such that (m/n)² = 2. This is then proved, cleverly using the fact that every whole number can be expressed as a product of primes in only one way. The result is summarized as “no rational number can have square 2, and hence that the hypotenuse of the given triangle does not have rational length.”
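  For the curious, the classical argument via unique prime factorization runs roughly as follows (this is my sketch of the standard proof, not the book's exact wording):

```latex
% Sketch: no rational number m/n has square 2.
Suppose $(m/n)^2 = 2$ with $m, n$ whole numbers and $n \neq 0$. Then
\[
  m^2 = 2n^2 .
\]
Factor each side into primes. Squaring a whole number doubles every
exponent in its factorization, so the prime $2$ occurs an \emph{even}
number of times in $m^2$, but an \emph{odd} number of times in $2n^2$.
Since every whole number has only one prime factorization, $m^2$ cannot
equal $2n^2$: a contradiction. Hence no rational number has square $2$,
and the hypotenuse of length $\sqrt{2}$ is not of rational length.
```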

  By now, everything has probably slotted into place. “Measured by” presumably means “has a length equal to.” The Greek reasoning alluded to in such an offhand manner is no doubt the argument using Pythagoras’s theorem; it helps to know that Pythagoras was Greek. And you should be able to spot that “the square root of two is not rational” is equivalent to “no rational number can have square 2.”
