Hidden Harmonies
All has been just as it should and had to be. What is surprising now, however, is to take this operation (multiplying of a vector’s coordinates by themselves and then adding them all together) as a special case—with one vector—of what we did above with two vectors, X = (x1, . . . , xn) and Y = (y1, . . . , yn):

x1y1 + x2y2 + . . . + xnyn .
In keeping with that innermost doll we’re after, this expression is called the ‘inner product’ of the vectors X and Y, and is represented by ⟨X, Y⟩:

⟨X, Y⟩ = x1y1 + x2y2 + . . . + xnyn .
Notice that this inner product of two vectors isn’t a vector, but a number—a scalar, in the setting of vector spaces.
Wonderfully enough, it will let us recognize when two vectors are perpendicular. Take X = (4, 0) and Y = (0, 3):

⟨X, Y⟩ = 4 · 0 + 0 · 3 = 0 .
What about another pair of perpendiculars, X = (1, −2) and Y = (4, 2)?

⟨X, Y⟩ = 1 · 4 + (−2) · 2 = 4 − 4 = 0 .
Could it be that two vectors X and Y are perpendicular if and only if their inner product is 0? Yes—and in fact its proof sped by in Chapter Six, disguised as the Law of Cosines! You will find the details worked out in this chapter’s appendix.
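A few lines of Python (an illustrative sketch, not from the text) make the test concrete: multiply matching coordinates, add them up, and check for 0.

```python
# Sketch of the inner product and the perpendicularity test.

def inner(X, Y):
    """Inner product: multiply matching coordinates, then add them all."""
    return sum(x * y for x, y in zip(X, Y))

print(inner((4, 0), (0, 3)))    # 4*0 + 0*3 = 0: perpendicular
print(inner((1, -2), (4, 2)))   # 1*4 + (-2)*2 = 0: perpendicular
print(inner((1, 1), (1, 2)))    # 1*1 + 1*2 = 3: not perpendicular
```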
Looking more broadly, we see a space of vectors (with attendant scalars) and an inner product defined on it, which lets us calculate length and judge perpendicularity. For any vector X in n-dimensional vector space, ‖X‖² = ⟨X, X⟩: its length, or norm, is the square root of its inner product with itself. Peculiar as they look, inner products seem to unlock inner doors.
We have, therefore, for any two perpendicular vectors X and Y:

‖X + Y‖² = ‖X‖² + ‖Y‖² .
This holds in any n-dimensional vector space for a set of mutually perpendicular vectors:

‖X1 + X2 + . . . + Xn‖² = ‖X1‖² + ‖X2‖² + . . . + ‖Xn‖² .
A Dijkstra dream of elegance has dawned: “The Pythagorean Theorem? Oh, you mean ‘for mutually perpendicular vectors, the squared norm of the sum equals the sum of the squared norms.’” Surely, though, we have here something more substantial than elegance—a clarifying effect on our rambling knowledge which isn’t decorative but structural. For we have found that we aren’t all at sea in n-space, but can always navigate just from a planar map within it. You saw this in the context of the n-dimensional box: the length of the new hypotenuse in (n + 1)-dimensional space is calculated from the new perpendicular in that space and the old hypotenuse in n-space: new hypotenuse² = old hypotenuse² + new perpendicular², on that two-dimensional plane, therefore.
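Here, as a numeric sketch (not from the text), is that phrasing at work: for the perpendicular pair X = (1, −2) and Y = (4, 2), the squared norm of the sum equals the sum of the squared norms.

```python
import math

def inner(X, Y):
    return sum(x * y for x, y in zip(X, Y))

def norm(X):
    # The length of X: the square root of its inner product with itself.
    return math.sqrt(inner(X, X))

X, Y = (1, -2), (4, 2)                     # perpendicular: inner(X, Y) == 0
S = tuple(a + b for a, b in zip(X, Y))     # the sum vector (5, 0)
print(norm(S) ** 2, norm(X) ** 2 + norm(Y) ** 2)   # both 25, up to rounding
```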
And can this view of space via vectors bring us now to that second voyage as well, and a continent hardly dreamed of in our milling about the close streets of the old capital? Past the next dimension and the next and the next after that, their sequence stretching endlessly away, might there be a space which was in itself infinite-dimensional? The great accommodating generality of vector spaces will let us see that there is; will let us grasp what finite needs of ours drive us to it; how we should imagine it, and ourselves in it; and how we are to find familiar shores there—such as the Pythagorean Theorem. You might think that such plans would materialize only in the gauziest panoramas—
All the most childish things of which idylls are made1
—in fact it is an ultimate accuracy, possible only in infinite dimensional space, to which the lust for ever improving approximation will lead us.
You saw in Chapter Five how adding higher and higher powers of x (each with the right coefficient) more and more closely matched the value of the sine function at a specific point. The whole of the infinite power series would give its value at that point precisely. Even with all its terms, however, the power series representation of a function, at a point, fits it less and less well as you move away from that point. Here you see several terms of the power series for sin x, at x = 0, peeling away from it as we move off from 0.
The urge from physics to analyze complex motion (heat diffusing across metal, waves crashing on the shore) into simple components led to a search for how to replace any function by a sum of more basic ones, not just at a point but over a suitable stretch. Some two hundred years ago in France, Joseph Fourier (a tailor’s son who became a mathematician, as well as Napoleon’s governor of Lower Egypt) stitched together a way to do this. As a result, Fourier series—no more than sines and cosines of various frequencies and amplitudes, added up—better and better match the required function over an interval as more and more terms appear, and match it exactly when the sum is infinite. The choice of trigonometric functions for these threads may strike you as peculiar, but the oddity is lessened if you’ve ever played with an oscilloscope, or seen in science fiction or hospital dramas those green waves coalescing into a straight line, since oscilloscopes make Fourier series visible.
Very real demands, therefore (from wavering bridges to magnetic resonance imaging)—not only a dream beyond induction—have led us to infinite-dimensional vector spaces, whose flexible neutrality gives us the perfect way to understand, build, and manipulate these wonderful series. Why infinite-dimensional? Because—while the vectors will now be functions, f (x), g(x), etc.—the axes e1, e2, . . . will no longer be (1,0, 0, . . . , 0), (0, 1, 0, 0, . . . , 0) and so on, but those basic components, sin mx and cos nx, for each of the infinitely many natural numbers m and n.
To set this up requires, as before, an inner product which will guarantee that these axes are mutually perpendicular, and will also let us derive a norm from it, just as in the finite case: ‖f‖ = √⟨f, f⟩. We needn’t fetch far: the inner product for functions will be analogous to the one we had. See this chapter’s appendix for details.
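As a sketch of the idea (the precise inner product is in the appendix; the integral form ⟨f, g⟩ = ∫ f(x)g(x) dx over [−π, π] is assumed here), a crude Riemann sum already shows sin x and sin 2x behaving like perpendicular axes:

```python
import math

def inner_fn(f, g, a=-math.pi, b=math.pi, n=10_000):
    """Assumed inner product of functions: the integral of f(x)g(x)
    over [a, b], approximated by a midpoint Riemann sum."""
    h = (b - a) / n
    return sum(f(a + (k + 0.5) * h) * g(a + (k + 0.5) * h)
               for k in range(n)) * h

s = lambda m: (lambda x: math.sin(m * x))   # the 'axis' sin(mx)
print(round(inner_fn(s(1), s(2)), 6))   # ~0: sin x and sin 2x perpendicular
print(round(inner_fn(s(3), s(3)), 6))   # ~pi: the squared norm of sin 3x
```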
To see how quickly a Fourier series converges to the function it represents, let’s take f (x) = x, whose series on the interval (−π, π) is

x = 2 ( sin x − (sin 2x)/2 + (sin 3x)/3 − (sin 4x)/4 + . . . )
and look at graphs with 1, 3, 5, and 10 terms:
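The same convergence can be watched numerically; this sketch (an illustration, not from the text) sums the series at the sample point x = 1:

```python
import math

def fourier_x(x, terms):
    """Partial sum of the Fourier series for f(x) = x on (-pi, pi):
    2*(sin x - sin 2x/2 + sin 3x/3 - ...)."""
    return 2 * sum((-1) ** (n + 1) * math.sin(n * x) / n
                   for n in range(1, terms + 1))

x = 1.0
for terms in (1, 3, 5, 10, 100):
    print(terms, round(fourier_x(x, terms), 4))   # oscillates in toward 1.0
```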
The eye takes this in more quickly than the mind struggling with notation. The ear is even faster: we hear by breaking the sound wave f (x) that strikes the eardrum into the terms of such series, and find that the coefficients in these terms measure the sound’s unique qualities. A sound is familiar if it has the same Fourier coefficients as another we have heard. An inner product indeed.
We now have our infinite dimensional vector space, and the instruments for steering our way through it. These lead us at once to the exact analogy of the Pythagorean Theorem in n-dimensional space: just as there, if we have a set {Xi} of mutually perpendicular vectors, then

‖X1 + X2 + X3 + . . .‖² = ‖X1‖² + ‖X2‖² + ‖X3‖² + . . . ,
so long as the infinite sum on the right converges.
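For f(x) = x this identity can be checked by hand and by machine: with the integral inner product over (−π, π) assumed as before, ‖f‖² = ∫ x² dx = 2π³/3, while each axis sin nx carries coefficient 2(−1)ⁿ⁺¹/n and squared norm π. A sketch:

```python
import math

# Parseval's identity checked numerically for f(x) = x on (-pi, pi).
# Assumed inner product: <f, g> = integral of f(x)g(x) over (-pi, pi),
# so each axis sin(nx) has squared norm pi.

lhs = 2 * math.pi ** 3 / 3                          # ||f||^2 = ∫ x^2 dx
rhs = math.pi * sum((2 / n) ** 2 for n in range(1, 200_000))
print(lhs, rhs)   # the partial sum on the right climbs toward 2*pi^3/3
```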
As everywhere in math, the form remains, however various the content that takes it on. This latest incarnation of the Pythagorean Theorem is called Parseval’s Identity, after Marc-Antoine Parseval, who—twenty years before Fourier—was thinking in terms of summing infinite series, without Fourier’s specifics in mind. He is remembered as a French nobleman who survived the Terror, wrote poetry against Napoleon, failed (even after five attempts) to be elected to the Académie des sciences, and was eccentric—in unspecified ways. This equation having been his identity may explain why his mortal one remains as shadowy as the space he pictured it in.
DISTANCE AND CLARITY
If you think that the infinity we have been savoring belongs to the ivoriest of towers, look about you: we live in the midst of limits. Our speculations ever speciate, our generalizations endlessly generate, so that a whole is grasped only through the vision of a horizon. Each of us stands at the center of a circle of infinite radius (since, as Pascal first saw, that center will be everywhere). This is no idle metaphysics: in a world founded on the straight and the rectangular, where like Masons we all feign living on the Square, the paths we make are curved and our expectations jagged. Yet errors average out, the curves are sums of infinitely many infinitesimally small straight lines, and the Pythagorean Theorem holds the key to understanding both.
Suppose that a curve like this
is the graph of a smooth function f (x) from point A = (a, f (a)) to B = (b, f (b)).
What is its length l? The roughest approximation l1 would be the length of the straight line joining its end-points:
and by the Pythagorean Theorem,

l1 = √((b − a)² + (f(b) − f(a))²) .
A better approximation, l2, breaks the straight line in two,
Keep subdividing the interval from a to b, getting more and more successively shorter segments, each the hypotenuse hk of a triangle with base xk and height yk. Here is a typical one:

hk = √(xk² + yk²) .
So the length of the curve is approximately

l ≈ √(x1² + y1²) + √(x2² + y2²) + . . . + √(xn² + yn²) .
But it is the exact length we’re after: the limit, therefore, of the sum of these hypotenuse secants, each touching the curve not at two points (no matter how close) but ultimately so short as to touch it only at one—not secants anymore, that is, but tangents.
We want something like

l = lim (n→∞) Σk √(xk² + yk²)        (1)
with the sides xk and yk diminished to 0.
Here differential calculus comes to our rescue: that application of the old Scottish proverb “Mony a mickle maks a muckle.” It guarantees that a curve smooth enough (no peaks or breaks) to have a slope at every point on it (if the graph is of f (x), these slopes are read off by the derivative function, f ′(x)) will have at least one point pk in every interval xk, where the slope of the graph is the same as the slope of the hypotenuse secant:

f ′(pk) = yk / xk , that is, yk = f ′(pk) · xk .
This means we can rewrite (1) this way:

l = lim (n→∞) Σk √(xk² + (f ′(pk) · xk)²) = lim (n→∞) Σk √(1 + f ′(pk)²) · xk .
But we want not only infinitely many of these xk: we want the width of each to shrink to 0 (which will also constrain the pk in it to a single x):

l = lim (xk→0) Σk √(1 + f ′(pk)²) · xk .
Now integral calculus seizes on this clumsy expression and recognizes it as the sum from a to b of the ‘areas’ of trapezoids with height √(1 + f ′(x)²) and infinitesimal width dx—so that the length of the curve becomes exactly

l = ∫ab √(1 + f ′(x)²) dx .
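A numeric sketch (with the sample curve f(x) = x² on [0, 1], an assumption chosen for illustration) shows the chord sums closing in on the integral:

```python
import math

f = lambda x: x * x     # sample smooth curve, y = x^2 on [0, 1]
fp = lambda x: 2 * x    # its derivative

def chord_length(n):
    """Sum of hypotenuses over n subdivisions: the Pythagorean chords."""
    xs = [k / n for k in range(n + 1)]
    return sum(math.hypot(xs[k + 1] - xs[k], f(xs[k + 1]) - f(xs[k]))
               for k in range(n))

def integral_length(n=100_000):
    """The limit the chords approach: ∫ sqrt(1 + f'(x)^2) dx, midpoint rule."""
    h = 1 / n
    return sum(math.sqrt(1 + fp((k + 0.5) * h) ** 2) for k in range(n)) * h

for n in (1, 2, 10, 1000):
    print(n, chord_length(n))       # lengthening chains of chords
print("integral:", integral_length())
```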
Thus the curves of life are each the living limit of yet another avatar of the Pythagorean Theorem, whose image multiplies and miniaturizes and so more fully pervades thought, the closer we look.
Why is the Pythagorean Theorem everywhere? Because life is movement, movement begets measurement, and we measure distance along shortest paths, which the Theorem gives us. Must it be so? Not really: anything will serve to measure d (X, Y), the distance between two points X and Y, if it yields positive values (and 0 only if X is Y); makes the distance from here to there the same as that from there to here—d (X, Y) = d (Y, X); and makes d (X, Y) the shortest length between them: that is, d (X, Y) ≤ d (X, Z) + d (Z, Y), with equality only if Z is on the segment XY.
This ‘triangle inequality’ is the signature on the guarantee that you have bought a genuine measurer of distance.
Here are a few examples of other walnut and brass measuring devices:
Let d (X, Y) be 0 if X = Y and 1 if X ≠ Y. This satisfies all three of our requirements (though it does seem a bit odd that it makes all nonzero distances the same).
If we stick to a line, d (X, Y) = | X − Y | works perfectly well, as does d (X, Y) = | X − Y | / (1 + | X − Y |). This last has the peculiar feature that all distances will be less than or equal to 1.
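All three candidates can be put through the three requirements mechanically; in this sketch the bounded metric is assumed to take the standard form |X − Y| / (1 + |X − Y|), a common choice for a distance whose values never exceed 1.

```python
import itertools

# Three candidate distance functions on the line.

def d_discrete(x, y):
    return 0 if x == y else 1              # every nonzero distance is 1

def d_usual(x, y):
    return abs(x - y)

def d_bounded(x, y):
    # assumed bounded form: all values stay below 1
    return abs(x - y) / (1 + abs(x - y))

pts = [-2.0, 0.0, 1.5, 7.0]
for d in (d_discrete, d_usual, d_bounded):
    # the triangle inequality, checked over all sample triples
    assert all(d(x, y) <= d(x, z) + d(z, y)
               for x, y, z in itertools.product(pts, repeat=3))
print("triangle inequality holds for all three")
```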
Why not use one of these, or some other measure? Partly because some are too coarse; mostly because the Pythagorean way of measuring is so deeply natural (as long as we act as if we lived on a plane): if X = (x1, x2) and Y = (y1, y2), then

d (X, Y) = √((x1 − y1)² + (x2 − y2)²) ,

which generalizes nicely to 3- or n-dimensional Euclidean space:

d (X, Y) = √((x1 − y1)² + (x2 − y2)² + . . . + (xn − yn)²) .
We become so used to measuring distance by the Theorem that we forget how awesome are the paths of pursuit. The third baseman and shortstop trap their swerving opponent on the shrinking diagonal he takes between them. The predators on the veldt that ran right at their moving targets are extinct: those survived who ran to where their inner Pythagoras calculated the prey was going.
But even if we grant that the Theorem dictates trajectories in physical space, do we recognize its power when we wonder what direction our lives should take? Longing can perplex us, as it did the war poet Henry Reed:
. . . I may not have got
The knack of judging a distance; I will only venture
A guess that perhaps between me and the apparent lovers
. . . is roughly a distance
Of about one year and a half.2
We weigh up the odds; we try to come to terms with the random residues of our best-laid plans, and the errors that swarm like mosquitoes around our every accuracy. Looking for some way to draw order out of data, we naturally cast it in the Pythagorean mold. Given a set of numerical readings, X = {x1, x2, . . . , xn}, statisticians find its mean, M,

M = (x1 + x2 + . . . + xn) / n ,
to lend some stability to these points, then measure with the Theorem how far each datum is from this mean (there’s that n-dimensional form of the Theorem again); average out the measurements; and so find the taming ‘standard deviation’, σ(X):

σ(X) = √( ((x1 − M)² + (x2 − M)² + . . . + (xn − M)²) / n ) .
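A small sketch (sample data invented for illustration) traces exactly these two steps, the mean and then the Pythagorean measure of spread:

```python
import math

def mean(xs):
    return sum(xs) / len(xs)

def std_dev(xs):
    """Pythagorean distance of the data from its mean, averaged:
    sqrt( sum of (x_i - M)^2, divided by n )."""
    M = mean(xs)
    return math.sqrt(sum((x - M) ** 2 for x in xs) / len(xs))

data = [2, 4, 4, 4, 5, 5, 7, 9]        # invented sample readings
print(mean(data), std_dev(data))       # 5.0 2.0
```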
This gives some sort of shape to chaos. Pythagoras has thus returned in his original role as a seer—by reading not entrails but the detritus of the day.
We live under the Theorem’s guidance not only in a landscape of curves at the limit of approximating chords, but at the one and only limit of the likely: the actual world as it hurtles through phase space (defined back in Chapter Six). This is where we make our calculated dashes along the hypotenuse from now to then, and stride along the bell curves, crest to crest.
DISTANCE AND MYSTERY
In generalizing the Pythagorean Theorem, we altered the angle between the sides, and the shapes of the figures on them, as well as the dimension of the whole. What we haven’t yet touched is the hypotenuse. What if we replaced it by a jagged line?
Do you think it would be possible, given n points in the interior of a right triangle, for a path made of straight-line segments from one end of the hypotenuse through them (in some order) to the other end, to have the sum of the squares on its segments less than the square on the hypotenuse?
This doesn’t seem very likely, at first sight: it looks as if the sum of their areas might (at least for some distribution of points) always mount up to be quite large indeed, like sail crowded on in a windjammer, the masts layered with canvas. Yet the statement is in fact true—and its truth is a comment on the limitations of our intuition for two dimensions. Here is the proof:3
Let’s call a path polygonal if it connects a finite number of points in succession by straight-line segments. Our points will start at one end, A, of the hypotenuse, and end at the other, B, with t1, t2 and so on, up to tn, in between.
If we call the path P, let k(P) be the sum of the squares on its segments.
DEF.: A polygonal path P is admissible if it connects A and B, and if k(P) < (AB)2.
We want to prove the existence of admissible paths that have at least three points on them. Divided, this problem will fall.
LEMMA: In ABC, right angled at C, with CD the altitude to AB, if there are admissible paths H in ADC and J in CDB, then there is an admissible path in ABC.
PROOF:
Since H is admissible in ADC, it joins A to C with k(H) < (AC)²; likewise J joins C to B with k(J) < (CB)². Strung together at C, they form a polygonal path from A to B with

k(H) + k(J) < (AC)² + (CB)² = (AB)² ,

by the Pythagorean Theorem. This combined path, though, passes through the vertex C itself. However, if we remove the last segment of H (connected to C), and the first segment of J (from C), by then bridging the gap so formed with one segment from H’s new end to J’s new beginning, we obtain a path with yet less area of the squares on its segments:

k(HJ) ≤ k(H) + k(J) < (AB)² ,
for the angle between these two segments at C can’t be obtuse (since both lie within the right angle at C), so by the law of cosines the square on the new segment is no greater than the sum of the squares on the two removed.
This new path (HJ, fastened together as described) is the desired admissible path for ABC.
We can now prove our theorem, by induction on the number of points on the path.
THEOREM: In any right triangle ABC, and for any finite number n > 0 of points in its interior, there is an admissible path.
PROOF:
1. If the points on our path are A, B, and Z, where Z is an interior point of ABC, AZB is obtuse, so by the law of cosines,

(AZ)² + (ZB)² < (AB)² ,

and the path from A through Z to B is admissible.
2. If there are more than two interior points, drop the altitude CD from C. Then either these points lie in both ACD and DCB, or not.
CASE I: If in both, then by the induction hypothesis there are admissible paths in both, and the result follows from the lemma.
CASE II: If all the interior points lie wholly in one of those two triangles—say ACD—then construct the altitude from D to AC at E, and repeat this process if necessary, until Case I arises—as eventually it must, when a new pair of the ever smaller triangles will have hypotenuses (their maximum lengths) less than the shortest distance among the interior points: for then at least two of those points are guaranteed to lie in different triangles, and Case I applies.
Q. E. D.
What if we now let lightning strike the hypotenuse, in a storm to end all storms, so that it becomes an infinitely long lightning bolt itself? Could it possibly happen, in this apocalypse, that the sum of the areas on its segments nevertheless dwindled toward zero? This sounds even more preposterous than what just turned out not to be. How could length increase without end, like a fractal coastline, yet the area built on it amount to nothing? But mathematics is made of the unexpected, and there is a glimmer of hope in the fact that area is a square function, and if a square’s side s is less than 1, then s² < s.
So let’s make our zigzag path from segments of length 1, 1/2, 1/3, 1/4, and so on forever, with the path anchored at vertex A and twisting toward B.
On the one hand, we know that the length of this path is infinite—the harmonic series 1 + 1/2 + 1/3 + 1/4 + . . . diverges (to convince yourself, notice that the third and fourth terms add up to more than ½, as do the next four terms, and the next eight after that, then the next sixteen—and so on: an endless number of halves to go along with the 1 + ½ at the beginning).
On the other hand, these would be the areas of the squares built on those segments:

1, 1/4, 1/9, 1/16, 1/25, . . .
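A sketch comparing the two sums over the first million segments: the lengths grow without bound, while the total of the squares settles near π²/6.

```python
import math

N = 1_000_000
lengths = sum(1 / n for n in range(1, N + 1))      # path length so far
areas = sum(1 / n ** 2 for n in range(1, N + 1))   # squares on the segments

print(round(lengths, 2))                  # ~14.39, and still climbing
print(round(areas, 5), math.pi ** 2 / 6)  # settles near pi^2/6 ≈ 1.64493
```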