Borderlands of Science

by Charles Sheffield


  Take this one step further. The size of computers constantly gets smaller and smaller. Already, we could tuck a powerful machine away into a little spare head room, maybe in one of the sulci of the brain or in a sinus cavity. Even if the machine lacked the full thinking capacity of a human brain, it could certainly perform the routine mathematical functions which for most of us are anything but routine: lengthy arithmetic, detailed logical analyses, symbol manipulation, and compound probabilities. (Maybe we should add elementary arithmetic to the list, since the ability to do mental calculations, such as computing change from twenty dollars, seems to be waning fast.)

  Nature took a billion years to provide us with brains able to perceive, manipulate, and move our bodies with effortless ease. Maybe we can give ourselves powers of effortless thought, to go with those other desirable features, in a century or two.

  CHAPTER 11

  Chaos: The Unlicked Bear-Whelp

  My original plan was to leave this chapter out of the book, as too technical. However, it was suggested to me that the science of chaos theory can be a fertile source of stories; more than that, it was pointed out that a story ("Feigenbaum Number," Kress, 1995) had been written drawing explicitly on an earlier article of mine on the subject. Faced with such direct evidence, I changed my mind. I did, however, remove most of the equations. I hope the result is still intelligible.

  11.1 Chaos: new, or old? The Greek word "chaos" referred to the formless or disordered state before the beginning of the universe. The word has also been a part of the English language for a long time. Thus in Shakespeare's Henry VI, Part Three, the Duke of Gloucester (who in the next play of the series will become King Richard III, and romp about the stage in unabashed villainy) is complaining about his physical deformities. He is, he says, "like to a Chaos, or an unlick'd bear-whelp, that carries no impression like the dam." Chaos: something essentially random, an object or being without a defined shape.

  Those lines were written about 1590. The idea of chaos is old, but chaos theory is a new term. Twenty years ago, no popular article had ever been written containing that expression. Ten years ago, the subject was all the rage. It was hard to open a science magazine without finding an article on chaos theory, complete with stunning color illustrations. Today, the fervor has faded, but the state of the subject is still unclear (perhaps appropriate, for something called chaos theory). Most of the articles seeking to explain what it is about are even less clear.

  Part of the problem is newness. When someone writes about, say, quantum theory, the subject has to be presented as difficult, and subtle, and mysterious, because it is difficult, and subtle, and mysterious. To describe it any other way would be simply misleading. In the past sixty years, however, the mysteries have had time to become old friends of the professionals in the field. There are certainly enigmas, logical whirlpools into which you can fall and never get out, but at least the locations of those trouble spots are known. Writing about any well-established subject such as quantum theory is therefore in some sense easy.

  In the case of chaos theory, by contrast, everything is new and fragmented; we face the other extreme. We are adrift on an ocean of uncertainties, guided by partial and inadequate maps, and it is too soon to know where the central mysteries of the subject reside.

  Or, worse yet, to know if those mysteries are worth taking the time to explore. Is chaos a real "theory," something which will change the scientific world in a basic way, as that world was changed by Newtonian mechanics, quantum theory, and relativity? Or is it something essentially trivial, a subject which at the moment is benefiting from a catchy name and so enjoying a certain glamour, as in the past there have been fads for orgone theory, mesmerism, dianetics, and pyramidology?

  I will defer consideration of that question, until we have had a look at the bases of chaos theory, where it came from, and where it seems to lead us. Then we can come back to examine its long-term prospects.

  11.2 How to become famous. One excellent way to make a great scientific discovery is to take a fact that everyone knows must be the case—because "common sense demands it"—and ask what would happen if it were not true.

  For example, it is obvious that the Earth is fixed. It has to be standing still, because it feels as though it is standing still. The Sun moves around it. Copernicus, by suggesting that the Earth revolves around the Sun, made the fundamental break with medieval thinking and set in train the whole of modern astronomy.

  Similarly, it was clear to the ancients that unless you keep on pushing a moving object, it will slow down and stop. By taking the contrary view, that it takes a force (such as friction with the ground, or air resistance) to stop something, and otherwise it would just keep going, Galileo and Newton created modern mechanics.

  Another case: To most people living before 1850, there was no question that animal and plant species are all so well-defined and different from each other that they must have been created, type by type, at some distinct time in the past. Charles Darwin and Alfred Russel Wallace, in suggesting in the 1850s a mechanism by which one form could change over time to another in response to natural environmental pressures, allowed a very different world view to develop. The theory of evolution and natural selection permitted species to be regarded as fluid entities, constantly changing, and all ultimately derived from the simplest of primeval life forms.

  And, to take one more example, it was clear to everyone before 1900 that if you kept on accelerating an object, by applying force to it, it would move faster and faster until it was finally traveling faster than light. By taking the speed of light as an upper limit to possible speeds, and requiring that this speed be the same for all observers, Einstein was led to formulate the theory of relativity.

  It may make you famous, but it is a risky business, this offering of scientific theories that ask people to abandon their long-cherished beliefs about what "just must be so." As Thomas Huxley remarked, it is the customary fate of new truths to begin as heresies.

  Huxley was speaking metaphorically, but a few hundred years ago he could have been speaking literally. Copernicus did not allow his work on the movement of the Earth around the Sun to be published in full until 1543, when he was on his deathbed, nearly 30 years after he had first developed the ideas. He probably did the right thing. Fifty-seven years later Giordano Bruno was gagged and burned at the stake for proposing ideas in conflict with theology, namely, that the universe is infinite and there are many populated worlds. Thirty-three years after that, Galileo was made to appear before the Inquisition and threatened with torture because of his "heretical" ideas. His work remained on the Catholic Church's Index of prohibited books for over two hundred years.

  By the nineteenth century critics could no longer have a scientist burned at the stake, even though they may have wanted to. Darwin was merely denounced as a tool of Satan. However, anyone who thinks this issue is over and done with can go today and have a good argument about evolution and natural selection with the numerous idiots who proclaim themselves to be scientific creationists.

  Albert Einstein fared better, mainly because most people had no idea what he was talking about. However, from 1905 to his death in 1955 he became the target of every crank and scientific nitwit outside (and often inside) the lunatic asylums.

  Today we will be discussing an idea, contrary to common sense, that has been developing in the past twenty years. So far its proposers have escaped extreme censure, though in the early days their careers may have suffered because no one believed them—or understood what they were talking about.

  11.3 Building models. The idea at the heart of chaos theory can be simply stated, but we will have to wind our way into it.

  Five hundred years ago, mathematics was considered essential for bookkeeping, surveying, and trading, but it was not considered to have much to do with the physical processes of Nature. Why should it? What do abstract symbols on a piece of paper have to do with the movement of the planets, the flow of rivers, the blowing of soap bubbles, the flight of kites, or the design of buildings?

  Little by little, that view changed. Scientists found that physical processes could be described by equations, and solving those equations allowed predictions to be made about the real world. More to the point, they were correct predictions. By the nineteenth century, the fact that manipulation of the purely abstract entities of mathematics could somehow tell us how the real world would behave was no longer a surprise. Sir James Jeans could happily state, in 1930, "all the pictures which science now draws of nature, and which alone seem capable of according with observational fact, are mathematical pictures," and " . . . the universe appears to have been designed by a pure mathematician."

  The mystery had vanished, or been subsumed into divinity. But it should not have. It is a mystery still.

  I would like to illustrate this point with the simplest problem of Newtonian mechanics. Suppose that we have an object moving along a line with a constant acceleration. It is easy to set up a situation in the real world in which an object so moves, at least approximately.

  It is also easy to describe this situation mathematically, and to determine how the final position depends on the initial speed and position. When we do this, we find that a tiny change in initial speed or position causes a small change in final speed and position. We say that the solution is a continuous function of the input variables.
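As a minimal sketch of this continuity (in Python; the particular numbers are arbitrary illustrations, not taken from the text), compare two runs of the constant-acceleration formula x(t) = x0 + v0*t + a*t*t/2 that differ only by a tiny nudge in the starting position:

```python
def final_position(x0, v0, a, t):
    """Position after time t under constant acceleration a:
    x(t) = x0 + v0*t + (1/2)*a*t**2."""
    return x0 + v0 * t + 0.5 * a * t * t

# Nudge the initial position by a tiny amount and compare outcomes.
base   = final_position(0.0,    1.0, 2.0, 3.0)   # 0 + 3 + 9 = 12.0
nudged = final_position(0.0001, 1.0, 2.0, 3.0)
print(abs(nudged - base))   # roughly 0.0001: tiny change in, tiny change out
```

However small the nudge, the change in the final position shrinks in proportion; that is what continuous dependence on the inputs means.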

  This is an especially simple example, but scientists are at ease with far more complex cases.

  Do you want to know how a fluid will move? Write down a rather complex equation (to be specific, the three-dimensional time-dependent Navier-Stokes equation for compressible, viscous flow). Solve the equation. That's not a simple proposition, and you may have to resort to a computer. But when you have the results, you expect them to apply to real fluids. If they do not, it is because the equation you began with was not quite right—maybe you need to worry about electromagnetic forces, or plasma effects. Or maybe the integration method you used was numerically unstable, or the finite difference interval too crude. The idea that the mathematics cannot describe the physical world never even occurs to most scientists. They have in the back of their minds an idea first made explicit by Laplace: the whole universe is calculable, by defined mathematical laws. Laplace said that if you told him (or rather, if you told a demon, who was capable of taking in all the information) the position and speed of every particle in the Universe, at one moment, he would be able to define the Universe's entire future, and also its whole past.

  The twentieth century, and the introduction by Heisenberg of the Uncertainty Principle, weakened that statement, because it showed that it was impossible to know precisely the position and speed of a body. Nonetheless, the principle that mathematics can exactly model reality is usually still unquestioned.

  It should be questioned, because it is absolutely extraordinary that the pencil and paper scrawls that we make in our studies correspond to activities in the real world outside.

  Now, hidden away in the assumption that the world can be described by mathematics there is another one; one so subtle that most people never gave it a thought. This is the assumption that chaos theory makes explicit, and then challenges. We state it as follows:

  Simple equations must have simple solutions.

  There is no reason why this should be so, except that it seems that common sense demands it. And, of course, we have not defined "simple."

  Let us return to our accelerating object, where we have a simple-seeming equation, and an explicit solution. One requirement of a simple solution is that it should not "jump around" when we make a very small change in the system it describes. For example, if we consider two cases of an accelerated object, and the only difference between them is a tiny change in the original position of the object, we would expect a small change in the final position. And this is the case. That is exactly what was meant by the earlier statement, that the solution was a continuous function of the inputs.

  But now consider another simple physical system, a rigid pendulum (this was one of the first cases where the ideas of chaos theory emerged). If we give the pendulum a small push, it swings back and forward. Push it a little harder, and a little harder, and what happens? Well, for a while it makes bigger and bigger swings. But at some point, a very small change to the push causes a totally different type of motion. Instead of swinging back and forward, the pendulum keeps on going, right over the top and down the other side. If we write the expression for the angle as a function of time, in one case the angle is a periodic function (back and forth) and in the other case it is constantly increasing (round and round). And the change from one to the other occurs when we make an infinitesimal change in the initial speed of the pendulum bob. This type of behavior is known as a bifurcation in the behavior of the solution, and it is a worrying thing. A simple equation begins to exhibit a complicated solution. The solution of the problem is no longer a continuous function of the input variables.
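The pendulum bifurcation is easy to reproduce numerically. Here is a rough sketch in Python (the pendulum length, gravity value, step size, and the one-percent margins around the critical push are all arbitrary choices for illustration, not from the text):

```python
import math

def swings_over(v0, g=9.81, L=1.0, dt=1e-4, t_max=20.0):
    """Integrate the rigid pendulum theta'' = -(g/L)*sin(theta),
    starting at the bottom (theta = 0) with initial bob speed v0,
    and report whether it ever passes over the top (|theta| > pi)."""
    theta = 0.0
    omega = v0 / L          # angular speed from linear speed of the bob
    for _ in range(int(t_max / dt)):
        # semi-implicit Euler step: stable enough for this demonstration
        omega -= (g / L) * math.sin(theta) * dt
        theta += omega * dt
        if abs(theta) > math.pi:
            return True
    return False

# The bob just reaches the top when (1/2)*v0**2 = 2*g*L, so v0 = 2*sqrt(g*L).
v_crit = 2.0 * math.sqrt(9.81 * 1.0)
print(swings_over(0.99 * v_crit))   # False: swings back and forth
print(swings_over(1.01 * v_crit))   # True: goes right over the top
```

Two pushes differing by a couple of percent, and the motion is of a qualitatively different kind: periodic on one side of the boundary, ever-increasing angle on the other.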

  At this point, the reasonable reaction might well be, so what? All that we have done is show that certain simple equations don't have really simple solutions. That does not seem like an earth-shaking discovery. For one thing, the boundary between the two types of solution for the pendulum, oscillating and rotating, is quite clear-cut. It is not as though the definition of the location of the boundary itself were a problem.

  Can situations arise where this is a problem? Where the boundary is difficult to define in an intuitive way? The answer is, yes. In the next section we will consider simple systems that give rise to highly complicated boundaries between regions of fundamentally different behavior.

  11.4 Iterated functions. Some people have a built-in mistrust of anything that involves the calculus. When you use it in any sort of argument, they say, logic and clarity have already departed. The solutions for examples I have given so far implied that we write down and solve a differential equation, so calculus was needed to define the behavior of the solutions. However, we don't need calculus to demonstrate fundamentally chaotic behavior; and many of the first explorations of what we now think of as chaotic functions were done without calculus. They employed what is called iterated function theory. Despite an imposing name, the fundamentals of iterated function theory are so simple that they can be done with an absolute minimum knowledge of mathematics. They do, however, benefit from the assistance of computers, since they call for large amounts of tedious computation.

  Consider the following very simple operation. Take two numbers, x and r. Form the value y=rx(1-x).

  Now plug the value of y back in as a new value for x. Repeat this process, over and over.

  For example, suppose that we take r=2, and start with x=0.1. Then we find y=0.18.

  Plug that value in as a new value for x, still using r=2, and we find a new value, y=0.2952.

  Keep going, to find a sequence of y's, 0.18, 0.2952, 0.4161, 0.4859, 0.4996, 0.5000, 0.5000 . . .

  In the language of mathematics, the sequence of y's has converged to the value 0.5. Moreover, for any starting value of x, between 0 and 1, we will always converge to the same value, 0.5, for r=2.

  Here is the sequence when we begin with x=0.6:

  0.4800, 0.4992, 0.5000, 0.5000 . . .

  Because the final value of y does not depend on the starting value, it is termed an attractor for this system, since it "draws in" any sequence to itself.

  The value of the attractor depends on r. If we start with some other value of r, say r=2.5, we still produce a convergent sequence. For example, if for r=2.5 we begin with x=0.1, we find successive values: 0.225, 0.4359, 0.6147, 0.5921, 0.6038, 0.5981, . . . 0.6. Starting with a different x still gives the same final value, 0.6.

  For anyone who is familiar with a programming language such as C or even BASIC (Have you noticed how computers are used less and less to compute?), I recommend playing this game for yourself. The whole program is only a dozen lines long. Suggestion: Run the program in double precision, so you don't get trouble with round-off errors. Warning: Larking around with this sort of thing will consume hours and hours of your time.
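For readers who would rather see the game than code it from scratch, here is one possible version of that dozen-line program, sketched in Python rather than C or BASIC; the starting values r=2 and x=0.1 are the ones used in the text:

```python
def iterate(r, x, n):
    """Apply y = r*x*(1-x) repeatedly, returning the successive values."""
    values = []
    for _ in range(n):
        x = r * x * (1.0 - x)
        values.append(x)
    return values

# r = 2, starting from x = 0.1: reproduces the sequence in the text.
for y in iterate(2.0, 0.1, 8):
    print(f"{y:.4f}")   # 0.1800, 0.2952, 0.4161, 0.4859, 0.4996, 0.5000, ...
```

Python floats are already double precision, so the round-off caution takes care of itself; changing r and x and watching what happens is the whole game.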

  The situation does not change significantly with r=3. We find the sequence of values: 0.2700, 0.5913, 0.7250, 0.5981, 0.7211 . . . 0.6667. This time it takes thousands of iterations to get to a final converged value, but it makes it there in the end. Even after only a dozen or two iterations we can begin to see it "settling in" to its final value.

  There have been no surprises so far. What happens if we increase r a bit more, to 3.1? We might expect that we will converge, but even more slowly, to a single final value.

  We would be wrong. Something very odd happens. The sequence of numbers that we generate has a regular structure, but now the values alternate between two different numbers, 0.7645, and 0.5580. Both these are attractors for the sequence. It is as though the sequence cannot make up its mind. When r is increased past the value 3, the sequence "splits" to two permitted values, which we will call "states," and these occur alternately.

  Let us increase the value of r again, to 3.4. We find the same behavior, a sequence that alternates between two values.

  But by r=3.5, things have changed again. The sequence has four states, four values that repeat one after the other. For r=3.5, we find the final sequence values: 0.3828, 0.5009, 0.8269, and 0.8750. Again, it does not matter what value of x we started with; we will always converge on those same four attractors.
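All of this is easy to check for yourself. The following Python sketch (the burn-in length, sample size, and rounding precision are arbitrary choices) discards the transient part of the sequence and then counts the distinct values that remain:

```python
def attractor(r, x=0.1, burn_in=10000, sample=8):
    """Iterate y = r*x*(1-x) long enough to pass the transient,
    then collect the distinct (rounded) values the sequence visits."""
    for _ in range(burn_in):
        x = r * x * (1.0 - x)
    seen = set()
    for _ in range(sample):
        x = r * x * (1.0 - x)
        seen.add(round(x, 4))
    return sorted(seen)

print(attractor(2.0))   # a single attractor: [0.5]
print(attractor(3.1))   # two states, visited alternately
print(attractor(3.5))   # four states, repeating in order
```

One value for r=2, two for r=3.1, four for r=3.5: the period-doubling described above, caught in a dozen lines.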

  Let us pause for a moment and put on our mathematical hats. If a mathematician is asked the question, "Does the iteration y=rx(1-x) converge to a final value?" he will proceed as follows:

  Suppose that there is a final converged value, V, towards which the iteration converges. Then when we reach that value, no matter how many iterations it takes, at the final step x will be equal to V, and so will y. Thus we must have V=rV(1-V).

  Solving for V, we find V=0, which is a legitimate but uninteresting solution, or V=(r-1)/r. This single value will apply, no matter how big r may be. For example, if r=2.5, then V=1.5/2.5=0.6, which is what we found. Similarly, for r=3.5, we calculate V=2.5/3.5=0.7142857.
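We can confirm this algebra numerically, and also glimpse why the sequence for r=3.5 nonetheless cycles through four values rather than settling on V: the fixed point exists, but the iteration moves away from any value near it. A sketch in Python (the size of the nudge is an arbitrary choice):

```python
def step(r, x):
    """One application of the iteration y = r*x*(1-x)."""
    return r * x * (1.0 - x)

r = 3.5
V = (r - 1.0) / r                      # the algebraic solution, ~0.7142857
print(abs(step(r, V) - V) < 1e-12)     # True: V really does map to itself

# Start a hair away from V, though, and the iteration runs off toward
# the four-value cycle instead of settling back onto V.
x = V + 1e-6
for _ in range(50):
    x = step(r, x)
print(abs(x - V) > 0.01)               # True: nearby values are pushed away
```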

 
