
The Compleat McAndrew

by Charles Sheffield


  This “shadow matter” produced at the time of gravitational decoupling lacks any such interaction with the matter of the familiar Universe. We can determine its existence only by the gravitational effects it produces, which, of course, is exactly what we need to “close the Universe,” and also exactly what we needed for the fifth chronicle.

  One can thus argue that the fifth chronicle is all straight science; or, if you are more skeptical, that it and the theories on which it is based are both science fiction. I think that I prefer not to give an opinion.

  Invariance and science.

  In mathematics and physics, an invariant is something that does not change when certain changes of condition are made. For example, the “connectedness” or “connectivity” of an object remains the same, no matter how we deform its surface shape, provided only that no cutting or merging of surface parts is permitted. A grapefruit and a banana have the same connectedness—one of them can, with a little effort, be squashed to look like the other (at least in principle, though it does sound messy). A coffee cup with one handle and a donut have the same connectedness; but both have a different connectedness from that of a two-handled mug, or from a mug with no handle. You and I have the same connectedness—unless you happen to have had one or both of your ears pierced, or wear a ring through your nose.

  The “knottedness” of a piece of rope is similarly unchanging, provided that we keep hold of the ends and don’t break the rope. There is an elaborate vocabulary of knots. A “knot of degree zero” is one that is equivalent to no knot at all, so that pulling the ends of the rope in such a case will give a straight, unknotted piece of rope—a knot trick known to every magician. But when Alexander the Great “solved” the problem of the Gordian Knot by cutting it in two with his sword, he was cheating.

  Invariants may sound useless, or at best trivial. Why bother with them? Simply for this reason: they often allow us to make general statements, true in a wide variety of circumstances, where otherwise we would have to deal with lots of specific and different cases.

  For example, the statement that a partial differential equation is of elliptic, parabolic, or hyperbolic type is based on a particular invariant, and it tells us a great deal about the possible solutions of such equations before we ever begin to try to solve them. And the statement that a real number is rational or irrational is invariant, independent of the number base that we are using, and it too says something profound about the nature of that number.
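
  To make that first invariant concrete: for a second-order equation of the form A u_xx + B u_xy + C u_yy + (lower-order terms) = 0, the type is fixed by the sign of B^2 - 4AC, which no smooth change of coordinates can alter. A few lines of code are enough to show the three cases; the snippet below is only an illustration of the standard rule, not anything taken from the chronicles.

    # Classifying A*u_xx + B*u_xy + C*u_yy + (lower-order terms) = 0 by the
    # invariant sign of its discriminant (an illustration of the standard rule).
    def pde_type(A, B, C):
        disc = B * B - 4.0 * A * C
        if disc > 0:
            return "hyperbolic"   # e.g. the wave equation: u_tt - u_xx = 0
        if disc == 0:
            return "parabolic"    # e.g. the heat equation: u_t - u_xx = 0
        return "elliptic"         # e.g. Laplace's equation: u_xx + u_yy = 0

    print(pde_type(-1.0, 0.0, 1.0))   # wave equation -> hyperbolic
    print(pde_type(1.0, 0.0, 0.0))    # heat equation -> parabolic
    print(pde_type(1.0, 0.0, 1.0))    # Laplace's equation -> elliptic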

  What about the invariants of physics, which interested McAndrew? Some invariants are so obvious, we may feel they hardly justify being mentioned. For example, we certainly expect the area or volume of a solid body to be the same, no matter what coordinate system we may use to define it.

  Similarly, we expect physical laws to be “invariant under translation” (so they don’t depend on the actual position of the measuring instrument) and “invariant under rotation” (it should not matter which direction our experimental system is pointing) and “invariant under time translation” (we ought to get the same results tomorrow as we did yesterday). Most scientists took such invariants for granted for hundreds of years, although each of these is actually making a profound statement about the physical nature of the Universe.

  So, too, is the notion that physical laws should be “invariant under constant motion.” But assuming this, and rigorously applying it, led Einstein straight to the theory of special relativity. The idea of invariance under accelerated motion took him in turn to the theory of general relativity.

  Both these theories, and the invariants that go with them, are linked inevitably with the name of one man, Albert Einstein. Another great invariant, linear momentum, is coupled in my mind with the names of two men, Galileo Galilei and Isaac Newton. Although the first explicit statement of this invariant is given in Newton’s First Law of Motion (“Every body will continue in its state of rest or of uniform motion in a straight line except in so far as it is compelled to change that state by impressed force.”), Galileo, fifty years earlier, was certainly familiar with the general principle.

  Some of the other “great invariants” needed the efforts of many people before they were firmly defined and their significance was appreciated. The idea that mass was an invariant came about through the efforts of chemists, beginning with Dalton and Lavoisier, who weighed combustion products and found that the total was the same before and after. The equivalence of different forms of energy (heat, motion, potential energy, and electromagnetic energy), and the invariance of total energy of all forms, developed even later. It was a combined effort by Count Rumford, Joule, Maxwell, Lord Kelvin, Helmholtz and others. The merger of the two invariants became possible when Einstein showed the equivalence of mass and energy, after which it was only the combined mass-energy total that was conserved.
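
  A quick calculation shows how much energy hides in that equivalence; the numbers below are only an illustration. By E = m*c^2, a single gram of matter corresponds to roughly ninety trillion joules, the energy released by about twenty kilotons of TNT.

    # A back-of-the-envelope check of E = m*c^2 (illustrative numbers).
    c = 2.998e8                # speed of light, meters per second
    m = 1.0e-3                 # one gram, in kilograms
    E = m * c * c              # energy equivalent, in joules
    kiloton_tnt = 4.184e12     # joules released by one kiloton of TNT
    print(E, E / kiloton_tnt)  # about 9e13 J, roughly 21 kilotons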

  Finally, although the idea that angular momentum must be conserved seems to arise naturally in classical mechanics from the conservation of linear momentum, in quantum physics it is much more of an independent invariant because particles such as protons, neutrons, electrons, and neutrinos have an intrinsic, internal spin, whose existence is not so much seen as deduced in order to make angular momentum a conserved quantity.

  This sounds rather like a circular argument, but it isn’t, because intrinsic spin couples with orbital angular momentum, and quantum theory cannot make predictions that match experiments without both of them. And as McAndrew remarks, Wolfgang Pauli in 1931 introduced the idea of a new particle to physics, the neutrino, just in order to preserve the laws of conservation of energy and momentum.

  There are other important invariants in the quantum world. However, some things which “common sense” would insist to be invariants may be no such thing. For example, it was widely believed that parity (which is symmetry upon reflection in a mirror) must be a conserved quantity, because the Universe should have no preference for left-handed sub-nuclear processes over right-handed ones. But in 1956, Tsung Dao Lee and Chen Ning Yang suggested this might not be the case, and their radical idea was confirmed experimentally by C.S. Wu’s team in 1957. Today, only a combination of parity, charge, and time-reversal is regarded as a fully conserved quantity.

  Given the overall importance of invariants and conservation principles to science, there is no doubt that McAndrew would have pursued any suggestion of a new basic invariant. But if invariants are real, where is the fiction in the sixth chronicle? I’m afraid there isn’t any, because the nature of the new invariant is never defined.

  Wait a moment, you may say. What about the Geotron?

  That is not fiction science, either, at least so far as principles are concerned. Such an instrument was seriously proposed a few years ago by Robert Wilson, the former director of the Fermilab accelerator. His design called for a donut-shaped device thirty-two miles across, in which protons would be accelerated to very high energies and then strike a metal target, to produce a beam of neutrinos. The Geotron designers wanted to use the machine to probe the interior structure of the Earth, and in particular to prospect for oil, gas, and valuable deep-seated metal deposits.

  So maybe there is no fiction at all in the sixth chronicle—just a little pessimism about how long it will take before someone builds a Geotron.

  Rogue planets.

  The Halo beyond the known Solar System offers so much scope for interesting celestial objects of every description that I assume we will find a few more there. In the second chronicle, I introduced collapsed objects, high-density bodies that are neither stars nor conventional planets. The dividing line between stars and planets is usually decided by whether or not the center of the object supports a nuclear fusion process and contains a high-density core of “degenerate” matter. Present theories place that dividing line at about a hundredth of the Sun’s mass—smaller than that, you have a planet; bigger than that, you must have a star. I assume that there are in-between bodies out in the Halo, made largely of degenerate matter but only a little more massive than Jupiter.

  I also assume that there is a “kernel ring” of Kerr-Newman black holes, about 300 to 400 AU from the Sun, and that this same region contains many of the collapsed objects. Such bodies would be completely undetectable using any techniques of present-day astronomy. This is science fiction, not science.

  Are rogue planets also science fiction? This brings us to Vandell’s Fifth Problem, and the seventh chronicle.

  David Hilbert did indeed pose a set of mathematical problems in 1900, and they served as much more than a summary of things that were “hard to solve.” They were concise and exact statements of questions, which, if answered, would have profound implications for many other problems in mathematics. The Hilbert problems are both deep and difficult, and have attracted the attention of almost every mathematician of the twentieth century. Several problems of the set, for example, ask whether certain numbers are “transcendental”—which means they can never occur as solutions to the usual equations of algebra (more precisely, they cannot be roots of finite algebraic equations with algebraic coefficients). These questions were not disposed of until 1930, when Kusmin and Siegel proved a more general result than the one that Hilbert had posed. In 1934 Gelfond provided another generalization.

  At the moment there is no such “super-problem” set defined for astronomy and cosmology. If there were, the one I invented as Vandell’s Fifth Problem would certainly be a worthy candidate, and might take generations to solve. (Hilbert’s Fifth Problem, concerning a conjecture in topological group theory, was finally solved in 1952 by Gleason, Montgomery, and Zippin.) We cannot even imagine a technique, observational instrument or procedure that would have a chance of detecting a rogue planet. The existence, frequency of occurrence, and mode of escape of rogue planets raise many questions concerning the stability of multiple-body systems moving under their mutual gravitational attractions—questions that cannot be answered yet by astronomers and mathematicians.

  In general relativity, the exact solution of the “one-body problem” as given by Schwarzschild has been known for more than 80 years. The relativistic “two-body problem,” of two objects orbiting each other under mutual gravitational influence, has not yet been solved. In nonrelativistic or Newtonian mechanics, the two-body problem was disposed of three hundred years ago by Newton. But the nonrelativistic solution for more than two bodies has not been found to this day, despite three centuries of hard work.

  A good deal of progress has been made for a rather simpler situation that is termed the “restricted three-body problem.” In this, a small mass (such as a planet or small moon) moves under the influence of two much larger ones (stars or large planets). The large bodies define the gravitational field, and the small body moves in this field without contributing significantly to it. The restricted three-body problem applies to the case of a planet moving in the gravitational field of a binary pair of stars, or an asteroid moving in the combined fields of the Sun and Jupiter. It also offers a good approximation for the motion of a small body moving in the combined field of the Earth and Moon. Thus the problem is of practical interest, and the list of workers who have studied it in the past 200 years includes several of history’s most famous mathematicians: Euler, Lagrange, Jacobi, Poincaré, and Birkhoff. (Lagrange in particular provided certain exact solutions that include the L-4 and L-5 points, famous today as proposed sites for large space colonies.)

  The number of papers written on the subject is huge—Victor Szebehely, in a 1967 book on the topic, listed over 500 references, and restricted himself to only the major source works.

  Thanks to the efforts of all these workers, a good deal is known about the possible solutions of the restricted three-body problem. One established fact is that the small object cannot be thrown away to infinity by the gravitational interactions of its two large companions. Like much of modern astronomy, this result is not established by looking at the orbits themselves. It is proved by general arguments based on a particular constant of the motion, termed the Jacobian integral.
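
  For readers who like to see such things written out, the standard textbook form of that constant in the planar, circular restricted three-body problem looks like the sketch below. The code and its example are my own illustration, not anything taken from the chronicles; the units are the usual nondimensional ones.

    # The Jacobi constant for the planar circular restricted three-body problem,
    # in the usual rotating, nondimensional frame: the larger mass sits at
    # (-mu, 0), the smaller at (1 - mu, 0), and mu is the smaller mass fraction.
    from math import sqrt, hypot

    def jacobi_constant(x, y, vx, vy, mu):
        r1 = hypot(x + mu, y)          # distance to the larger primary
        r2 = hypot(x - 1.0 + mu, y)    # distance to the smaller primary
        return (x * x + y * y) + 2.0 * (1.0 - mu) / r1 + 2.0 * mu / r2 - (vx * vx + vy * vy)

    # Example: a body at rest (in the rotating frame) at the L-4 point of the
    # Earth-Moon system (mu is roughly 0.01215); the value comes out near 2.988.
    mu = 0.01215
    print(jacobi_constant(0.5 - mu, sqrt(3.0) / 2.0, 0.0, 0.0, mu))

  Because that number cannot change along an orbit, the surfaces on which it would force the speed to zero wall off the regions the small body can ever visit, and that is the kind of general argument referred to above.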

  Unfortunately, those arguments cannot be applied in the general three-body problem, or in the N-body problem whenever N is bigger than three. It is presently conjectured by astronomers, but not generally proved, that ejection to infinity is possible whenever more than three bodies are involved. In such a situation, the lightest member of the system is most likely to be the one ejected. Thus, rogue planets can probably be produced when a stellar system has more than two stars in it. As it happens, this is rather common. Solitary stars, like the Sun, are in the minority. Once a rogue world is separated from its stellar parents, the chances that it will ever again be captured to form part of a star system are remote. To this point, the seventh chronicle’s discussion of solitary planets fits known theory, although it is an admittedly incomplete theory.

  So how many rogue planets are there? There could conceivably be as many as there are stars, strewn thick across the Galaxy but completely undetectable to our instruments. Half a dozen may lie closer to us than the nearest star. Or they may be an endangered species, vanishingly rare among the varied bodies that comprise the celestial zoo.

  In the seventh chronicle I suggest that they are rather common—and that’s acceptable to me as science fiction. Maybe they are, because certainly planets around other stars seem far more common than we used to think. Up to 1996, there was no evidence at all that even one planet existed around any star other than Sol. Now we know of a dozen or more. Every one is Jupiter’s size or bigger, but that does not imply that most planets in the universe are massive. It merely shows that our detection methods can find only big planets. Possibly there are other, smaller planets in every system where a Jupiter-sized giant has been discovered.

  If we cannot actually see a planet, how can we possibly know that it exists? There are two methods. First, it is not accurate to say that a planet orbits a star. The two bodies orbit around their common center of mass. That means that if the orbital plane lies at right angles to our line of sight to the star, the star’s apparent position in the sky will show a variation over the period of the planetary year. That change will be tiny, but if the planet is large, the movement of the star might be large enough to measure.
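
  The size of that movement is easy to estimate; the figures below are purely illustrative. The star circles the common center of mass at a distance scaled down from the planet’s orbit by the ratio of the two masses, and the angle that motion subtends on the sky follows from the distance to the star.

    # Rough size of the positional wobble (illustrative numbers).  The star
    # circles the common center of mass at a radius of a * m_p / (M_star + m_p).
    def wobble_arcsec(a_AU, m_planet, m_star, dist_pc):
        a_star_AU = a_AU * m_planet / (m_star + m_planet)
        return a_star_AU / dist_pc   # small-angle rule: arcseconds = AU / parsecs

    # Jupiter (about 1/1047 of the Sun's mass, orbit radius 5.2 AU), viewed
    # from a distance of 10 parsecs:
    print(wobble_arcsec(5.2, 1.0 / 1047.0, 1.0, 10.0))   # about 0.0005 arcsec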

  The other (and to this date more successful) method of detection relies on the periodic shift in the wavelengths of light that we receive from a star and planet orbiting around their common center of gravity. When the star is approaching us because the planet is moving away from us, the light will be shifted toward the blue. When the star is moving away from us because the planet is approaching us, the star’s light will be shifted toward the red. The tiny difference between these two cases allows us, from the wavelength changes in the star’s light, to infer the existence of a planet in orbit around it.
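
  Again, a rough estimate shows how small the effect is; the numbers below are only an illustration. Momentum balance gives the star a reflex speed equal to the planet’s orbital speed scaled down by the mass ratio, and the fractional shift in wavelength is that speed divided by the speed of light.

    # Rough size of the wavelength shift (illustrative numbers).  Momentum
    # balance gives the star a reflex speed v_star = v_planet * m_p / M_star.
    C_KM_S = 2.998e5            # speed of light, km/s

    v_planet = 13.1             # Jupiter's orbital speed, km/s
    mass_ratio = 1.0 / 1047.0   # Jupiter's mass over the Sun's mass
    v_star = v_planet * mass_ratio
    print(v_star)               # about 0.0125 km/s -- 12.5 meters per second
    print(v_star / C_KM_S)      # fractional shift in wavelength, about 4e-8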

  Since both methods of detection depend for their success on the planet’s mass being an appreciable fraction of the star’s mass, it is no surprise that we are able to detect only the existence of massive planets, Jupiter-sized or bigger. And so far as rogue worlds are concerned, far from any stellar primary, our methods for the detection of extra-solar planets are no use at all.

  The solar focus.

  We go to general relativity again. According to that theory, the gravitational field of the Sun will bend light beams that pass by it (actually, Newtonian theory also turns out to predict a similar effect, a factor of two less in magnitude). Rays of light coming from a source at infinity and just missing the Sun will be bent the most, and they will converge at a distance from the Sun of 550 astronomical units, which is about 82.5 billion kilometers. To gain a feeling for that number, note that the average distance of the planet Pluto from the Sun is 5.9 billion kilometers; the solar focus, as the convergence point is known, is a fair distance out.
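
  The 550 AU figure is easy to check; the arithmetic below is my own rough illustration. A ray grazing the Sun is deflected through an angle of about 4GM/(c^2 R), which is 1.75 seconds of arc, so it crosses the axis at a distance of roughly R divided by that angle.

    # Checking the 550 AU figure.  A ray grazing the Sun is deflected by about
    # 4*G*M / (c^2 * R) radians, so it crosses the axis at roughly R / deflection.
    G = 6.674e-11     # gravitational constant, m^3 kg^-1 s^-2
    M = 1.989e30      # mass of the Sun, kg
    c = 2.998e8       # speed of light, m/s
    R = 6.957e8       # radius of the Sun, m
    AU = 1.496e11     # one astronomical unit, m

    deflection = 4.0 * G * M / (c * c * R)   # about 8.5e-6 radians (1.75 arcsec)
    focus = R / deflection                   # about 8.2e13 m
    print(focus / AU)                        # comes out close to 550 AU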

  Those numbers apply for a spherical Sun. Since Sol rotates and so has a bulge at its equator, the Sun considered as a lens is slightly astigmatic.

  If the source of light (or radio signal, which is simply another form of electromagnetic wave) is not at infinity, but closer, then the rays will still be converged in their passage by the Sun, but they will be drawn to a point at a different location. As McAndrew correctly points out in the eighth chronicle, a standard result in geometrical optics applies. If a lens converges a parallel beam of light at a distance F from the lens, then light starting at a distance S from the lens will be converged at a distance D beyond it, where 1/F = 1/S + 1/D.
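
  A quick example of that relation at work, with numbers of my own choosing: with F at 550 AU, a source ten times farther away than the focus, at S = 5500 AU, comes to a focus at about 611 AU.

    # The relation 1/F = 1/S + 1/D from the text, solved for D
    # (the specific numbers are only an illustration).
    def focus_distance(F, S):
        return 1.0 / (1.0 / F - 1.0 / S)

    F = 550.0     # focal distance for a source at infinity, in AU
    S = 5500.0    # a source ten times farther out than the focus, in AU
    print(focus_distance(F, S))   # about 611 AU: nearer sources focus farther out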

  This much is straightforward. The more central element of this chronicle involves far more speculation. When (or, if you prefer, if) will it be possible to produce an artificial intelligence, an “AI,” that rivals or surpasses human intelligence?

  How you answer that question depends on which writers you believe. Some, such as Hans Moravec, have suggested that this will happen in fifty years or less. Others, while not accepting any specific date, still feel that it is sure to come to pass. Our brains are, in Marvin Minsky’s words, “computers made of meat.” It may be difficult and take a long time, but eventually we will have an AI able to think as well as or better than we do.

  However, not everyone accepts this. Roger Penrose, whom we have already mentioned in connection with energy extraction from kernels, has argued that an AI will never be achieved by the further development of computers as we know them today, because the human brain is “non-algorithmic.”

  In a difficult book that was a surprising best-seller, The Emperor’s New Mind (1989), he claimed that some functions of the human brain will never be duplicated by computers developed along today’s lines. The brain, he asserts, performs some functions for which no computer program can be written.

  This idea has been received with skepticism and even outrage by many workers in the field of AI and computer science. So what does Penrose say that is so upsetting to so many? He argues that human thought employs physics and procedures drawn from the world of quantum theory. In Penrose’s words, “Might a quantum world be required so that thinking, perceiving creatures, such as ourselves, can be constructed from its substance?”

 
