The Casimir energy is exotic; the exoticity parameter η = 3. Unfortunately, the classical Casimir effect between two plates probably doesn’t work. In the original derivation, Casimir used a mathematical trick, essentially cutting off the calculation at high energies. The effects of vacuum modes above the cutoff point and of the mass of the plates themselves cannot be ignored, meaning that attempting to generate exotic matter from the Casimir effect between real, physical metal plates is not going to work [242, pp. 123–124].
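For a sense of scale, here is a rough back-of-the-envelope sketch (not from the text) using the textbook ideal-plate formula for the Casimir energy density, u = −π²ħc/(720 d⁴); the one-micron plate separation is an arbitrary illustrative choice.

```python
import math

hbar = 1.054571817e-34  # reduced Planck constant, J*s
c = 2.99792458e8        # speed of light, m/s

def casimir_energy_density(d):
    """Vacuum (Casimir) energy density between ideal conducting plates
    separated by distance d (meters): the standard result -pi^2*hbar*c/(720*d^4)."""
    return -math.pi**2 * hbar * c / (720.0 * d**4)

d = 1e-6  # plate separation: 1 micron (illustrative assumption)
u = casimir_energy_density(d)
print(f"Casimir energy density at d = {d:g} m: {u:.3e} J/m^3")
print(f"Equivalent mass density: {u / c**2:.3e} kg/m^3")
# The result is on the order of -1e-4 J/m^3 (about -5e-21 kg/m^3):
# fantastically far from the planet-sized negative masses a wormhole needs.
```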
How much exotic matter do we need to produce a wormhole? The mathematics needed to answer this question goes beyond the scope of this book. Matt Visser made the following estimates for the tension in the wormhole throat and the total “mass” of exotic matter needed.
These are

τ ≈ c⁴/(8πGb²)

and

M ≈ c²b/G,
where b is the throat radius, τ is the tension, and M is the “mass” of the exotic matter. One point that Visser discusses is that M isn’t exactly mass because of the strange nature of exotic matter and space-time curvature. However, it is a measure of the energy content of the exotic matter we need. A one-meter-diameter wormhole will require a quantity of exotic matter equivalent to the mass of all of the planets in the Solar System. One meter probably won’t do it, however: throats that small would probably tear apart anyone going through them because of tidal forces. The tension is far, far greater than any possible material can sustain.
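Taking these estimates at face value, here is a quick numerical sketch; the throat radii plugged in are illustrative choices, not values from the text.

```python
import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
c = 2.99792458e8     # speed of light, m/s

def throat_tension(b):
    """Tension required at a wormhole throat of radius b (meters), in N/m^2."""
    return c**4 / (8.0 * math.pi * G * b**2)

def exotic_mass(b):
    """Rough magnitude of the exotic 'mass' required, in kg (order-of-magnitude estimate)."""
    return c**2 * b / G

for b in (0.5, 1.0, 1000.0):   # throat radii in meters (illustrative values)
    print(f"b = {b:7.1f} m:  tension ~ {throat_tension(b):.2e} N/m^2, "
          f"exotic mass ~ {exotic_mass(b):.2e} kg")
# For b around half a meter the exotic mass comes out near 1e27 kg, roughly the
# combined mass of the planets to within a factor of a few, and the tension is
# more than thirty orders of magnitude beyond the strength of any known material.
```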
One really interesting feature of wormholes used for interstellar travel is that their mass depends on the mass of whatever passes through them. I don’t think that any science fiction writer has ever used this point in any story. The issue is this: the conservation laws of physics are local in nature. That is, we think that mass and energy are conserved. However, they are conserved locally: you can’t simply have 10,000 kg disappear in one place (say, near the Sun) and 10,000 kg reappear in another place (say, near the star Betelgeuse, 600 light-years away) at the same time. Why not? Because “at the same time” is a relative statement: in one reference frame, they will disappear and reappear at the same time. In another, the mass will disappear and there will be an interval before it reappears. In another, the mass will reappear near Betelgeuse before disappearing. Richard Feynman stated the law thus: if you have a certain amount of mass-energy inside a box, the only way the amount inside the box can change is if you move a mass through the walls of the box [81, pp. 63–65].
Let’s consider the two mouths of the wormhole. Put a box around each mouth. A 10,000 kg spacecraft goes through the mouth of the wormhole near the Sun and reappears out of the other mouth near Betelgeuse. Well, 10,000 kg just went through the box near the Sun and didn’t come out again, according to an observer near the Sun. The wormhole mouth near the Sun just gained 10,000 kg, if we believe the conservation of mass-energy. An observer near Betelgeuse just saw 10,000 kg emerge from the wormhole mouth near that star. The mouth near that star must have lost 10,000 kg. This can be rigorously justified using the general theory of relativity [242, p. 111]. In his book, Visser raises the interesting and unanswered question: what happens if one mouth loses so much that its mass becomes negative?
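To make the bookkeeping concrete, here is a toy ledger sketch; it simply encodes the gain/loss argument above, and the 10,000 kg payload is the illustrative figure used in the text.

```python
# Toy ledger for the mass bookkeeping described above. We track only the
# *change* in each mouth's mass (in kg) relative to its starting value.
delta_sun = 0.0          # mouth near the Sun
delta_betelgeuse = 0.0   # mouth near Betelgeuse

def traverse(payload_kg, entry_delta, exit_delta):
    """Send a payload through the wormhole: the entry mouth gains its
    mass-energy, the exit mouth loses the same amount."""
    return entry_delta + payload_kg, exit_delta - payload_kg

# A 10,000 kg spacecraft goes from the Sun-side mouth to the Betelgeuse-side mouth:
delta_sun, delta_betelgeuse = traverse(1.0e4, delta_sun, delta_betelgeuse)
print(delta_sun, delta_betelgeuse)   # 10000.0 -10000.0

# Keep sending one-way traffic and the exit mouth's mass keeps dropping;
# Visser's unanswered question is what happens when it would drop below zero.
```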
The total charge of a system is conserved in the same way mass is. If a positive charge goes through the wormhole, the field lines from the charge still stick out of the mouth it entered. This makes it “look like” a positive charge to an observer at the mouth near the Sun. When the positive charge emerges from the other mouth, the field lines from the charge will be bunched up as they pass through the second mouth, making the mouth near Betelgeuse “look like” a negative charge. John Wheeler once proposed that the reason why the universe is charge neutral is that there really is no such thing as charge on the most basic level: charges are really electric field lines threading the twin mouths of wormholes. The idea probably doesn’t hold up, but it is pretty neat.
Can one use wormholes for time travel? Yes. Kip Thorne showed that if you took one mouth of the wormhole and accelerated it away from the other mouth and then back, the mouth going on the journey would age less than the stationary mouth. This is just the twin paradox of the last chapter [169]. Entering through the mouth that was taken on the trip and exiting the other one, one goes backward in time. Oddly enough, this is almost exactly the same situation as posed in Time for the Stars; assuming (for lack of any other feasible mechanism) that Tom’s and Pat’s minds were linked by some sort of flexible wormhole, thoughts going from Tom’s mind would go to a decades-younger version of Pat, at least when Tom got close enough to Pat on his return journey to enter Pat’s light cone.
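As a rough illustration of how large the time offset between the mouths can become, here is a sketch using the standard time-dilation factor; the speed and trip duration are made-up numbers, and the acceleration phases are ignored.

```python
import math

def time_machine_offset(trip_years, beta):
    """Years by which the traveling mouth lags the stay-at-home mouth after a
    round trip at constant speed beta*c lasting trip_years in the stay-at-home
    frame (acceleration phases ignored)."""
    return trip_years * (1.0 - math.sqrt(1.0 - beta**2))

# Illustrative numbers: tow one mouth away and back at 0.99c for 20 years
# as measured by the mouth that stays home.
offset = time_machine_offset(20.0, 0.99)
print(f"Traveling mouth is younger by about {offset:.1f} years")
# Enter the mouth that made the trip and exit the stay-at-home mouth, and you
# step roughly that many years into the past.
```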
There’s a snag, however: as mentioned above, Hawking’s chronology protection hypothesis states that the universe will not allow time machines to exist. It seems that vacuum fluctuations amplified by the wormhole (probably) destroy it as soon as it becomes a time machine, meaning that once Tom gets close enough to Pat on his return journey, both of their brains are fried by high-energy gamma rays and particles created from the vacuum.
13.7 THE GRANDFATHER PARADOX AND OTHER ODDITIES
One of the major problems of time travel is not that of accidentally becoming your own father or mother. There is no problem involved in becoming your own father or mother that a broad-minded or well-adjusted family can’t cope with.
—DOUGLAS ADAMS, THE RESTAURANT AT THE END OF THE UNIVERSE
A lot has been written about the logical paradoxes involved in time travel. Paul Nahin has examined them in detail in his popular book, Time Machines [174]. In chapter 4, “Time Travel Paradoxes and Some of Their Explanations,” he covers some of the same ground I cover in this section. I recommend it for anyone interested in the subject. In particular, he discusses John Wheeler and Richard Feynman’s reformulation of electrodynamics allowing “advanced” wave (i.e., waves from the future) solutions of the Maxwell equations governing the propagation of light, which I don’t cover here [250]. This 1945 work can be seen as a precursor to the work done by Kip Thorne and others on the physical solutions to the “grandfather paradox” discussed below. As the title implies, Paul Nahin’s book discusses the philosophical implications of time travel in addition to its physics. This is something I don’t cover in detail in this chapter, so his book is a very good complement to the discussion here. He is also a long-time science fiction fan and writer, so the book is written in an engaging style and has copious references to the science fiction literature.
I’m going to write about two of the logical paradoxes in this section. Together, these two probably cover about 99% of what one might encounter either in science fiction or in scientific works. They are:
1. Paradoxes involving creation of matter, energy, or information out of nothing, and
2. Paradoxes involving causality (usually called “grandfather paradoxes”).
Paradoxes involving matter, energy, or information creation have to do with the issue that if someone gets into a time machine now and travels back to, say, the Cretaceous period, then we effectively see a large amount of mass-energy disappear now. This is forbidden by the most fundamental of the conservation laws of physics, the conservation of mass-energy. In addition, 150 million years ago, a dinosaur observer saw the creation of mass ex nihilo. From then until now, there was extra matter around that wasn’t present at the Big Bang. It looks like we got something from nothing.
However, this is easily handled. In the last section I mentioned that local conservation laws imply that a 10,000 kg spacecraft entering one mouth of a wormhole will increase that mouth’s measured mass by 10,000 kg. The mouth it exits from will lose the same amount of mass. This is justified by calculating the “back reaction” of the gravitational field as the spaceship passes through, using the general theory of relativity. The only plausible way anyone knows to create a time machine is by using the mouths of a wormhole, so the law of conservation of mass-energy seems to be safe. The mouth through which the time traveler enters gains, and the mouth through which she exits loses, exactly enough to satisfy the law of conservation of mass-energy. Other conservation laws, such as the conservation of charge, are satisfied as well. It shouldn’t be surprising that the same rules apply, since time travel into the past is a form of faster-than-light travel: in some reference frame, entering the wormhole will happen at the same time as exiting it but separated by a very large distance, nearly 150 million light-years. We can postulate that if time machines or FTL travel exist, both must satisfy the local conservation laws of physics.
Information creation is more tricky. This is a topic many science fiction writers have used in their stories. Let’s say I go to the Victoria and Albert Museum in London. In a dim, rarely visited section, I find a paper in a hidden corner that provides detailed instructions for building a time machine. I build the machine and visit H. G. Wells in 1892, telling him how to build the machine. He writes down the instructions, which he hides in a dim, rarely visited section of the V&A. Who figured out how to build the time machine? Where did that information come from?
In examining this paradox we have to look at information in the same way a physicist does to make sense of it. By this I mean let’s remove the human factor. Let’s imagine we have a wormhole that we have made into a time machine. Going into mouth A causes you to emerge from mouth B a fixed time interval before entering A. It doesn’t have to be long for our purposes. For the sake of definiteness, let’s make it 1 ms (= 10⁻³ s). Put a computer between the two mouths and run a cable from some output of the computer through mouth A, out mouth B, and back into an input of the computer. We now have a computer that sends information to itself in its past.
We write a simple program for the computer: take the input, calculate some function f(x) of it, and send the result to the output. But the output is fed back, through the wormhole, to the input in the past. Because of this, the input to the program is the output from the program, or

x = f(x).

In mathematical terms, the program must output a fixed point of the function f(x).
Has information been created here? This is a tricky issue, especially if the function’s fixed points are hard to determine. For example, if we use the function

f(x) = a/x,

the input/output x will be equal to √a, so we have computed a square root using much less computational power than one would normally need. This is a trivial example of the computational power available as a result of this “fixed-point” behavior of the computer. Todd Brun has shown that a computer plus time machine can be used to factor large numbers in effectively zero time [42]. This is interesting because the RSA algorithm used to encrypt information is secure only so long as factorization remains a “difficult” problem. (Of course, building a time machine represents a difficult problem by itself.) I think that by using this fixed-point principle, one can make Brun’s factorization algorithm much simpler: choose a function whose only fixed points are the factors of N, and make a single call to it to find out whether N has any factors.
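Here is a toy simulation of that self-consistency requirement (a sketch, not Brun’s actual construction): since the only consistent history is one in which the signal emerging from mouth B is a fixed point of the program, we can mimic the wormhole by searching candidate inputs for a fixed point directly.

```python
def time_loop_output(f, candidates):
    """Mimic the wormhole-fed computer: the only self-consistent outputs are
    fixed points of f, so search the candidate inputs for one with f(x) == x."""
    return [x for x in candidates if f(x) == x]

# Square root of a = 9 via the fixed point of f(x) = a/x (checked over integers here).
a = 9
print(time_loop_output(lambda x: a / x, range(1, a + 1)))   # -> [3]

# Factor-finding: a function whose only fixed points are nontrivial divisors of N.
N = 391  # = 17 * 23 (illustrative)
def g(x):
    return x if N % x == 0 and 1 < x < N else x + 1
print(time_loop_output(g, range(2, N)))                     # -> [17, 23]
```

Of course, the brute-force search in this sketch is exactly the hard work the time machine is supposed to let us skip; the point is only to show which answers a self-consistent history is allowed to contain.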
One other issue that pops up is that by the appropriate choice of function we can turn this problem into the “grandfather paradox.” The grandfather paradox is the classic causality paradox. Say your grandfather was an evil man, the dictator of a large country. In his life he was responsible for the deaths of thousands—nay, millions—of innocent people. As his successor, you gather the best scientists in your country together to create a time machine. You go back in time (long before your father was conceived) and shoot the old man to death. So, if your grandfather never fathered your father, who fathered you? Who built the time machine to stop his actions? Time machines seemingly allow the violation of cause-and-effect relations and lead to situations in which we have, paradoxically, two mutually impossible things happening at once.4
Again, let’s take the human side out of the equation, so to speak, and set up a simple program on our computer: if it gets an input of “1”, it outputs a “0”, and vice versa. What will it do if we hook up input to output in the way we did above? It’s pretty easy to see that there is no way to satisfy this setup consistently: if a 0 comes out of the wormhole, we send a 1 into it; that 1 goes back in time to become the input to the computer, which then sends a 0 back in time to become the input, which... We can make the “computer” very simple indeed: below is an electric circuit that will mimic what our program does. The circuit shown is a NOT gate. A 0 V signal (representing a 0 input) produces a 1 V output (standing for a 1 output), and vice versa. The wire from the output of the NOT gate goes through wormhole mouth A, out mouth B, and back to its input. What happens?
I think there are two possible resolutions to this paradox:
1. Time machines are impossible. If we can’t build one, we certainly can’t set this apparatus up.
2. Time machines are possible but a paradox is avoided because of the physical limitations of our device.
The first possibility is most likely the correct one but is less interesting, so let’s examine the second one. I’ll offer an analogy: if you flip a coin, there aren’t merely two possible states, heads or tails, but three: the coin can also land on its edge. We think of digital logic circuits as having only two possible states, 0 and 1, but that is an approximation. In reality these circuits are built of transistors, devices that obey the laws of physics and that can take a continuous range of input voltages and output a continuous range of voltages. Computer devices use feedback techniques to force them to go either high or low, but these feedbacks fail in the presence of a time machine. My belief, for what it is worth, is that in this situation our computer would find itself in a state that was neither a 1 nor a 0 but somewhere between the two. Figure 13.2 shows a realistic response curve for a NOT circuit. Ideally, we would want any voltage under 0.5 V to give us a 1 V output, and any voltage over that to give us a 0 V output. Because the output voltage is a continuous function of the input voltage, the real response will be “softer” than our ideal. As the graph shows, there is a point where the line y = x intersects the response curve for the circuit; that is the fixed point. That is what the circuit will output if we hook it up through a time machine.5
Figure 13.2. Idealized and realistic NOT circuit response curves.
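Here is a minimal numerical version of what figure 13.2 shows; the sigmoid transfer curve below is an assumed stand-in for a real inverter’s response, not a measured one. We model the NOT gate as a smooth function of input voltage and find where its curve crosses the line y = x.

```python
import math

def not_gate(v_in, v_mid=0.5, steepness=20.0):
    """Smooth model of a NOT gate: ~1 V out for inputs well below v_mid,
    ~0 V out for inputs well above it (an assumed sigmoid, not a real device curve)."""
    return 1.0 / (1.0 + math.exp(steepness * (v_in - v_mid)))

def fixed_point(f, lo=0.0, hi=1.0, iters=60):
    """Bisection on f(v) - v; this works because f is decreasing, so f(v) - v
    changes sign exactly once between lo and hi."""
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if f(mid) - mid > 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

v = fixed_point(not_gate)
print(f"Self-consistent voltage: {v:.3f} V (output {not_gate(v):.3f} V)")
# The loop settles at 0.5 V: neither a logical 0 nor a logical 1.
```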
From this discussion it shouldn’t be surprising that time travel into the past, in addition to all of its other problems, leads to violations of the law of entropy increase. Computers hooked through time machines can factorize huge numbers simply because we can write programs so that the only consistent output is one of the factors. Travel into the past lets us force exceedingly low-probability events to happen. This device is used in a lot of science fiction and fantasy. One can view Rowling’s novel Harry Potter and the Prisoner of Azkaban in this light. Hermione’s time travel forces a lot of low-probability events to occur, including the escape of Buckbeak and Sirius Black, Harry’s escape from the Dementors, and so on [202]. In Matt Visser’s words, “In the presence of closed timelike curves the consistency conjecture forces certain low probability events to become virtual certainties” [242, p. 256]. Or as Larry Niven wrote, much earlier and more elegantly, “try to save Jesus with a submachine gun, and the gun will positively jam” [178].
Finally, I need to address whether the quantum mechanical collapse of a wave function transmits information faster than light. This idea is used in science fiction stories where authors want a quasi-scientific justification for faster-than-light communication, such as Ursula K. Le Guin’s ansible communicator in The Left Hand of Darkness and other novels, and Dan Simmons’s novels Hyperion and The Fall of Hyperion [145][219][220]. In quantum mechanics the wave function replaces the idea of the trajectory of an atom. The wave function is defined at all points in space and time. Its square gives the probabilities of the various properties of the particle: its position, energy, and spin (quantized angular momentum), to name a few. According to the Copenhagen interpretation of quantum mechanics, which most physicists think is correct, these properties don’t have values until they are measured [102]. This is a little weird, but not so bad. The difficulty comes when we have two separate particles whose properties are linked together by one or more of the conservation laws. For example, we can produce photons (particles of light) whose spins are anticorrelated because of the law of conservation of angular momentum. In physics terms, if one has spin “up,” the other will automatically have spin “down.”
Let’s do an experiment where we generate these two photons and send them in opposite directions to two observers, Al and Bert, located 2 light-seconds apart. (The experimental apparatus is midway between them.) Al measures his photon as spin “up.” The paradox comes in that he knows that Bert will measure his photon to have spin “down” a full 2 seconds before the information from Bert can be transmitted to him. It’s even worse when we remember that in some reference frames Al is measuring his photon’s spin before Bert measures his. If quantum mechanics is to be believed, the spin of Bert’s photon doesn’t have any value before it is measured! So what’s going on?
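A toy sketch of the bookkeeping in this experiment appears below; it merely encodes the perfect anticorrelation and is not a model of the underlying quantum mechanics.

```python
import random

def make_entangled_pair():
    """Produce one anticorrelated pair: when Al's photon is measured 'up',
    Bert's is 'down', and vice versa. (Toy bookkeeping, not real QM.)"""
    al = random.choice(["up", "down"])
    bert = "down" if al == "up" else "up"
    return al, bert

pairs = [make_entangled_pair() for _ in range(10_000)]
anticorrelated = sum(al != bert for al, bert in pairs)
print(f"{anticorrelated} of {len(pairs)} pairs are anticorrelated")  # always all of them

# Al still sees a random 50/50 stream of ups and downs no matter what Bert does,
# which is why the correlation by itself carries no usable message.
```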
This seeming paradox was first discussed by Albert Einstein, Boris Podolsky, and Nathan Rosen in one of the most cited papers in the history of physics, “Can Quantum-Mechanical Description of Physical Reality Be Considered Complete?” [78]. Einstein disliked the conventional probabilistic interpretation of the quantum wave function. The paper was an argument for what are today known as “hidden-variable theories,” ones in which the probabilistic interpretation hides a more complicated underlying machinery. The hidden machinery produces well-defined, nonprobabilistic values for the quantum mechanical properties of a particle. The EPR paper, as it came to be known, remained a philosophical curiosity for several decades, as there didn’t seem to be any good way to test whether the Copenhagen interpretation or hidden-variable theory was correct. This changed in 1964, when John Bell published his paper “On the Einstein-Podolsky-Rosen Paradox” [36]. He showed that certain measurements could distinguish between the conventional view and the hidden-variable theory. All at once the EPR paradox moved to a central position in physics, as there was now a way to test it experimentally. Unfortunately, the means of resolving the paradox are complicated and go beyond the scope of this book. The first experiments were performed in the early 1980s, and the conventional view of quantum mechanics passed with flying colors! Hidden-variable theories of the kind Einstein hoped for simply don’t work.