Psychedelic Apes

by Alex Boese


  Rubin studied individual galaxies, rather than entire clusters, but, like Zwicky, she noticed something odd about their motion. The outer arms of the galaxies she examined were moving too fast. At the speed they were going, the arms should have spun off entirely, unless there was some extra gravitational mass keeping them on. In 1970, she published an article with the astronomer Kent Ford in which they noted that, in order to account for the speed of rotation of the arms of the Andromeda Galaxy, it needed to contain almost ten times as much mass as was visible.
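  The reasoning behind ‘moving too fast’ is simple Newtonian bookkeeping. As a minimal sketch (the speed and radius below are round, illustrative numbers, not Rubin’s data), a star on a circular orbit of radius r at speed v needs an enclosed mass of about v²r/G to stay bound:

```python
# Newtonian estimate behind the 'too fast' claim: a star in a circular
# orbit of radius r at speed v requires an enclosed mass M = v^2 * r / G
# to stay bound. Values below are round, illustrative numbers.

G = 6.674e-11                      # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30                   # solar mass, kg
KPC = 3.086e19                     # one kiloparsec, in metres

v = 230e3                          # orbital speed, m/s (flat rotation curve)
r = 30 * KPC                       # orbital radius: 30 kpc, far out in the disc

M_required = v**2 * r / G          # mass needed inside the orbit
print(f"{M_required / M_SUN:.1e} solar masses")   # a few times 10^11

# If the visible stars and gas add up to far less than this, the outer
# arms should fly off -- unless unseen mass makes up the difference.
```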

  In the following years, Rubin and Ford continued to produce similar observational data for more galaxies, and other researchers confirmed their findings. As a result, by the end of the 1970s, scientific opinion had swung decisively in favour of the existence of dark matter. The consensus was that there was simply no other explanation for all this data.

  But what exactly was dark matter? Zwicky had assumed it was just non-visible regular matter, but, if that was the case, there should have been various ways to detect its presence other than by the effect of its gravity. Astronomers tried all these techniques, but they all came up blank. So, gradually, the belief grew that dark matter had to be made of something more peculiar.

  A whole laundry list of candidates has been considered and rejected, leading to the current most popular hypothesis, which is that dark matter must be some kind of as-yet-unknown type of subatomic particle that doesn’t interact with regular matter except through gravity. Presumably this stuff is all around us, but it concentrates in vast halos around galaxies, providing the scaffolding that allows them to form.

  However, all attempts to directly detect dark-matter particles have, so far, failed, leading some sceptics to question whether the stuff actually exists. They’ve suggested that the effects being attributed to dark matter might actually be caused by gravity somehow operating in a different way on galaxy-sized scales, as opposed to smaller, solar-system-sized scales. This alternative theory, first put forward by the Israeli physicist Mordehai Milgrom in 1983, is called Modified Newtonian Dynamics, or MOND, and its models work surprisingly well to explain the motion of individual galaxies, though far less well to account for the motion of entire clusters of them.
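  MOND’s best-known quantitative consequence can be sketched in a few lines (a₀ is Milgrom’s proposed acceleration constant; the galaxy mass below is an illustrative round number): at very low accelerations, the theory predicts that orbits settle at a flat speed that depends only on the galaxy’s total visible mass.

```python
# Deep-MOND limit: below Milgrom's acceleration scale a0, the theory
# predicts orbits settle at a flat speed v with v^4 = G * M * a0.
# The galaxy mass here is a round, illustrative figure.

G = 6.674e-11            # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30         # solar mass, kg
a0 = 1.2e-10             # Milgrom's constant, m/s^2 (fitted, approximate)

M = 1e11 * M_SUN         # visible mass of a large spiral (illustrative)
v_flat = (G * M * a0) ** 0.25
print(f"{v_flat / 1e3:.0f} km/s")   # ~200 km/s, with no dark matter at all

# The same observation that dark matter explains with unseen mass, MOND
# explains by changing how gravity behaves at tiny accelerations.
```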

  To most scientists, however, the idea of revising the law of gravity borders on heresy. For this reason, MOND has attracted only a handful of supporters. But, as long as dark-matter advocates can’t completely close the deal by detecting particles of the stuff, hope remains alive for MOND. And, the longer dark matter eludes direct detection, the more doubts about it will come to the fore. So, although Zwicky’s weird idea has now been embraced by the scientific mainstream, there’s a small but real chance that one day its fortunes could change again, making it a weird theory that became true, but then turned out not to be true after all.

  What if we live forever?

  Have you ever had a close brush with death? Perhaps you were crossing a street, one foot off the kerb, when a car sped by and missed you by inches. Perhaps an object fell from a tall building and almost hit you, but instead crashed to the ground a few feet away. Or perhaps you were deathly ill, but made a miraculous recovery. The varieties of potential near-misses are endless.

  Here’s a disturbing thought: perhaps you did die. Or rather, you died in one version of reality, while, in another (your here and now), you remained alive. This possibility is suggested by one of the strangest theories in physics: the many-worlds theory of Hugh Everett. It imagines that everything that can happen does happen, because the universe is constantly splitting into parallel realities in which every possibility is realized. So, all near-death experiences should produce outcomes both in which you survive and in which you don’t. These scenarios will exist simultaneously.

  Everett’s theory may sound more like science fiction than actual science. Nevertheless, it offers an elegant solution to several perplexing problems in both physics and cosmology, which has earned it the endorsement of a number of leading scientists. It might have won over even more supporters if it weren’t for the seriously bizarre implication that, if everything that can happen does happen, then, surely, in at least a handful of the many parallel worlds that might exist, we’re all going to find a way to keep cheating death and live forever.

  Everett’s theory emerged from the discipline of quantum mechanics, which was formed in the early twentieth century as researchers began to gain an understanding of the mysterious subatomic world. What physicists realized as they explored this realm was that, to their utter astonishment, the rules governing the behaviour of subatomic objects were very different from the rules governing objects in the everyday world around us. In particular, it became apparent that subatomic particles such as electrons could be in more than one place at the same time. In fact, they could be in many places simultaneously.

  This phenomenon is called superposition, and, if you’re not familiar with quantum mechanics, it may sound strange, but physicists have no doubt about it. They believe it to be absolutely real. Nowadays, it’s even being employed in real-life applications, such as quantum computers, which exploit the principle to explore many computational possibilities at once, giving them unprecedented speed at certain kinds of problems. Superposition is, however, exactly as weird as it sounds.

  Physicists were led to accept its reality as they struggled to predict and understand the movement of subatomic particles. In classical mechanics, if you fire a bullet out of a gun, you can predict exactly where it will hit. Its trajectory follows very logical rules. In the subatomic world, however, there’s no such certainty. Researchers realized that, when they fired a photon (a particle of light) or electron out of a gun, there was no way to predict exactly what trajectory it would take and where it would hit. It simply couldn’t be done.

  What they could do was map out the probability of where these particles would hit, but this added a new layer of confusion, because it turned out that this map didn’t form anything like the pattern they expected. Instead, it indicated that subatomic particles were moving in very illogical ways. For example, when they fired a beam of photons at a metal plate perforated by two slits, at a rate of one photon per second, what they expected was that a strike pattern of two straight lines would form on the wall behind the slits. This is what would happen if bullets were fired at a double-slitted metal plate. But, instead, the photons created a wave-interference pattern, like the pattern two ripples colliding in a pond would make.

  By the rules of classical mechanics, this should have been impossible. Researchers couldn’t make head nor tail of what was happening, until they considered the possibility that each individual photon was moving through both slits simultaneously. As counter-intuitive as it seemed, the wave-interference pattern had to be the result of each photon interfering with itself.

  To visualize this, imagine a photon simultaneously following every possible trajectory it can take. All these paths fan out like a wave that hits both slits. As the wave passes through the slits, it produces an interference pattern on the other side. All its potential trajectories coexist in superposition, interacting with each other like ghostly invisible lines of force.
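  To make that picture concrete, here is a minimal numerical sketch (the wavelength, slit spacing and screen distance are illustrative values, not figures from any particular experiment): each photon gets a complex amplitude from both slits, and the probability of detection at each point on the wall is the squared magnitude of their sum.

```python
import numpy as np

# Toy model of the double-slit experiment. Each photon's amplitude is the
# sum of contributions from the two slits (superposition); the detection
# probability is the squared magnitude of that sum (the Born rule).

wavelength = 500e-9        # 500 nm light (illustrative value)
slit_separation = 50e-6    # 50 micrometres between the slits (illustrative)
screen_distance = 1.0      # wall 1 metre behind the plate (illustrative)

x = np.linspace(-0.05, 0.05, 2001)   # positions along the wall, in metres

# Path length from each slit to each point on the wall.
r1 = np.hypot(screen_distance, x - slit_separation / 2)
r2 = np.hypot(screen_distance, x + slit_separation / 2)

# The photon takes both paths at once: add the two complex amplitudes.
k = 2 * np.pi / wavelength
amplitude = np.exp(1j * k * r1) + np.exp(1j * k * r2)

# Unnormalized detection probability at each point. Plotting it shows
# bright and dark fringes -- the wave-interference pattern. Keeping only
# one of the two terms above (a 'bullet' through a single slit) gives a
# featureless single bump instead.
probability = np.abs(amplitude) ** 2
```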

  This concept was put into mathematical form in 1925 by the Austrian physicist Erwin Schrödinger. He devised an equation that predicted the behaviour of quantum systems by plotting this ‘wave function’ of subatomic particles, producing a map of every possible trajectory a particle might take. Physicists continue to use Schrödinger’s equation to model the behaviour of quantum-mechanical systems with very high accuracy.
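  For reference, this is the standard textbook form of the time-dependent equation (the notation here is the conventional one, not the book’s own):

```latex
i\hbar \frac{\partial}{\partial t}\Psi(\mathbf{r},t) = \hat{H}\,\Psi(\mathbf{r},t),
\qquad
\hat{H} = -\frac{\hbar^{2}}{2m}\nabla^{2} + V(\mathbf{r},t)
```

  Here Ψ is the wave function, whose squared magnitude gives the probability of detecting the particle at a given place and time, and Ĥ is the Hamiltonian, the operator representing the system’s total energy.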

  But there was a catch. The concept of superposition worked brilliantly to explain the seemingly bizarre way that subatomic particles were moving, but it didn’t explain why, when a particle eventually hit the wall behind the double-slitted plate, it actually only hit it in one place. This defied the claim that the particle was following multiple trajectories simultaneously. How did the particle go from being in superposition to being in just one position?

  This was the question that perplexed physicists as they came to accept the reality of superposition. It was the enigma of how particles transformed, from probabilistic objects existing in multiple locations simultaneously, into definite objects fixed in one position in space. This came to be known as the measurement problem, because the transformation seemed to occur at the moment of measurement, or detection.

  In the 1920s, the physicists Niels Bohr and Werner Heisenberg came up with a solution to the measurement problem. They proposed that it was the act of observation which, by a method unknown, caused the infinite number of possible trajectories described by Schrödinger’s wave function to collapse down into just one trajectory. By merely looking at a particle in a state of superposition, they argued, an observer caused it to select a single position. Because Bohr was Danish and lived in Copenhagen, this came to be known as the Copenhagen interpretation of quantum mechanics.

  The interpretation had some odd philosophical implications. It suggested that reality didn’t exist unless it was observed – that we, as observers, are constantly creating our own reality somehow from the waves of probability surrounding us. Despite this weirdness, the majority of the scientific community rapidly accepted the Copenhagen interpretation as the solution to the measurement problem. This might have been because Bohr commanded enormous respect. No one dared contradict him. Plus, there didn’t seem to be any other compelling solution available.

  Not all scientists, however, were happy with Bohr and Heisenberg’s solution. It seemed ludicrous to some that the act of observation might shape physical reality. Einstein himself was reported to have complained that he couldn’t believe a mouse could bring about drastic changes in the universe simply by looking at it.

  Hugh Everett, a graduate student in physics at Princeton during the early 1950s, was among these sceptics. The entire premise of the Copenhagen interpretation seemed illogical to him. He couldn’t understand how a subatomic particle would even know it was being observed. Then, one night, as he was sharing a glass of sherry with some friends, a different solution to the measurement problem popped into his head. It occurred to him that perhaps the strange phenomenon of superposition never actually disappeared upon being observed. Perhaps the wave function never collapsed. Perhaps all the possible trajectories of a particle existed and continued to exist simultaneously in parallel realities. It just seemed to us observers that the wave function collapsed because we were unable to perceive more than one of these realities at a time.

  Everett quickly became enamoured of his idea and decided to make it the subject of his doctoral dissertation, which he completed in 1957. His argument, as he developed it, was that the Schrödinger equation wasn’t just a mathematical equation. It was a literal description of reality. Every possible trajectory that the wave function described was equally real, in a state of superposition – as were we, the observers. Therefore, when a researcher measured a particle, multiple copies of his own self were viewing every possible trajectory of that particle, each of his copies thinking it was seeing the only trajectory.

  Carrying this argument to its logical conclusion, Everett posited that the fundamental nature of the universe was very different from what we perceived it to be. Our senses deceived us into believing that there was only one version of reality, but the truth was that there were many versions – a vast plurality of possible worlds – existing simultaneously in superposition.

  What this implied was that anything that physically could happen must happen, because every possible trajectory of every wave function in the universe was unfolding simultaneously. This didn’t allow for the existence of supernatural phenomena, such as magic or extrasensory powers, because these aren’t physically possible. But it did suggest that there were versions of the universe in which every physically possible scenario played out. Somewhere out there, in the great quantum blur through which our consciousness was navigating, there had to be versions of reality in which the Earth never formed, life never began, the dinosaurs never went extinct and every single one of us won the mega-lottery. Every possible chain of events, no matter how improbable, had to exist.

  Initially, the scientific community ignored Everett’s dissertation. For thirteen years, it languished in obscurity. In response to this silent rejection, Everett abandoned academia and took a job with the Pentagon, analysing the strategy of nuclear weapons. He never published again on the topic of quantum mechanics. But Everett’s theory did eventually catch the attention of cosmologist Bryce DeWitt, who became its first fan. Thanks to his promotional efforts, which included republishing Everett’s dissertation in book form and coining the name ‘many-worlds theory’, it reached a wider audience.

  And, once they became aware of it, physicists didn’t dismiss Everett’s idea out of hand. Predictably, many of them bristled at the idea that we’re all constantly splitting into parallel copies. After all, the world around us seems reassuringly solid and singular. But the theory did manage to find a handful of converts, who noted that, as a solution to the measurement problem, it worked, and it did so without having to invest the act of observation with magical powers, as the Copenhagen interpretation did.

  Some cosmologists were also intrigued by it. They noted that it could explain the ‘fine-tuning’ problem that had recently come to their attention. The problem was that, in order for life to be possible, hundreds of aspects of the design of the universe had to be fine-tuned just right during the Big Bang. For instance, it was necessary for protons and neutrons to have almost the same mass, for the relative strengths of electromagnetism and gravity to be exactly as we observe them to be, and for the universe to be expanding at precisely the speed that it is – and there are many more constants and ratios of this kind that had to be perfectly calibrated. If any of these values had been different, even just slightly, life would never have been possible. All of them, however, seem somewhat arbitrary. It’s easy to imagine they could have been different. So why did they all come in at exactly the right numbers needed for life to emerge?

  If there’s only one universe, the odds of life existing seem beyond incredible. It would be like rolling double sixes a million times in a row. But if there are many parallel universes, in which all physical possibilities occur, then some of them are bound to be appropriately fine-tuned to allow the emergence of life.
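  The dice analogy can even be put into numbers (the figures below are the analogy’s, not a cosmological estimate). The probability is far too small for a computer to hold directly, so the calculation has to be done with logarithms:

```python
from math import log10

p_single = 1 / 36        # probability of double sixes on one roll
n = 1_000_000            # a million rolls in a row, as in the analogy

# (1/36) ** 1_000_000 underflows an ordinary float to zero, so work in
# log space instead.
log_p = n * log10(p_single)
print(f"probability ≈ 10^{log_p:.0f}")   # roughly 10^-1556303
```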

  A technical issue, however, still made many physicists reluctant to accept Everett’s theory. They couldn’t understand why all the trajectories of a particle would become independent of each other. Why would we ever perceive ourselves to be locked in one reality? Why didn’t all the many worlds remain blurred together, as one?

  In the 1980s, the physicist Dieter Zeh, of the University of Heidelberg, developed the theory of decoherence, which provided a possible answer. It hypothesized that the wave function of a particle interacts with the wave functions of surrounding particles, and, as it does so, it tends to ‘decohere’. Particle trajectories entangle with each other, tying up, as if in knots, and this causes the various parallel worlds to become independent of each other.
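  A toy calculation makes the effect visible. The sketch below is an illustrative single-qubit ‘dephasing’ model (a standard textbook simplification, not Zeh’s own mathematics): a particle sits in an equal superposition of two states, and its interaction with the environment steadily erases the coherence between them.

```python
import numpy as np

# Density matrix of an equal superposition of two states. The diagonal
# entries are the populations; the off-diagonal entries (the 'coherences')
# are what allow the two branches to interfere with each other.
T2 = 1.0                              # illustrative decoherence timescale

for t in range(6):                    # time, in units of T2
    decay = np.exp(-t / T2)           # environmental dephasing factor
    rho = np.array([[0.5,          0.5 * decay],
                    [0.5 * decay,  0.5        ]])
    print(f"t = {t}  coherence = {rho[0, 1]:.4f}")

# As the coherences decay towards zero, the superposition becomes, for all
# practical purposes, a classical either/or mixture: the branches can no
# longer interfere, which is what makes the parallel worlds independent.
```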

  Decoherence was a highly sophisticated mathematical theory, and, when coupled with Everett’s many-worlds theory, the two provided a compellingly complete model of the underlying subatomic reality we inhabit. As a result, support began to shift away from the Copenhagen interpretation and towards the many-worlds theory, and this trend has continued to the present. One reason the many-worlds theory continues to be resisted by most scientists, though, is that it leads to such bizarre implications. In particular, there’s that immortality feature.

  The first acknowledgement in print of the death-defying implication of the many-worlds theory appeared in 1971. The physicist Mendel Sachs wrote to the journal Physics Today noting that, if Everett’s theory was true, it could cheer up a passenger on an aeroplane about to crash, because he could reflect that in some other branch of the universe the plane was certain to land, safe and sound.

  By the 1980s, it had sunk in among scientists that it wasn’t just a temporary escape from death that the many-worlds theory promised, but full-blown immortality. After all, if whatever can happen, does happen, then every time we face the possibility of death, at least one version of our self must find a way to carry on, because there’s always some combination of events that would save us. Our chances of survival will grow increasingly improbable as time goes on, but improbable is not the same as impossible. In an obscure branch of the quantum universe, we’ll live forever.

  In 1998, the physicist Max Tegmark, one of Everett’s most vocal champions, pointed out an interesting aspect of this immortality feature: it provides us with a way to test the many-worlds theory so we can know for sure whether it’s true or false.

  Tegmark’s idea was to design a gun that would randomly either fire a bullet or merely make an audible click each time someone pulled the trigger, with a fifty–fifty chance of either outcome. The experimenter would then place his head in front of the barrel and instruct an assistant to fire. Tegmark figured that, if the gun clicked, rather than fired, ten times in a row, this would be ample proof of the validity of the many-worlds theory.
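  The arithmetic behind the ten-click threshold is easy to check by brute force. The sketch below (a plain enumeration of outcomes, not code from Tegmark’s paper) lists every equally weighted branch of ten trigger pulls:

```python
from itertools import product

# Every possible branch of ten trigger pulls: 0 = click, 1 = fire.
branches = list(product([0, 1], repeat=10))

# The experimenter survives a branch only if every single trial clicked.
surviving = [b for b in branches if not any(b)]

print(len(branches))    # 1024 equally weighted branches
print(len(surviving))   # exactly 1 branch with a living experimenter

# Under many-worlds, that one branch is guaranteed to exist, and any
# experimenter still able to report a result must be standing in it, so
# he always hears ten clicks. Under a single-world reading, the same
# record is just a 1-in-1024 fluke -- which is why nobody else would be
# persuaded by the survivor's story.
```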

  This ‘quantum suicide’ experiment is well within the scope of available technology. The problem, of course, is that no one in their right mind would want to be the guinea pig, and no one would actually believe the experimenter even if they survived.

  Philosophers have also been intrigued by the concept of quantum immortality. There’s now a small genre of philosophical literature devoted to exploring its practical and ethical implications. One of the ongoing debates is about whether it would be rational for a believer in the many-worlds theory to play Russian roulette, assuming a large monetary award was involved. After all, if the theory is true, at least one copy of the believer is guaranteed to win the bet every time. The other copies will be dead and therefore beyond caring.

  The philosophical verdict on this question is mixed. It’s true, some acknowledge, that you might make a few of your future selves wealthier, but most point out that there are many negatives to consider. Even if you don’t care about killing off some of your future selves, what about the friends and family of those selves that will suffer the loss? Also, there’s the simultaneous guarantee that some of your future selves will survive in a diminished capacity, either with brain damage or half their face blown off. It seems a high price to pay for the knowledge that, in some parallel world, you’re slightly richer.

 
