There you have it—the algebraic recipe for all occasions when you want to convert matter into energy or energy into matter. In those simple sentences, Einstein unwittingly gave astrophysicists a computational tool, E = mc², that extends their reach from the universe as it now is, all the way back to infinitesimal fractions of a second after its birth.
The most familiar form of energy is the photon, a massless, irreducible particle of light. You are forever bathed in photons: from the Sun, the Moon, and the stars to your stove, your chandelier, and your night-light. So why don’t you experience E = mc² every day? The energy of visible-light photons falls far below the rest-mass energy of even the least massive subatomic particles. There is nothing else those photons can become, and so they live happy, relatively uneventful lives.
Want to see some action? Start hanging around gamma-ray photons that have some real energy—at least 200,000 times more than that of visible photons. You’ll quickly get sick and die of cancer, but before that happens you’ll see pairs of electrons—one matter, the other antimatter; one of many dynamic duos in the particle universe—pop into existence where photons once roamed. As you watch, you will also see matter-antimatter pairs of electrons collide, annihilating each other and creating gamma-ray photons once again. Increase the photons’ energy by another factor of 2,000, and you now have gamma rays with enough energy to turn susceptible people into the Hulk. But pairs of these photons now have enough energy to spontaneously create the more massive neutrons, protons, and their antimatter partners.
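As a rough numerical check of those two factors, using standard rest energies and taking a representative visible photon at about 2.5 electron volts (that 2.5 eV figure is an assumed typical value, not one given in the text):

% assumed: a typical visible photon carries roughly 2.5 eV
\[
\frac{m_e c^2}{E_{\mathrm{visible}}} \approx \frac{511{,}000\ \mathrm{eV}}{2.5\ \mathrm{eV}} \approx 2\times10^{5},
\qquad
\frac{m_p c^2}{m_e c^2} \approx \frac{938.3\ \mathrm{MeV}}{0.511\ \mathrm{MeV}} \approx 1836.
\]

So matching an electron’s rest energy takes roughly 200,000 times the energy of a visible photon, and reaching the proton and neutron scale takes roughly another factor of 2,000, in line with the figures above.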
High-energy photons don’t hang out just anywhere, but the places they do needn’t be imagined: for gamma rays, almost any environment hotter than a few billion degrees will do just fine.
The cosmological significance of particles and energy packets transmuting into each other is staggering. Currently the temperature of our expanding universe, calculated from measurements of the microwave bath of light that pervades all of space, is a mere 2.73 degrees Kelvin. Like the photons of visible light, microwave photons are too cool to have any realistic ambitions to become a particle via E = mc²; in fact, there are no known particles they can spontaneously become. Yesterday, however, the universe was a little bit smaller and a little bit hotter. The day before, it was smaller and hotter still. Roll the clocks backward some more—say, 13.7 billion years—and you land squarely in the primordial soup of the big bang, a time when the temperature of the cosmos was high enough to be astrophysically interesting.
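The cooling described here follows a simple and well-established scaling: the temperature of the cosmic photon bath falls in inverse proportion to the scale factor a that tracks the size of the universe. In sketch form:

\[
T(a) = \frac{T_0}{a}, \qquad T_0 \approx 2.73\ \mathrm{K}\ \text{today}\ (a = 1),
\]

so a universe half its present size was twice as hot, and running a back toward zero drives the temperature toward the billions of degrees at which E = mc² gets busy.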
The way space, time, matter, and energy behaved as the universe expanded and cooled from the beginning is one of the greatest stories ever told. But to explain what went on in that cosmic crucible, you must find a way to merge the four forces of nature into one, and find a way to reconcile two incompatible branches of physics: quantum mechanics (the science of the small) and general relativity (the science of the large).
Spurred by the successful marriage of quantum mechanics and electromagnetism in the mid-twentieth century, physicists set off on a race to blend quantum mechanics and general relativity (into a theory of quantum gravity). Although we haven’t yet reached the finish line, we know exactly where the high hurdles are: during the “Planck era.” That’s the phase up to 10⁻⁴³ seconds (one ten-million-trillion-trillion-trillionth of a second) after the beginning, and before the universe grew to 10⁻³⁵ meters (one hundred-billion-trillion-trillionth of a meter) across. The German physicist Max Planck, after whom these unimaginably small quantities are named, introduced the idea of quantized energy in 1900 and is generally credited with being the father of quantum mechanics.
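For reference, the Planck time and Planck length quoted here come from combining the constants that govern gravity, quantum mechanics, and relativity; the standard definitions are:

\[
t_P = \sqrt{\frac{\hbar G}{c^5}} \approx 5.4\times10^{-44}\ \mathrm{s},
\qquad
\ell_P = \sqrt{\frac{\hbar G}{c^3}} \approx 1.6\times10^{-35}\ \mathrm{m},
\]

consistent with the rounded 10⁻⁴³ second and 10⁻³⁵ meter figures in the text.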
Not to worry, though. The clash between gravity and quantum mechanics poses no practical problem for the contemporary universe. Astrophysicists apply the tenets and tools of general relativity and quantum mechanics to very different classes of problems. But in the beginning, during the Planck era, the large was small, and there must have been a kind of shotgun wedding between the two. Alas, the vows exchanged during that ceremony continue to elude us, and so no (known) laws of physics describe with any confidence the behavior of the universe during the brief interregnum.
At the end of the Planck era, however, gravity wriggled loose from the other, still-unified forces of nature, achieving an independent identity nicely described by our current theories. As the universe aged through 10⁻³⁵ seconds it continued to expand and cool, and what remained of the unified forces split into the electroweak and the strong nuclear forces. Later still, the electroweak force split into the electromagnetic and the weak nuclear forces, laying bare the four distinct forces we have come to know and love—with the weak force controlling radioactive decay, the strong force binding the nucleus, the electromagnetic force binding molecules, and gravity binding bulk matter. By now, the universe was a mere trillionth of a second old. Yet its transmogrified forces and other critical episodes had already imbued our universe with fundamental properties each worthy of its own book.
While the universe dragged on for its first trillionth of a second, the interplay of matter and energy was incessant. Shortly before, during, and after the strong and electroweak forces parted company, the universe was a seething ocean of quarks, leptons, and their antimatter siblings, along with bosons, the particles that enable their interactions. None of these particle families is thought to be divisible into anything smaller or more basic. Fundamental though they are, each comes in several species. The ordinary visible-light photon is a member of the boson family. The leptons most familiar to the nonphysicist are the electron and perhaps the neutrino; and the most familiar quarks are…well, there are no familiar quarks. Each species has been assigned an abstract name that serves no real philological, philosophical, or pedagogical purpose except to distinguish it from the others: up and down, strange and charmed, and top and bottom.
Bosons, by the way, are simply named after the Indian scientist Satyendranath Bose. The word “lepton” derives from the Greek leptos, meaning “light” or “small.” “Quark,” however, has a literary and far more imaginative origin. The physicist Murray Gell-Mann, who in 1964 proposed the existence of quarks, and who at the time thought the quark family had only three members, drew the name from a characteristically elusive line in James Joyce’s Finnegans Wake: “Three quarks for Muster Mark!” One thing quarks do have going for them: all their names are simple—something chemists, biologists, and geologists seem incapable of achieving when naming their own stuff.
Quarks are quirky beasts. Unlike protons, each with an electric charge of +1, and electrons, with a charge of −1, quarks have fractional charges that come in thirds. And you’ll never catch a quark all by itself; it will always be clutching onto other quarks nearby. In fact, the force that keeps two (or more) of them together actually grows stronger the more you separate them—as if they were attached by some sort of subnuclear rubber band. Separate the quarks enough, and the rubber band snaps: the stored energy summons E = mc² to create a new quark at each end, leaving you back where you started.
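A quick bit of bookkeeping shows how those thirds add up for the familiar nucleons, using the standard charge assignments for the up and down quarks:

\[
q_u = +\tfrac{2}{3}, \quad q_d = -\tfrac{1}{3}; \qquad
\text{proton } (uud):\ +\tfrac{2}{3}+\tfrac{2}{3}-\tfrac{1}{3} = +1, \qquad
\text{neutron } (udd):\ +\tfrac{2}{3}-\tfrac{1}{3}-\tfrac{1}{3} = 0.
\]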
But during the quark-lepton era the universe was dense enough for the average separation between unattached quarks to rival the separation between attached quarks. Under those conditions, allegiance between adjacent quarks could not be unambiguously established, and they moved freely among themselves, in spite of being collectively bound to each other. The discovery of this state of matter, a kind of quark soup, was reported for the first time in 2002 by a team of physicists at the Brookhaven National Laboratory.
Strong theoretical evidence suggests that an episode in the very early universe, perhaps during one of the force splits, endowed the universe with a remarkable asymmetry, in which particles of matter barely outnumbered particles of antimatter by a billion-and-one to a billion. That small difference in population hardly got noticed amid the continuous creation, annihilation, and re-creation of quarks and antiquarks, electrons and antielectrons (better known as positrons), and neutrinos and antineutrinos. The odd man out had plenty of opportunities to find someone to annihilate with, and so did everybody else.
But not for much longer. As the cosmos continued to expand and cool, it became the size of the solar system, with the temperature dropping rapidly past a trillion degrees Kelvin.
A millionth of a second had passed since the beginning.
This tepid universe was no longer hot enough or dense enough to cook quarks, and so they all grabbed dance partners, creating a permanent new family of heavy particles called hadrons (from the Greek hadros, meaning “thick”). That quark-to-hadron transition soon resulted in the emergence of protons and neutrons as well as other, less familiar heavy particles, all composed of various combinations of quark species. The slight matter-antimatter asymmetry afflicting the quark-lepton soup now passed to the hadrons, but with extraordinary consequences.
As the universe cooled, the amount of energy available for the spontaneous creation of basic particles dropped. During the hadron era, ambient photons could no longer invoke E = mc² to manufacture quark-antiquark pairs. Not only that, the photons that emerged from all the remaining annihilations lost energy to the ever-expanding universe and dropped below the threshold required to create hadron-antihadron pairs. For every billion annihilations—leaving a billion photons in their wake—a single hadron survived. Those loners would ultimately get to have all the fun: serving as the source of galaxies, stars, planets, and people.
Without the billion-and-one to a billion imbalance between matter and antimatter, all mass in the universe would have annihilated, leaving a cosmos made of photons and nothing else—the ultimate let-there-be-light scenario.
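The arithmetic behind that imbalance is worth making explicit. Take a representative batch of 1,000,000,001 particles of matter and 1,000,000,000 of antimatter, the billion-and-one to a billion just described:

\[
1{,}000{,}000{,}001 - 1{,}000{,}000{,}000 = 1\ \text{survivor},
\]

while the roughly 10⁹ annihilations in that batch dump their energy into photons, leaving on the order of a billion photons for every surviving particle of ordinary matter. That ratio, about a billion photons per baryon, is in line with what is measured in the universe today.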
By now, one second of time has passed.
The universe has grown to a few light-years across, about the distance from the Sun to its closest neighboring stars. At a billion degrees, it’s still plenty hot—and still able to cook electrons, which, along with their positron counterparts, continue to pop in and out of existence. But in the ever-expanding, ever-cooling universe, their days (seconds, really) are numbered. What was true for hadrons is true for electrons: eventually only one electron in a billion survives. The rest get annihilated, together with their antimatter sidekicks the positrons, in a sea of photons.
Right about now, one electron for every proton has been “frozen” into existence. As the cosmos continues to cool—dropping below 100 million degrees—protons fuse with protons as well as with neutrons, forming atomic nuclei and hatching a universe in which 90 percent of these nuclei are hydrogen and 10 percent are helium, along with trace amounts of deuterium, tritium, and lithium.
Two minutes have now passed since the beginning.
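As a rough conversion of those number fractions into mass fractions, assuming each helium nucleus weighs about four times a hydrogen nucleus and ignoring the trace species (a simplification not made in the text):

% assumed: m(He) ≈ 4 m(H); deuterium, tritium, and lithium neglected
\[
\text{helium mass fraction} \approx \frac{0.10\times 4}{0.90\times 1 + 0.10\times 4} = \frac{0.4}{1.3} \approx 0.31,
\]

so the 90/10 split by number puts roughly 70 percent of the ordinary mass in hydrogen and 30 percent in helium; the often-quoted split of about 75/25 by mass corresponds to number fractions closer to 92 percent hydrogen and 8 percent helium.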
Not for another 380,000 years does much happen to our particle soup. Throughout these millennia the temperature remains hot enough for electrons to roam free among the photons, batting them to and fro.
But all this freedom comes to an abrupt end when the temperature of the universe falls below 3,000 degrees Kelvin (about half the temperature of the Sun’s surface), and all the electrons combine with free nuclei. The marriage leaves behind a ubiquitous bath of visible-light photons, completing the formation of particles and atoms in the primordial universe.
As the universe continues to expand, its photons continue to lose energy, dropping from visible light to infrared to microwaves.
As we will soon discuss in more detail, everywhere astrophysicists look, they find an indelible fingerprint of 2.73-degree microwave photons, whose pattern on the sky retains a memory of the distribution of matter just before atoms formed. From this we can deduce many things, including the age and shape of the universe. And although atoms are now part of daily life, Einstein’s famous equation still has plenty of work to do—in particle accelerators, where matter-antimatter particle pairs are created routinely from energy fields; in the core of the Sun, where 4.4 million tons of matter are converted into energy every second; and in the core of every other star.
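The Sun’s figure follows directly from E = mc². Taking the solar luminosity to be about 3.8 × 10²⁶ watts (a standard value, not given in the text):

% assumed: solar luminosity of roughly 3.8e26 W
\[
\frac{dm}{dt} = \frac{L_\odot}{c^2} \approx \frac{3.8\times10^{26}\ \mathrm{W}}{(3.0\times10^{8}\ \mathrm{m/s})^2} \approx 4\times10^{9}\ \mathrm{kg/s},
\]

about four billion kilograms, or roughly four million metric tons, of mass converted to energy every second, in line with the figure quoted above.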
It also manages to occupy itself near black holes, just outside their event horizons, where particle-antiparticle pairs can pop into existence at the expense of the black hole’s formidable gravitational energy. Stephen Hawking first described that process in 1975, showing that the mass of a black hole can slowly evaporate by this mechanism. In other words, black holes are not entirely black. Today the phenomenon is known as Hawking radiation and is a reminder of the continued fertility of E = mc².
But what happened before all this? What happened before the beginning?
Astrophysicists have no idea. Or, rather, our most creative ideas have little or no grounding in experimental science. Yet certain types of religious people tend to assert, with a tinge of smugness, that something must have started it all: a force greater than all others, a source from which everything issues. A prime mover.
In the mind of such a person, that something is, of course, God.
But what if the universe was always there, in a state or condition we have yet to identify—a multiverse, for instance? Or what if the universe, like its particles, just popped into existence from nothing?
Such replies usually satisfy nobody. Nonetheless, they remind us that ignorance is the natural state of mind for a research scientist on the ever-shifting frontier. People who believe they are ignorant of nothing have neither looked for, nor stumbled upon, the boundary between what is known and unknown in the cosmos. And therein lies a fascinating dichotomy. “The universe always was” goes unrecognized as a legitimate answer to “What was around before the beginning?” But for many religious people, the answer “God always was” is the obvious and pleasing answer to “What was around before God?”
No matter who you are, engaging in the quest to discover where and how things began tends to induce emotional fervor—as if knowing the beginning bestows upon you some form of fellowship with, or perhaps governance over, all that comes later. So what is true for life itself is no less true for the universe: knowing where you came from is no less important than knowing where you are going.
FORTY-ONE
HOLY WARS
At nearly every public lecture that I give on the universe, I try to reserve adequate time at the end for questions. The succession of subjects is predictable. First, the questions relate directly to the lecture. They next migrate to sexy astrophysical subjects such as black holes, quasars, and the big bang. If I have enough time left over to answer all questions, and if the talk is in America, the subject eventually reaches God. Typical questions include, “Do scientists believe in God?” “Do you believe in God?” “Do your studies in astrophysics make you more or less religious?”
Publishers have come to learn that there is a lot of money in God, especially when the author is a scientist and when the book title includes a direct juxtaposition of scientific and religious themes. Successful books include Robert Jastrow’s God and the Astronomers, Leon M. Lederman’s The God Particle, Frank J. Tipler’s The Physics of Immortality: Modern Cosmology, God, and the Resurrection of the Dead, and Paul Davies’s two works God and the New Physics and The Mind of God. Each author is either an accomplished physicist or astrophysicist and, while the books are not strictly religious, they encourage the reader to bring God into conversations about astrophysics. Even the late Stephen Jay Gould, a Darwinian pit bull and devout agnostic, joined the title parade with his work Rocks of Ages: Science and Religion in the Fullness of Life. The financial success of these published works indicates that you get bonus dollars from the American public if you are a scientist who openly talks about God.
After the publication of The Physics of Immortality, which explored whether the laws of physics could allow you and your soul to exist long after you are gone from this world, Tipler’s book tour included many well-paid lectures to Protestant religious groups. This lucrative subindustry has further blossomed in recent years due to efforts made by the wealthy founder of the Templeton investment fund, Sir John Templeton, to find harmony and consilience between science and religion. In addition to sponsoring workshops and conferences on the subject, the Templeton Foundation seeks out widely published religion-friendly scientists to receive an annual award whose cash value exceeds that of the Nobel Prize.
Let there be no doubt that, as they are currently practiced, there is no common ground between science and religion. As was thoroughly documented in the nineteenth-century tome A History of the Warfare of Science with Theology in Christendom, by the historian and onetime president of Cornell University Andrew D. White, history reveals a long and combative relationship between religion and science, depending on who was in control of society at the time. The claims of science rely on experimental verification, while the claims of religions rely on faith. These are irreconcilable approaches to knowing, which ensures an eternity of debate wherever and whenever the two camps meet. But just as in hostage negotiations, it’s probably best to keep both sides talking to each other.
The schism did not come about for want of earlier attempts to bring the two sides together. Great scientific minds, from Claudius Ptolemy of the second century to Isaac Newton of the seventeenth, invested their formidable intellects in attempts to deduce the nature of the universe from the statements and philosophies contained in religious writings. Indeed, by the time of his death, Newton had penned more words about God and religion than about the laws of physics, including futile attempts to use biblical chronology to understand and predict events in the natural world. Had any of these efforts succeeded, science and religion today might be largely indistinguishable.
The argument is simple. I have yet to see a successful prediction about the physical world that was inferred or extrapolated from the content of any religious document. Indeed, I can make an even stronger statement. Whenever people have tried to make accurate predictions about the physical world using religious documents, they have been famously wrong. By a prediction, I mean a precise statement about the untested behavior of objects or phenomena in the natural world, logged before the event takes place. When your model predicts something only after it has happened, you have instead made a “postdiction.” Postdictions are the backbone of most creation myths and, of course, of the Just So Stories of Rudyard Kipling, in which explanations of everyday phenomena account for what is already known. In the business of science, however, a hundred postdictions are barely worth a single successful prediction.