By the late 1950s, steady state was starting to look a little unsteady. Newly developed radio telescopes were beginning to peer into the deep cosmos, and their initial results revealed an unexpectedly high number of intense radio sources at great distances, and a relative radio silence nearby. That's a hint—only a hint—that the universe may be the same in space, but different in time. If light takes a certain amount of time to travel from place to place, then the farther we look, the deeper we see into the past. If these radio emitters, later dubbed quasars, are out there but not around here, then that means they were more common in the past. Perhaps, then, the universe has changed character with time.
Around the same time that Hoyle was pooh-poohing the big bang, other scientists were working out the full physical implications of such a radical universe. Four in particular—Ralph Alpher, Robert Herman, George Gamow, and Robert Dicke—semi-independently came to a remarkable conclusion. If the universe were smaller in the past, then it must have been hotter. Eventually, at some distant time, it should have been so small, dense, and hot that it was a plasma.
But at a specific time, a switch would have flipped, juuuust as the universe cooled by the right amount, and the cosmos would have gone from plasma to not-plasma. At that moment, the light trapped in the plasma would have been set free, streaming across the cosmos ever since. And that radiation has a calculable temperature, based on our knowledge of plasma physics (which was all the rage at the time), though that temperature would be greatly reduced by the present epoch, diluted and stretched by billions of years of expansion.
They initially calculated a temperature of a bare few degrees above absolute zero—apparently our present-day universe is indeed very cold—corresponding to a peak blackbody wavelength firmly in the microwave band. Additionally, we should be completely soaked in this radiation; if our universe is truly homogeneous, it should fill out the sky equally in all directions, with nary a deviation in sight.
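If you want to check that claim yourself, Wien's displacement law ties a blackbody's temperature to the wavelength where it glows brightest. Here's a minimal sketch in Python (my own illustration, not from the original papers, using today's measured CMB temperature of about 2.725 K, a value pinned down long after those first estimates):

```python
# A minimal sketch (my illustration): Wien's displacement law says a
# blackbody's peak wavelength is b / T, with b a measured constant.

WIEN_B = 2.898e-3  # Wien's displacement constant, in meter-kelvins

def peak_wavelength_m(temp_k: float) -> float:
    """Peak blackbody wavelength (meters) at a given temperature (kelvin)."""
    return WIEN_B / temp_k

# Today's measured CMB temperature is about 2.725 K:
print(f"{peak_wavelength_m(2.725) * 1e3:.2f} mm")  # ~1.06 mm: microwaves
```

A few degrees above absolute zero puts the peak around a millimeter, right where the microwave band lives.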
So all they needed to do was design, build, test, and operate a microwave antenna and search for this “cosmic microwave background,” and they'd be set.
At the same time, two engineers at Bell Labs, Arno Penzias and Robert Wilson, who, bless their hearts, knew absolutely nothing about cosmology, were designing, building, testing, and operating a microwave antenna for their own industrial purposes.
It was the “testing” part that was giving them trouble. Few people in human history had attempted to construct such a sensitive microwave detector, so I can't fault them for making things up as they went along. They built everything perfectly, but try as they might, they couldn't get rid of a constant background hiss from their instrument.
They tried the usual things. Turning it off and on again. Making sure the cables were plugged into the right spot. Replacing said cables. Testing for interference.
They tried the unusual things. Calling up the nearby army base to ask—politely—if they were transmitting at these frequencies. Cleaning the pigeon poop off the antenna. Just shooting all the dang pigeons.
I'm sure it was the oddest thing they'd ever seen. No matter where they pointed the antenna, no matter the time of day, no matter the season, there was this constant background static.
After years of banging their heads against the wall, they wondered if this hiss might be real—and might be extraterrestrial. So they sent out some feelers to the astronomical community, and before long the Dicke crew caught wind of it. They met with Penzias and Wilson. They chatted. They both came to the same conclusion: they found it. The cosmic microwave background. The afterglow of the big bang itself.
The result was two side-by-side papers, a flash heard around the world. One paper, written by the physicists, summarized the current state of the art in big bang thinking, this astounding insight that our universe was fundamentally different in the past than it is today, and that this difference is detectable and measurable.4 The other, written by the engineers, summarized the observations.5
Penzias and Wilson won a Nobel Prize for their work in failing to find a source of static hiss in their fancy antenna. There's a good chance you'd never heard of the others before you read their names a few paragraphs ago.
C'est la vie.
The cosmic microwave background, or CMB for those in a hurry, was the nail in the coffin for Hoyle's steady-state theory. The perfect cosmological principle, as lovely as it sounds, doesn't appear to apply to our universe. While Hoyle would continue to fight the good fight past 1965, the games he would have to play to reconcile the steady-state model with the overwhelming abundance of data stretched credulity far too thin.
Steady state predicted that the universe ought to be the same in the past as it is today. But here we are, bathed in relic radiation generated billions of years ago. It was found across the sky, almost perfectly matching a blackbody spectrum (indeed, it's the most perfect blackbody found in nature, besting even human-made ones), with almost no variation from point to point. It couldn't be generated by stars or galaxies—their distribution is far too lumpy to explain the smoothness of the background signal. It truly appeared to be a background, a source of light sitting behind everything else we can see.
If you could put on microwave goggles, you could detect this bath of radiation—if only faintly. Although the CMB is the largest reservoir of photons in the universe, our universe is very, very large nowadays, so those photons are spread thin. But build a simple microwave receiver, and you'll pick it up. If you've ever encountered an old rabbit-ears TV that's stuck between channels, you've seen with your own eyes this fossil from a distant age—about 1 percent of the static in our lives comes from the cosmic microwave background.
The cosmic microwave background pretty much killed off every other competing theory as well. Nothing else could fit; no other idea could make the cut. The raw observational data from the preceding three decades were simply overwhelming.
Our universe was different in the past, and it will be different in the future. This is the ultimate, if initially unpalatable, answer to Olbers’ paradox. Why aren't we surrounded by an infinity of stars covering every square degree of the sky above? Because at a certain time in the past, there were no stars. Our universe may be old, but it's only so old.
The cosmos may be infinite in size (we'll get to that later), but it certainly isn't infinite in time, and it's growing larger every day. In the past it was smaller, hotter, and denser. How did it arise? What were the earliest moments like? These are very difficult questions to answer, if indeed they are valid questions at all.
But they are questions we'll eventually have to face, because the facts of our observations push us to that inevitable, uncomfortable conclusion.
Have your own personal theory of the history of the universe? That's fine—science thrives on creativity. But the knife of observation is sharp and is perfectly willing to cut your precious idea down to size. If you want your cosmology to work, you have to explain the existence and properties of the cosmic microwave background. Its presence is inarguable and its implications unavoidable.
While the subject was difficult to think about, at least theory and observations were in accord…for a brief moment. The angst of the nineteenth century was slowly dissolving as new understanding poured forth from the chalkboards—and now computers!—of theorists and the instruments of observers. To be clear, nobody really enjoyed the answers they were getting, but at least the picture of the universe was clicking into place.
The consternation about the complex nature of the stars was still there, and I'll resolve that tension in later chapters. It was joined by a growing despair at the true scale, both in time and space, of the universe.
In some ways, a finite age to the universe is more troubling than the alternative. In an infinite universe (either in the static or steady sense), at least you can take comfort in the fact that this is just the way things are—that the universe simply persists, unchanging, through the deepness of time. But with the big bang, we know now that the universe has a past…and a future. And that both of these are different from the present.
While the picture of the cosmos was starting to sharpen into an unpleasant focus, at least things were making sense. Gravity, the feeblest of forces, was able to shape and govern the majesty of the heavens—Newton would have never guessed the magnitude of his initial insight! Over time, the discovery of the nuclear forces would help us understand the earliest epochs in our cosmic history, and also the mysterious processes in the hearts of stars.
The universe was revealing itself to be larger, more complex, and made of deeper stuff than we ever realized before. And while it looked, for a brief moment, like we had solved some of the largest riddles of our age—the true scope of the cosmos—seeds of mystery were already beginning to grow.
The story of the initial moments in our universe, from the deepest mysteries of the first second to the exotic yet understandable plasma physics of the generation of the cosmic microwave background, has been a study in contrasts. Of radiation versus matter. Of the ungluing of forces and incredible expansion. Of detailed particle interactions leading to an imbalance of matter.
And now we've reached a point where to paint a better picture of those first instants, and to give context for what's to come, we have to turn our focus inward. Deeply inward, into the subatomic realm. When Kepler asserted that the motions of the heavens governed our lives here on Earth, and then Newton realized that the physics of gravity is universal, I doubt they would have suspected that we were going to take things this far.
For here we are, in both the story of the universe and the story of our understanding of it, at a point where our knowledge of fundamental physics doesn't just govern arcane and complicated interactions in particle colliders. No, it determines the history and even fate of the universe at the largest scales.
Over the centuries we've come to realize that it's not just the laws of gravity that hold across the heavens and the Earth. The same goes for every force, every law, every interaction. Thermodynamics, electromagnetism, nuclear physics, the whole lot are what bind us to the cosmos. We may not fully understand the initial moments of our universe, but we are not afraid of attempting an explanation. We cannot visit the era of recombination and the birth of the cosmic microwave background, but we can recreate it—in miniature—in our laboratories. The inflationary epoch is inaccessible to direct observation, but we can probe it with mathematics.
The universe across both time and space is hopelessly messy, in both a good way and a bad way. Bad because it's much harder to understand than we previously thought. But good because it's just as messy as our experiences here on Earth—which means we can perform experiments, test ideas, and form hypotheses to guide us. Sciencey stuff.
Is physics truly universal? Do the laws and relationships we reveal in this place at this time hold across the cosmos? I'll get to that question in a later chapter, but for now we can rest assured that it seems on all accounts to work. And the perfect starting place is the humble spectral line.
Max Planck wasn't directly working on the problem of spectral lines, but his simple yet pioneering work paved the road to quantum mechanics—which does explain spectral lines.
Max was working on the blackbody problem. Remember all that stuff about blackbodies? Of course you don't—time to reread the last chapter. The hotter the thing, the brighter and bluer it glows; the cooler the thing, the dimmer and redder it glows. What's the beef? The deal was that while all those relationships were sorted out experimentally in the late 1800s, nobody could explain them. You know, with physics and math.
One of the best models we could come up with, thanks to physicists Lord (John) Rayleigh and Sir (James) Jeans, was pretty straightforward: the atoms and molecules in a blackbody dumped some of their vibrational energy into radiation, which would then be emitted, making the body glow. But as far as their physics could tell, the transfer of vibrational energy to radiation was totally egalitarian: some energy would go to low-frequency radiation and some to high-frequency radiation.
Given the pedigree of the originators, it's a surprisingly communistic approach to physics: from each frequency according to its ability, to each frequency according to its need. While this approach works in a limited set of cases, it quickly broke down in the wonderfully named “ultraviolet catastrophe.” If all frequencies each get a little bit of energy, then any common household object ought to be emitting everything possible, including high-energy ultraviolet rays, X-rays, and even gamma rays!
This, uh, doesn't happen, which everybody realized but nobody could figure out why.
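You can watch the catastrophe unfold in a few lines of arithmetic. Here's a minimal sketch in Python (my own illustration) of the Rayleigh-Jeans formula, which hands every frequency an equal share of thermal energy:

```python
# A minimal sketch (my illustration) of the Rayleigh-Jeans prediction:
# every frequency gets an equal share of thermal energy, so the predicted
# radiance grows as the square of the frequency, without limit.

K_B = 1.381e-23  # Boltzmann's constant, joules per kelvin
C = 2.998e8      # speed of light, meters per second

def rayleigh_jeans(freq_hz: float, temp_k: float) -> float:
    """Spectral radiance (W/m^2/Hz/sr) under strict equipartition."""
    return 2 * freq_hz**2 * K_B * temp_k / C**2

# A room-temperature object (300 K), from radio waves up to ultraviolet:
for freq in (1e9, 1e12, 1e15):
    print(f"{freq:.0e} Hz -> {rayleigh_jeans(freq, 300.0):.2e}")
# The output climbs a million-fold per step. Integrate over all
# frequencies and you get infinite energy: the ultraviolet catastrophe.
```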
It took regular guy Max Planck to come up with a solution: you gotta pay to play in this game. In his attempts to coerce the mathematics to fit the observations—to provide a halfway decent explanation for the blackbody phenomenon—he introduced what he considered to be an ugly hack: quantization. If he assumed that radiation couldn't be emitted with any energy level it pleased—if radiation came in discrete packets—then his equations worked.1
Those packets are called quanta, which means, well, packets. Compare a glass of water to a bag of potato chips. I know that water is made up of zillions of tiny molecules, but at the human level, it's a continuous fluid: you can have any amount of water you want, from the teensiest drop to the gushiest geyser. But your potato chips are quantized. You can, if you're hungry enough, have a lot of potato chips. But you can't have less than one—a single potato chip is the quantum limit of the bag. And your choices for the number of chips are always whole numbers: one chip, two chips, twenty-seven chips (slow down there, fella), and so on.
Yes, I know in reality that you can break a chip in half, smarty-pants. But just roll with the analogy; it's the best I could come up with, probably because I'm hungry.
So Planck fudged the math to make radiation behave less like water and more like potato chips, and this solved the ultraviolet catastrophe. To make one “chip” of radiation (let's call it a “photon”), you need to expend a fixed amount of energy. For a given temperature, lower-frequency radiation is easy to make: each individual photon takes just a tiny amount of energy to manufacture, so you can spit out a lot of them.
But the high-frequency photons take a lot of energy just to make a single one, and if you only have half the required energy, or three-fourths, or 99.999999 percent, it's not gonna happen. You have to grab the whole chip or you don't get any chips at all. This explains why we're not awash in cancer-inducing radiation from a hot cup of coffee or cookies fresh from the oven: they don't have enough energy to produce the hard stuff.
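To put rough numbers on that (my own illustrative arithmetic, not the book's), compare the price of minting a single X-ray photon against the typical thermal energy a hot cup of coffee has on hand:

```python
import math

# Rough illustrative arithmetic (mine): the cost of one X-ray photon
# versus the typical thermal energy available in hot coffee.

H = 6.626e-34    # Planck's constant, joule-seconds
K_B = 1.381e-23  # Boltzmann's constant, joules per kelvin

coffee_temp_k = 360.0  # a fresh, hot cup of coffee
xray_freq_hz = 1e18    # a garden-variety X-ray frequency

photon_cost = H * xray_freq_hz        # energy to mint one whole photon
thermal_budget = K_B * coffee_temp_k  # typical thermal energy on hand

ratio = photon_cost / thermal_budget
print(f"one X-ray photon costs ~{ratio:,.0f} times the thermal budget")
# The chance of randomly scraping together that much energy scales like
# exp(-ratio) -- astronomically small:
print(f"suppression factor: ~10^-{ratio / math.log(10):,.0f}")
```

A suppression factor with tens of thousands of zeros after the decimal point is a polite way of saying "never."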
This may sound obvious now, after the world has had a hundred years to get used to the idea, but back then it was pretty radical stuff. Even Planck himself didn't really take it seriously: he was willing to try anything to get the mathematics to work, even this, but he considered it a stopgap measure until something better came along.
Nothing better ever did come along, but at least he ended up with a Nobel Prize for it.
That fundamental relationship between the frequency of a photon and its energy birthed a new constant of nature, one that told us about the ground-state potato-chippiness of reality: Planck's constant, which we first met way back in the earliest, sketchiest moments of the universe. It's just a simple number with no cool superhero origin story. Planck himself calculated the necessary ratio using all the known blackbody experimental results. It was a kludge, an ugly hack, a number tossed in to make the math work.
And just a few years after his initial preposterous proposition, Einstein continued the game by studying the so-called photoelectric effect, positing that it's not just the emission of radiation that's quantized (which is all you technically need to explain the blackbody effect) but its absorption and transmission as well.2 Radiation of all forms only comes in discrete little packets.
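Einstein's logic fits in a few lines. Here's a hedged sketch (my own illustration, using sodium's measured work function of about 2.3 electron-volts as the asking price for freeing an electron): a single photon either covers the cost or it doesn't, and no amount of extra brightness—extra photons—can make up one photon's shortfall.

```python
H = 6.626e-34   # Planck's constant, joule-seconds
EV = 1.602e-19  # joules per electron-volt

def photoelectron_energy_ev(freq_hz, work_function_ev):
    """Kinetic energy (eV) of an ejected electron, or None if a single
    photon can't cover the metal's asking price."""
    photon_ev = H * freq_hz / EV
    surplus = photon_ev - work_function_ev
    return surplus if surplus > 0 else None

# Sodium's work function is about 2.3 eV:
print(photoelectron_energy_ev(7.5e14, 2.3))  # violet light: ~0.8 eV electron
print(photoelectron_energy_ev(4.0e14, 2.3))  # red light: None, no matter
                                             # how bright the beam
```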
And then physicists went nuts. What if it wasn't just light that was quantized, but, like, everything? What if all energy was quantized? What if—bear with me here—our fundamental reality is just a bag of potato chips?
Like I've said before, in science you're free to say whatever crazy thing pops into your head, but if you want to play the physics game, you have to think through the consequences of that crazy idea and test those consequences against observations.
One consequence is the nature of the atom, a subject under considerable debate and study in the opening decades of the twentieth century. At the same time that astronomers were pushing the boundaries of the extent of the cosmos, physicists were trying to probe the tiniest structures known. Relatively quickly it was realized that an atom is composed of a small, dense, positively charged nucleus (a bundle of protons and neutrons), surrounded by a buzz of distinct negatively charged particles (the electrons).
The electrons were bound to the nucleus but could be knocked off if given a sufficient kick. Additional electrons could be added to an atom, which would change some of its chemical properties but otherwise leave it the same.
OK, fair enough, but the major question was how electrons arrange themselves in an atom. If you just consider them as little electrically charged balls whizzing around a nucleus like planets around the sun—which has inexplicably become the universal default symbol for “Science!”—it just doesn't work. Electrically charged balls whizzing around emit radiation, which saps energy, which should send them crashing into the nucleus. They don't, so they aren't.
The answer is potato chips. Electrons, bound to an atomic nucleus, don't get to have any sort of energy they want. No, there's a minimum energy level that they can settle into—their behavior around the nucleus is quantized. This prevents the electron from slamming into the nucleus. It simply can't, because the quantum nature of reality prevents it from having a fraction of its minimum energy. You can only have one chip, not half a chip, and an electron can only get so close to a nucleus, and no closer.
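Hydrogen makes the rule concrete. Here's a minimal sketch using the simple Bohr model (a slightly later refinement than the story above, but it captures the potato-chip rule): the allowed energies come in discrete rungs, with a hard floor at the bottom.

```python
# A minimal sketch of the Bohr model for hydrogen (my illustration):
# allowed energies are E_n = -13.6 eV / n^2, and there is simply
# no level below n = 1.

RYDBERG_EV = 13.6  # hydrogen's ground-state binding energy, electron-volts

def bohr_level_ev(n: int) -> float:
    """Energy (eV) of the nth allowed electron level in hydrogen."""
    if n < 1:
        raise ValueError("no such level: n = 1 is the quantum floor")
    return -RYDBERG_EV / n**2

for n in range(1, 5):
    print(f"n = {n}: {bohr_level_ev(n):+.2f} eV")
# n = 1, at -13.60 eV, is as close to the nucleus as the electron gets.
```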