Is Einstein Still Right?


by Clifford M. Will


  The event prompts an “all hands on deck” response by the thousand-member team to verify or refute the idea that this was a gravitational wave detection. Several independent computer analyses of the data reveal the same signal in each detector. One simple analysis applies a technique similar to that used in high-end headphones and hearing aids to cancel part of the background noise so that you can hear the music or dialogue you are interested in, even in a noisy airplane or restaurant. Known as a “band-pass filter,” it suppresses noise at frequencies above and below the band where the signal resides, between about thirty and a few hundred hertz, while leaving that critical band essentially untouched. What emerges when that simple filter is applied to the data from each detector is shown in the two panels of Figure 8.2.

  Figure 8.2 Data from the Hanford and Livingston detectors during the crucial 0.2 seconds, after being run through a band-pass filter, and after certain noise effects associated with well-understood vibrations in the instrument have been removed. Credit: Gravitational Wave Open Science Center.
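  If you want to see the idea in code, here is a minimal sketch of that kind of band-pass filtering, written in Python with the SciPy library. The strain array, the sampling rate and the band edges of 35 to 350 hertz are illustrative assumptions for this sketch, not the collaboration's actual data or settings.

```python
from scipy.signal import butter, filtfilt

def bandpass(strain, fs, low=35.0, high=350.0):
    """Suppress detector noise outside the band where the signal lives."""
    # 4th-order Butterworth band-pass between `low` and `high` hertz
    b, a = butter(4, [low, high], btype="bandpass", fs=fs)
    # filtfilt runs the filter forward and backward, so the output is not shifted in time
    return filtfilt(b, a, strain)

# e.g. filtered = bandpass(strain, fs=4096)   # `strain` is the raw detector output
```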

  Each panel shows a stretch of filtered output lasting about two tenths of a second, occurring at the same time in each detector. The clocks at the detectors are all set to Greenwich Mean Time in order to avoid any possible confusion with time zones or daylight saving time. From about 0.25 seconds to about 0.34 seconds the outputs look very spiky and do not resemble each other at all. This is the random, independent noise present in each detector. From 0.34 seconds to about 0.38 seconds we see three peaks and valleys that roughly match each other in each detector, but with some spiky noise superimposed. Those three peaks represent two complete cycles of the wave, over a time of 0.04 seconds, corresponding to about 50 cycles per second or 50 hertz. These are followed by four more peaks and valleys that are significantly higher than the first three, but also are more closely spaced than the previous peaks. Those four peaks mark three full cycles spanning only about 0.025 seconds, corresponding to a frequency of about 120 hertz. This implies that not only the strength of the signal but also its frequency is increasing with time. But the last of these four peaks in each detector is already lower than its predecessor. After this, the output looks once again like independent random noise in each detector.
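  The same back-of-the-envelope frequency estimate can be automated: locate the peaks in the filtered trace and take the reciprocal of the time between neighboring peaks. The sketch below assumes a filtered time series and its sampling rate are already in hand; the function and variable names are illustrative, not part of the LIGO analysis code.

```python
import numpy as np
from scipy.signal import find_peaks

def peak_frequencies(filtered, fs):
    """Estimate the wave frequency from the spacing of successive peaks."""
    peaks, _ = find_peaks(filtered)    # indices of the local maxima
    periods = np.diff(peaks) / fs      # seconds between neighboring peaks
    return 1.0 / periods               # one full cycle per peak-to-peak gap

# By hand: two cycles in 0.04 s gives 2 / 0.04 = 50 Hz, and three cycles in
# 0.025 s gives 3 / 0.025 = 120 Hz, the numbers quoted in the text.
```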

  Even if you had no idea what this signal represented, you would be tempted to think that this was a candidate gravitational wave signal. First, the signal is almost exactly the same in the two detectors. To be sure, it is conceivable that some event, such as a tree falling near the Livingston detector (the surrounding forest there happens to undergo active logging) could, by some fluke, produce exactly the right vibrations of the ground to produce the signal seen in the bottom panel of the figure. But those ground vibrations in Louisiana could not possibly affect the Hanford detector, 3,000 kilometers away in Washington State. And the chance of some unrelated event at Hanford (which is surrounded by almost treeless high desert) producing exactly the same response at exactly the same time is astronomically small (we will quote a number later). This is one of the positive legacies of Joseph Weber’s failed attempt to detect gravitational waves, the principle that for a claimed detection of gravitational waves to be credible, the same signal must be sensed in independent, widely separated detectors. The fact that the two signals are not exactly the same reflects the everyday fact that two people listening to a third person in a very noisy room might not hear exactly the same thing, but will still get the gist of what is being said.

  Another feature of the two signals is important. While the peaks and valleys in the two panels seem to line up in time, the features in the Hanford detector are consistently about 7 milliseconds (0.007 seconds) later than those in the Livingston detector (the difference is too small to show up on the figures, but it is easily measured from the data). Now, if the gravitational waves were propagating from somewhere in the sky exactly perpendicular to the line joining the Livingston and Hanford detectors (the baseline), they would arrive at the two detectors at exactly the same time (see Figure 8.3). If they were traveling exactly parallel to the baseline, then they would arrive at one detector 10 milliseconds before the other. This is the time it would take a signal traveling at the speed of light to traverse the 3,000 kilometers between the two detectors. The actual time difference of 7 milliseconds was comfortably between these two limits, indicating that the signal actually arrived from a direction about 45 degrees from the baseline (right panel of Figure 8.3). On the other hand, if the time difference had been greater than 10 milliseconds, this would not have been accepted as a candidate gravitational wave.

  Figure 8.3 Left: Waves approaching the Hanford and Livingston detectors from any direction perpendicular to the line joining them will reach the detectors at the same time. Middle: Waves approaching parallel to the line joining the detectors will reach one detector 10 milliseconds before the other, because of the 3,000 kilometers separating them. Right: Waves approaching from a direction approximately 45 degrees relative to the baseline will reach the Livingston detector about 7 milliseconds before the Hanford detector. The measured time differences give important information about the location of the source on the sky.
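  The 45-degree figure follows from simple trigonometry: the wavefront has to travel an extra distance equal to the speed of light times the delay in order to reach the second detector, and dividing that extra distance by the baseline gives the cosine of the angle between the source direction and the baseline. Here is a quick numerical check using the approximate numbers quoted above; it is a sketch of the geometry, not the collaboration's sky-localization code.

```python
import math

c = 299_792.458   # speed of light, km/s
d = 3_000.0       # approximate Hanford-Livingston baseline, km
dt = 0.007        # measured arrival-time difference, s

# cos(theta) = (extra path length) / (baseline) = c * dt / d
theta = math.degrees(math.acos(c * dt / d))
print(round(theta))   # about 46 degrees, i.e. roughly 45 degrees from the baseline
```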

  The researchers at the LIGO scientific collaboration actually had a very good idea of what the signals in Figure 8.2 represented: the “chirp” signal from the final inward spiral and merger of two bodies such as black holes or neutron stars. We will describe the history and physics of this idea later in this chapter, but the bottom line is this: as the two bodies orbit each other they emit gravitational waves, thus losing energy, getting closer to each other and orbiting faster (recall the binary pulsars from Chapter 5). This “inspiral” phase leads to waves of increasing strength or amplitude and increasing frequency, as shown in Figure 8.4. This part of the signal is called a chirp because of the similarity between a sound with these characteristics and the songs of some birds. The two bodies then merge, forming a black hole, a process that leads to a brief burst of strong waves (the “merger” part of Figure 8.4). The newly formed black hole is very distorted and it oscillates or “rings” a few times, emitting “ringdown” waves and quickly settling down to a stationary black hole that ceases to emit gravitational waves. The wave shown in Figure 8.4, calculated using an approximate solution from general relativity, displays all the features shown in the two panels of Figure 8.2.

  Figure 8.4 A chirp signal from two black holes calculated using general relativity, showing the inspiral part, when the two black holes are orbiting each other with increasing speed, the merger part, when the two holes merge to form one very distorted black hole, and the ringdown part, when the distorted black hole emits gravitational waves and settles down to a final stationary black hole.
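  A toy version of such a chirp is easy to generate: let the frequency of a sine wave climb as a fictitious merger time approaches, following the leading-order inspiral scaling, and let the amplitude grow along with it. The snippet below only illustrates the “louder and faster” behavior of the inspiral; it is not the detailed general relativity waveform the collaboration used, and all numbers in it are made up for illustration.

```python
import numpy as np

fs = 4096.0                            # samples per second
t_c = 0.20                             # fictitious merger time, seconds
f_0 = 35.0                             # starting frequency, hertz
t = np.arange(0.0, 0.19, 1.0 / fs)     # stop just short of the merger

# Leading-order inspiral scaling: frequency grows as (t_c - t)**(-3/8),
# and the wave amplitude grows roughly as frequency**(2/3).
f = f_0 * (t_c / (t_c - t)) ** (3.0 / 8.0)
phase = 2.0 * np.pi * np.cumsum(f) / fs         # crude integral of frequency over time
h = (f / f_0) ** (2.0 / 3.0) * np.sin(phase)    # toy strain: louder and faster with time
```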

  One can imagine the level of euphoria that occurred within the collaboration. But with this euphoria came an accompanying sense of paranoia. What if somebody maliciously inserted an artificial signal just to fool us? Surely this is unlikely. Hackers are not typically interested in astronomical data sets, and it would be unfathomable for a collaboration member to be so malicious.

  More worrisome was the possibility that some bizarre noise artifact, erroneous instrumental setting or faulty line of computer code was making them believe they had detected a gravitational wave, when in reality they had not. It would not be the first time in the history of physics that an erroneous claim had been made. After all, physics is done by people, and people make mistakes. The important fact about physics is that it is self-correcting, and errors are eventually fixed and the record is set straight. But nobody wants to be known for the mistake rather than the discovery.

  Luckily, examples of such errors are rare in physics. But when they occur, they generally make headlines and cause much embarrassment.

  In 1989, electrochemists Stanley Pons and Martin Fleischmann announced they had observed “cold fusion” in their lab. Nuclear fusion occurs regularly inside the Sun, converting hydrogen into helium, releasing energy sufficient to warm and illuminate the Earth (the same process occurs in thermonuclear bombs). But this process requires extremely high temperatures. Achieving fusion at room temperature would have been revolutionary, as it would provide an effectively limitless energy source. Many scientists tried immediately to replicate their experiment but most failed, and in time the flaws in the original experiment that had led Pons and Fleischmann to the wrong conclusion were identified. Apart from the effect on the careers of the two scientists, the episode was a major embarrassment for the University of Utah, which had exploited the discovery for maximum publicity.

  In 2011, an experiment with the acronym OPERA made a dramatic announcement. The instrument was designed to study subatomic particles called neutrinos emitted in the CERN accelerator in Geneva, Switzerland and directed toward detectors 730 kilometers away, inside the Gran Sasso mountain in Italy. In September of that year, the OPERA collaboration announced they had measured an anomaly that might be a sign of neutrinos traveling faster than the speed of light. Such a discovery, if correct, would have been revolutionary, as it would have contradicted Einstein’s special theory of relativity, and consequently, his general theory. But a few months later, the OPERA team reported two flaws in their equipment: one related to a fiber optic cable that was not connected properly and another related to a clock that ran fast. These flaws, they concluded, were responsible for the anomaly. After correcting the problems, they found that neutrinos indeed travel at the speed of light, up to measurement uncertainty. But in the end, OPERA is remembered more for the mistake than for the final valid result.

  And finally, there is the internal history of gravitational wave science. As we described in Chapter 7, an announcement of the discovery of gravitational waves had already been made in the late 1960s by Weber. Immediately after this announcement experimentalists set out to replicate Weber’s results, but they failed. Eventually, a consensus arose that Weber’s result had to be wrong, so when the LIGO detectors were built, the collaboration wanted to be particularly careful to not make the same mistake again. Many checks and counter-checks were established and tested to ensure that a detection was real prior to any announcement of a discovery.

  This system of checks was so rigorous that it led to what is now known as the infamous “Big Dog” event. On 16 September 2010, an initial, less sensitive version of the LIGO detectors was in science mode, collecting data in the (admittedly unlikely) event that a sufficiently loud gravitational wave would pass through the Earth. And on that day, the alarms went off. A candidate event was identified, and it seemed to be coming from somewhere near the direction of Sirius, the Dog Star. In a fit of creativity, the event was named the “Big Dog.” Eight minutes after the event was detected, roughly twenty-five people in the collaboration were notified to follow up on it and see whether it was worthy of further study. These twenty-five people concluded that this was the case, and the collaboration sent a circular to a group of collaborating astronomers to tell them that a candidate event had been detected, while they continued to analyze the data.

  Everybody involved was sworn to secrecy because the Big Dog could have been a false alarm, either from a rare simultaneous disturbance at both detectors, or from a “blind injection.” A blind injection is an internal test carried out routinely by the collaboration in which a tiny group of pre-selected technical people in the collaboration inject a fake signal in the data without telling anybody else, except an even tinier group of pre-selected VIPs in the collaboration. The purpose of this test is to see if the automated data analysis tools they have created can catch the blind injection and if the collaboration can identify it properly. For several months, the collaboration carried out all the tests and checks on the Big Dog, verified that the signal represented a gravitational wave, and wrote a draft of the discovery paper. On 14 March 2011, the “envelope was opened” and (drum roll) the LIGO leadership announced to the team that the Big Dog had been a blind injection after all. The good news was that the collaboration had caught it and so the data analysis tools were working as expected. Well, not exactly: some of the inferred parameters, like the location of the source in the sky, were not the same as those of the injected signal. This led to the discovery and correction of a line of computer code with a wrong sign. The bad news was that they hadn’t detected a real signal.

  You might now understand why, in September 2015, when the data analysis tools signaled that a candidate event had just been detected, the collaboration was extremely cautious and secretive. Not satisfied with the simple filter that revealed the signals shown in Figure 8.2, they analyzed the data using sophisticated computer code and managed to extract a beautiful chirp using a sum of short waves of fixed frequency. This was an important test because it was agnostic to the true theory of gravity, using almost no information about Einstein’s theory. Simultaneously, the collaboration also compared the data to an array of detailed general relativity predictions of the gravitational wave signal emitted by merging black holes, finding agreement with their initial conclusions based on cruder analyses. Those comparisons also allowed them to measure such quantities as the masses of the two black holes, and to test Einstein’s theory. We will describe what was learned in a moment.
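  The comparison with predicted waveforms rests on a simple idea, usually called matched filtering: slide a template waveform across the data and record how strongly the two line up at each possible arrival time. The bare-bones sketch below conveys the idea only; the real LIGO pipelines work in the frequency domain and weight everything by the measured noise spectrum of each detector.

```python
import numpy as np

def match(data, template):
    """Correlate the data against a normalized template at every time offset."""
    template = template / np.sqrt(np.sum(template ** 2))   # normalize the template
    # A large peak in the result means the data contain something shaped like the template.
    return np.correlate(data, template, mode="valid")

# Templates computed for different black hole masses can then be compared by
# which one produces the largest peak, giving estimates of the source parameters.
```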

  The collaboration also calculated the probability that this was a fluke, a pair of unrelated random events, one at each detector, that happened to jiggle the mirrors just so. After ten million computer simulations, they found that such an accident would happen less often than once every 200,000 years. So this was not a fluke, and it was not an injection of any kind. In fact, right after the detection, LIGO management confirmed that there was no “Big Dog”-style blind injection. This event was the real thing.

  This extraordinary degree of caution, secrecy and obsessively detailed analysis explains why it took five months from the initial “ping” that caught Marco Drago’s eye to David Reitze’s announcement at the National Press Club in February 2016.

  As we have said, the first detection was of waves from the final few inspiraling orbits and the merger of two black holes. This seems like a ridiculously special, once-in-a-lifetime event. Although we knew that binary neutron stars exist (see Chapter 5), there was no observational evidence that binary black holes exist. Surely a much more plausible possibility for the first detection would have been a supernova, examples of which had been observed by humans for millennia. These were the sources that Joe Weber was after when he built his resonant bar detectors.

  But in fact, theorists had been thinking about black hole or neutron star inspirals and mergers for some time, and by the time the LIGO detectors were being considered by the NSF for major funding, gravitational waves from inspirals and mergers already formed the centerpiece of the science case that LIGO advocates were making.

  Strangely enough, the idea was first proposed in 1963 by physicist Freeman Dyson while he was studying how advanced extraterrestrial civilizations could sustain their energy needs. Born in England in 1923, Dyson moved to the US in 1947 to study for a Ph.D. at Cornell (although he never actually received the degree). Over his seventy-year career (he is a professor emeritus at the Institute for Advanced Study in Princeton) he made important contributions to an eclectic array of scientific subjects, including pure mathematics, quantum field theory (in 1947 he proved that the seemingly discordant theories for quantum electrodynamics that had been devised by Richard Feynman and by Julian Schwinger were actually different versions of the same theory, today called QED), biology and space exploration, as well as topics in the public interest, such as nuclear warfare and climate change. In 1955 he met Joe Weber during Weber’s sabbatical with John Wheeler, and became intrigued with the idea of detecting gravitational radiation.

  However, in 1963 he was interested in whether there was a better source of energy to sustain an advanced extraterrestrial civilization than the light and heat from its host star. In an article entitled “Gravitational Machines” he imagined a civilization stationing its home planet or base station not too far from a binary star system. If the civilization sends a probe toward one of the stars, allowing it to make a close flyby of the star at a time when the star is approaching the base station, then the probe would return to the base station with more kinetic energy than it had when it departed. That energy could then be extracted and used to sustain the civilization. The effect he was employing in his model is called the “gravitational slingshot,” well known to planetary scientists as a way to boost the speed of spacecraft to higher levels than they could ever acquire from rockets; it is routinely exploited to get spacecraft to Jupiter and Saturn and beyond, for example. The problem with Dyson’s idea is that binary stars typically move so slowly in their orbits that the energy obtained from the slingshot effect is trivial compared to the conventional energy from the light of the stars. A binary system of white dwarfs would be better: being a hundred to a thousand times smaller than a solar-type star, white dwarfs can orbit each other much more closely and thus achieve much higher velocities. This would be more promising for the civilization, particularly since white dwarfs are too dim to provide sufficient light and heat.
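  A rough way to see why the slingshot pays off, sketched below with made-up numbers: in the frame of the star the flyby simply turns the probe around, so back in the base-station frame the probe departs with its incoming speed plus roughly twice the star's orbital speed. The faster the stars orbit, the bigger the payoff, which is why Dyson's argument leads toward ever more compact binaries.

```python
# Idealized head-on slingshot with illustrative numbers (not Dyson's actual figures).
v_probe = 50.0   # incoming probe speed relative to the base station, km/s (assumed)
v_star = 30.0    # orbital speed of the star the probe swings around, km/s (assumed)

# In the star's frame the probe leaves with the same speed it arrived with;
# transforming back adds twice the star's speed, so the kinetic energy gained
# ultimately comes out of the binary's orbital energy.
v_out = v_probe + 2.0 * v_star
print(v_out)     # 110 km/s
```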

  But then, Dyson reasoned, even better would be a binary system of neutron stars. These bodies are so small compared to their masses—roughly 20 kilometers in diameter—that they can orbit each other in very close proximity and with speeds that are a significant fraction of the speed of light, and thus the energy available to the civilization on each slingshot is even larger. This was quite a radical idea in 1963, since, as we saw in Chapter 5, neutron stars were at the time little more than a figment of Baade’s and Zwicky’s imaginations, and the first neutron star, in the form of a pulsar, would not be detected until four years later. Of course, there was also no evidence of extra-solar planets at the time, let alone other civilizations. In any event, Dyson immediately realized that this idea would not work. Such a close neutron star binary would emit copious amounts of gravitational radiation, and would then inspiral and merge so quickly that the civilization would soon lose its source of energy. On the other hand, he noted, the gravitational wave signal itself might be of interest, prophesying that “[it] would seem worthwhile to maintain a watch for events of this kind, using Weber’s equipment or some suitable modification of it.”

 
