by Sean Carroll
One thing working against the physicists was their natural inclination to be both precise and honest, often to the detriment of getting their point across. The fears that the LHC could destroy the world were based in part on respectable, if speculative, physical theories. If gravity is much stronger than usual at the high energies of an LHC particle collision, for example, it’s possible to make tiny black holes. Everything we know about physics predicts that such a black hole will evaporate harmlessly away. But it’s possible that everything we know is wrong. So maybe black holes are formed and are stable, and the LHC will produce them, and they will settle into the earth’s core and gradually eat at it from the inside, leading to a collapse of the planet over the course of time. You can calculate how much time it would actually take, and the answer turns out to be much longer than the current age of the universe. Of course, your calculations could be incorrect. But in that case, collisions of high-energy cosmic rays should be producing tiny black holes all over the universe. (The LHC isn’t doing anything the universe doesn’t do at much higher energies all the time.) And those black holes should eat up white dwarfs and neutron stars, but we see plenty of white dwarfs and neutron stars in the sky, so that can’t be quite right either.
You get the point. There are many variations on the theme, but the general pattern is universal: We can come up with very speculative scenarios that seem dangerous, but upon closer inspection the most dangerous possibilities are already ruled out by other considerations. But because scientists like to be precise and consider many different possibilities, they tend to dwell lovingly on all the scary-sounding scenarios before reassuring us that they are all quite unlikely. Every time they should have said, “No!” they tended to say, “Probably not, the chance is really very small,” which doesn’t have the same impact. (A shining counterexample is CERN theorist John Ellis, who was asked by The Daily Show what chance there was that the LHC would destroy the earth, and simply replied, “Zero.”)
Imagine opening your refrigerator and reaching for a jar of tomato sauce, planning to make pasta for tonight’s dinner. An alarmist friend grabs you before you can open the lid, saying, “Wait! Are you sure that opening that jar won’t release a mutant pathogen that will quickly spread and wipe out all life on earth?” The truth is, you can’t be sure, with precisely 100 percent certainty. There are all sorts of preposterously small probability disaster scenarios that we ignore in our everyday lives. It’s conceivable that turning on the LHC will start a chain of events that destroys the earth, but many things are conceivable; what matters is whether they are reasonable, and in this case none of them was.
Fighting against the doomsayers turned out to be good practice for the physics community. The level of public scrutiny given to the search for the Higgs boson is unprecedented. Scientists, who are at their best when discussing abstract and highly technical ideas with other scientists, have had to learn to craft a clear and compelling message for the outside world. In the long run, that can only be good news for science.
Making the sausage
One of the biggest misconceptions many people have about giant particle physics experiments concerns the journey from taking data to announcing a result. It’s not an easy one. In science, the traditional way that results are communicated and made official is through papers published in peer-reviewed journals. That’s certainly true for ATLAS and CMS, but the complexity of the experiments guarantees that essentially the only competent referees are the collaboration members themselves. To deal with this state of affairs, each experiment has set up an extremely rigorous and demanding procedure that must be carried out before new results can be shared with the public.
The thousands of collaborators on the LHC experiments are mostly not employed by CERN. A typical working physicist will be a student, professor, or postdoc (a research position in between the PhD and a faculty job) at a university or laboratory somewhere in the world, although they may spend a substantial portion of their year in Geneva. Most often, the first step toward a publishable paper is that one of these physicists asks a question. It might be a perfectly obvious question: “Is there a Higgs boson?” Or it could be something more speculative: “Is electric charge really conserved?” “Are there more than three generations of fermions?” “Do high-energy particle collisions create miniature black holes?” “Are there extra dimensions of space?” Questions may be inspired by a new theoretical proposal, or an unexplained feature of some existing data, or simply by the new capabilities of the machine itself. Experimentalists are generally down-to-earth people, at least in their capacity as working scientists, so they tend to ask questions that can be addressed by the flood of data the LHC provides.
The idea-bearing physicists might chat with some of their friends and colleagues to judge whether the question is worth pursuing. If they are students, they may consult with an adviser, usually a professor at their home university; if they are professors, they may hand off the idea to a student to work on. An idea that seems promising is then brought to one of the “working groups” each experiment has. The different working groups are devoted to various areas of interest: “top quarks” or “Higgs” or “exotics.” (Exotics would include particles predicted by some of the speculative theories out there, or not predicted by anybody at all.) The working groups mull over the idea, after which the “convener” who leads the group makes a decision about whether it’s worth moving forward with the analysis of this particular question. The experimentalists keep detailed Web pages that list each ongoing analysis, to help prevent duplication of effort—that’s the reason the World Wide Web was invented.
Assuming an idea is given the nod by the relevant working group, the analysis moves forward. The physicist’s life now alternates between working at a computer and participating in meetings, usually via videoconference. Doing an analysis is not by any means the only duty of an experimentalist; there is also hardware work, taking “shifts” overseeing the experiment as it’s running, teaching (or taking) classes, giving talks, applying for grant money, and of course serving on committees and the thousand other bits of academic nonsense that are an inescapable part of university life. Occasionally the experimentalists are allowed to visit with their families or see the sun, but such frivolities are kept to a minimum.
At this point the data have been collected and safely stored on disk drives around the world; the job of an analyst is to turn that data into a useful physics result. It’s rarely a matter of turning a crank. There are “cuts” to be made, throwing away some data that is noisy or irrelevant to the question being asked. (Maybe you want to look at events that feature two jets, but only with total energies greater than 40 GeV, and with an angle between them of at least 30 degrees.) Very often it is necessary to write specialized software to help tackle the specific problem under consideration. Data isn’t very useful unless it can be compared with some theoretical expectation, so other pieces of software are used to calculate the predictions for what the data should look like according to different models. Even after cuts are applied to the data, it remains necessary to estimate the background noise that threatens to drown out your precious signal, which involves a give-and-take between calculations and other measurements. Throughout the process, regular updates are provided to the working group in charge, both in the form of written documentation and videoconference presentations.
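To make this concrete, here is a minimal sketch of what an event-selection cut might look like in code, using the hypothetical two-jet example above. The event structure, the field names, and the use of a simple azimuthal opening angle are illustrative assumptions, not the actual ATLAS or CMS analysis software:

```python
import math

def passes_cuts(event):
    """Keep events with exactly two jets, total energy above 40 GeV,
    and an opening angle of at least 30 degrees (the thresholds from
    the hypothetical example in the text)."""
    jets = event["jets"]
    if len(jets) != 2:
        return False
    if jets[0]["energy"] + jets[1]["energy"] <= 40.0:  # energies in GeV
        return False
    # Simplified opening angle from the azimuthal coordinates alone;
    # a real analysis would use the full three-dimensional geometry.
    angle = math.degrees(abs(jets[0]["phi"] - jets[1]["phi"]))
    return angle >= 30.0

# Toy usage: filter a handful of made-up event records.
events = [
    {"jets": [{"energy": 25.0, "phi": 0.1}, {"energy": 30.0, "phi": 1.2}]},
    {"jets": [{"energy": 10.0, "phi": 0.0}, {"energy": 12.0, "phi": 2.0}]},
]
selected = [e for e in events if passes_cuts(e)]
print(f"{len(selected)} of {len(events)} events survive the cuts")
```

In a real analysis the hard part is not applying the cuts but choosing them, since each cut trades signal efficiency against background rejection.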
Eventually one obtains a result. The next task is to convince the rest of the collaboration that your result is right—and nothing pleases a mob of cranky physicists like showing that someone else’s analysis is wrong. Every project must first go through “preapproval” by the working group before eventually being approved by the collaboration as a whole. There is a committee whose sole job is to check that you’ve done your statistics correctly. The eventual goal is to publish a paper in a refereed journal, but the written paper must first be circulated throughout the collaboration, before ultimately being “blessed” by the publications committee. Only then can it be sent to a journal.
Nonscientists would be forgiven if they assumed that the author of a paper had actually written the paper. Of course, the person who writes the paper is an author, but everyone who contributes in an important way to the work being described is included on the list of authors. In experimental particle physics, the tradition is that every member of the collaboration is recognized as contributing to every paper produced by the experiment. You read that correctly: Every paper that comes out of CMS or ATLAS has more than three thousand authors. What’s more, the authors are listed in alphabetical order, so that to an outsider it’s completely impossible to determine who did the analysis or actually wrote the words in the paper. It’s not an uncontroversial system, but it helps bring the collaboration together to stand behind every result they publish.
Generally, only after a paper is ready are the results of the analysis made public and the physicists involved permitted to give talks on the subject. The search for the Higgs boson is a special case, of course; everyone has known for years that this was a major goal for both experiments, and much of the preliminary groundwork was laid well ahead of time, allowing for the most rapid possible route from data to announcement. Still, until the experiments have verified that the data have been analyzed correctly, every effort is made to keep those results quiet.
I asked one physicist whether the results that ATLAS was getting were generally known within CMS, and vice versa. “Are you kidding?” I was told with a laugh. “Half of ATLAS is sleeping with half of CMS. Of course they know!” Superhuman levels of dedication to their craft notwithstanding, physicists are people too.
There are errors, and there are errors
The December updates on the Higgs search by Fabiola Gianotti and Guido Tonelli weren’t the only seminars at CERN to garner public attention in 2011. In September of that year, Italian physicist Dario Autiero announced a result that ended up being more infamous than famous: neutrinos that appeared to be moving faster than the speed of light. The finding came from the OPERA experiment, which tracked neutrinos that were produced at CERN and traveled 450 miles underground to a detector in Italy. Because neutrinos interact so weakly, they can pass through many miles of solid rock with very little loss of intensity, making this kind of arrangement a uniquely effective window onto their properties.
The problem was obvious: Nothing is supposed to travel faster than light. Einstein figured that out, and it’s one of the bedrock principles of modern physics. There are many good arguments in favor of this principle that had previously been verified in countless precision experiments. If it were to be overturned, it would be the most important finding in physics since the advent of quantum mechanics. We wouldn’t have to start over completely from scratch, but new laws of nature would clearly be required. One worrisome consequence was that if you can go faster than light, you might also be able to travel backward in time, which instantly inspired a new genre of jokes. “The bartender says, ‘We don’t serve leptons here.’ A neutrino walks into a bar.”
Most physicists were immediately skeptical. On Cosmic Variance I wrote: “The things you need to know about this result are 1. It’s enormously interesting if it’s right. 2. It’s probably not right.” Even the OPERA collaboration members themselves seemed dubious of the implications of their findings, asking the physics community to help them understand why it might be incorrect. Of course, even the most confidently held theoretical belief must give way to an unimpeachable experimental result. The question was, how reliable was the result?
The OPERA finding was extremely statistically significant. The discrepancy between theory and observation was bigger than six sigma, more than strong enough to declare a discovery. Yet there were skeptics. And those skeptics were right. In March 2012, a different experiment, called ICARUS, attempted to replicate the OPERA findings but ended up with a very different result: that the neutrinos were completely consistent with the light-speed barrier.
Was this one of those cases where we just got preposterously (un)lucky, with a bizarre series of unlikely events conspiring to lead us astray? Not at all. The OPERA collaboration eventually pinpointed an important source of error in their original analysis, namely a loose cable that connected their master clock to a GPS receiver. The faulty cable led to a delay in the timing as measured by their detector, more than enough to account for the original anomaly. Once that was fixed, the effect went away.
The crucial lesson here is that sigmas aren’t always enough. Statistics can help you decide how likely it is that your data are consistent with the null hypothesis, but only if your data are reliable in the first place. Scientists speak of “statistical errors” (because you don’t have that much data, or there is intrinsic but random uncertainty in your measurements) and also “systematic errors” (due to some unknown effect that shifts the data uniformly in some direction). Just because you get a result that is statistically significant doesn’t mean that it’s true. This is a lesson taken very seriously by the physicists searching for the Higgs boson at the LHC.
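To put numbers on those sigmas: under the usual Gaussian null hypothesis, and the one-sided convention particle physicists favor, a sketch of the conversion from significance to probability might look like the following. The point of the OPERA story is that no such calculation can catch a loose cable:

```python
from scipy.stats import norm

# One-sided p-value for an n-sigma excess, assuming the errors are
# purely statistical and Gaussian-distributed.
for n_sigma in (2.5, 5.0, 6.0):
    p = norm.sf(n_sigma)  # survival function: P(Z > n_sigma)
    print(f"{n_sigma:.1f} sigma -> p = {p:.2e}")
```

A six-sigma result corresponds to roughly a one-in-a-billion chance that a pure statistical fluctuation is to blame; a systematic error, like a faulty cable, can produce it effortlessly.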
Another issue is murkier: Were the OPERA physicists right to release their results to the world, and even to call a press conference at CERN about them? Arguments on either side have flown back and forth since the first announcement was made. On the one hand, the leaders of OPERA knew perfectly well that what they were claiming was astonishing, and they took the position that it was better to spread the news widely so that other scientists could help figure out whether something could have gone wrong. On the other, many people felt that the public image of science was hurt by the incident, first by raising the possibility that Einstein could have been wrong, and then by admitting it was just a mistake. It could be a moot point; in an interconnected world where news travels rapidly, it may no longer be possible for large collaborations to keep surprising findings secret for very long.
Web 2.0
Tommaso Dorigo, a physicist on the CMS experiment and blogger at A Quantum Diaries Survivor, made a bold prediction in a 2009 talk to the World Conference of Science Journalists: The first time the outside world would hear about the final discovery of the Higgs boson, it would be through an anonymous comment left on a blog. In the end he wasn’t exactly right, but close.
Prior to the Higgs boson, the last elementary particle in the Standard Model to be discovered was the top quark, pinned down by the Tevatron at Fermilab in 1995. That was about the same time that blogs were first coming into existence; the word “weblog” wasn’t coined until 1997. There was no such thing as Facebook or Twitter; even MySpace, now long since condemned as hopelessly outdated, didn’t start until 2003. The physicists working at the Tevatron might share some juicy gossip with other physicists, but there was not a lot of danger that a big discovery would go public ahead of time.
Things have changed. With the ease of communication on the Internet, anyone can spread news widely, and the ATLAS and CMS collaborations each have more than three thousand members. No matter how the leaders try to keep things under control, the chance that absolutely every one of them keeps knowledge of a major result to themselves is very slim indeed.
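A back-of-the-envelope calculation shows how slim. Purely for illustration, suppose each collaborator independently keeps the secret with probability 99.9 percent (a made-up figure, not a measurement):

```python
# Chance that all 3,000 members of a collaboration keep quiet, if each
# one independently stays silent with probability 99.9 percent.
members = 3000
p_quiet = 0.999
print(f"Probability of no leak: {p_quiet ** members:.1%}")  # roughly 5%
```

Even with collaborators that discreet, the odds favor a leak about twenty to one.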
I will confess to being an enthusiastic proponent of blogs, although I try not to spread rumors that people don’t want spread. I started blogging back in 2004 at a personal site called Preposterous Universe, and in 2005 switched to the group blog Cosmic Variance, which is now hosted by Discover magazine. The great thing about blogs is that they can be used for whatever purpose the author chooses. A wide variety of authors take full advantage of this freedom; just within the tiny subculture of blogs run by scientists and science writers, the examples range from the chatty and informal to the rigorous and mathematical, with everything from hard news to satire and gossip in between. Our goal at Cosmic Variance is to share interesting ideas and discoveries in science with a wide variety of readers, while allowing ourselves to muse and pontificate over whatever stirs our fancy. Some of our most popular posts have focused on the LHC, including a group effort live-blogging the startup in 2008 and the Higgs seminars in 2012.
One of my co-bloggers is John Conway, who is a professor of physics at the University of California, Davis, and an experimental physicist working on CMS. (JoAnne Hewett is another.) Conway’s very first post, entitled “Bump Hunting,” offered an insightful view of what it is like to be a working particle physicist. Sometimes the data can surprise you, and it’s not always easy to tell whether you’ve stumbled on a world-changing discovery or are merely the victim of a statistical fluctuation.
Conway related the story of searching for the Higgs boson in Fermilab data (the LHC wasn’t online yet) using his personal favorite channels, ones where a tau lepton is produced. They were doing a blind analysis of data from the CDF experiment at the Tevatron and finally got to the point where they were ready to open the box and see what was there. And the answer was . . . something was there! A small but unmistakable bump in the rate of producing two taus, the kind of thing you might expect from a Higgs boson with a mass of about 160 GeV. Only 2.5 sigma, but worth looking into. Most small bumps go away, but every real discovery begins with a small bump, so any breathing person in this situation would naturally get very excited. “The hair literally rose up on the back of my neck,” he recalled.
In a follow-up post, Conway talked about the subsequent analysis, and revealed what he only learned later: Their sister experiment at Fermilab, known as “D Zero,” actually saw a deficit where CDF was seeing an excess of events. That made it much less likely they were discovering a new particle. Further data didn’t support the possibility that there was a new particle lurking there. But this story was a fantastic example of the emotional roller-coaster ride that is an inevitable part of life as an experimental scientist.
Sadly, not everyone interpreted it as such. An unfortunate number of readers got the impression that Fermilab had actually discovered the Higgs boson or something like it, and Conway had decided to spread the news by posting on our humble blog rather than writing a scientific paper and perhaps holding a press conference. This misimpression wasn’t limited to overly enthusiastic commenters at our site; several journalists picked up on the prospect, which led to stories in The Economist and New Scientist and elsewhere. It was another cautionary lesson for physicists. People are extremely eager to hear anything and everything about the quest for the Higgs boson; great care is required to make sure that excitement is properly conveyed without giving people the impression that we’ve discovered more than we have.