The system they focused on was originally known as KOI-377; by the time the paper was published in Science, it had been rechristened Kepler-9, signifying that it was no longer just a star with planet candidates, but a star with actual planets. (A word about the Kepler numbering system: The first five stars where Kepler discovered planets, announced months earlier at the AAS meeting in Washington, were called Kepler-4, -5, -6, -7, and -8. There was no Kepler-1, -2, or -3, however, since Kepler numbers go only to stars whose planets were first discovered by the Kepler Mission itself. The first three stars where Kepler spotted transiting planets had already been discovered by ground-based telescopes. Borucki and his team had decided to ease into the project with three detections that should be absurdly easy. If Kepler couldn’t see planets everyone already knew were there, it would have been a very bad sign.)
In the case of Kepler-9, there were two transiting planets, one with a nineteen-day orbit and the other with a thirty-nine-day orbit. But the timing of those transits wasn’t like clockwork: They varied by four minutes and thirty-nine minutes, respectively. Holman was the lead author on the Science paper that announced the new result, but he had more than three dozen coauthors. They were all listed by name, but they might as well have been identified as the Exoplaneteer All-Stars, since they included Bill Borucki, Natalie Batalha, Dave Charbonneau, Eric Ford, Geoff Marcy, Dave Latham, Debra Fischer, and others.
This discovery was a very big deal. Not only was it conceptually elegant, but it also allowed scientists to calculate, using the laws of gravity, exactly how massive each of the planets in the system was. Until now, the only way to get a clue about a transiting planet’s mass and density was to use the radial-velocity-wobble technique to see how hard it tugged on its star. For most of the stars in the Kepler catalog, this wasn’t possible; the stars were too faint. Transit-timing variations were a second way into the problem, and the brightness of the star was irrelevant. You did need more than one transiting planet, obviously.
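(For readers who like to see the idea in concrete form: the essence of a transit-timing measurement is that you fit a strictly periodic, clockwork schedule to the observed transit times, and whatever is left over is the variation. The sketch below is mine, not anything from Holman’s paper, and the transit times in it are invented for illustration; the days-long numbers loosely echo Kepler-9’s nineteen-day planet.)

```python
# A minimal sketch (not the Kepler team's pipeline) of measuring
# transit-timing variations: fit a linear ephemeris t_n = t0 + n*P
# to the observed mid-transit times, then look at the residuals.
def ttv_residuals(times):
    """Least-squares fit of t_n = t0 + n*P; returns the residuals (TTVs)."""
    n = list(range(len(times)))
    count = len(times)
    mean_n = sum(n) / count
    mean_t = sum(times) / count
    # Slope of the fit is the best-fit orbital period P.
    period = sum((ni - mean_n) * (ti - mean_t) for ni, ti in zip(n, times)) \
        / sum((ni - mean_n) ** 2 for ni in n)
    t0 = mean_t - period * mean_n
    return [ti - (t0 + period * ni) for ni, ti in zip(n, times)]

# A lone planet transits like clockwork: residuals are essentially zero.
print(ttv_residuals([0.0, 19.0, 38.0, 57.0]))
# A planet perturbed by an unseen sibling arrives early and late
# by turns, and the residuals show it (times here are made up).
print(ttv_residuals([0.0, 19.2, 38.0, 56.8]))
```

A real analysis then asks what planetary masses and orbits, run through the laws of gravity, would reproduce those residuals; that inversion is the hard part, and it is what let Holman’s team weigh the Kepler-9 planets.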
But the Kepler insiders knew something they hadn’t yet revealed to the world: There were far more systems with multiple transiting planets than anyone had suspected. “It was a big surprise,” Dimitar Sasselov told me confidentially a month or so after Holman’s paper came out. Sasselov, who got his Ph.D. in Communist Bulgaria before emigrating, first to Toronto and then along that well-worn path from Toronto to Cambridge, Massachusetts, was a full-fledged Kepler co-investigator, along with Geoff Marcy and Dave Latham and—until she was promoted to deputy principal investigator—Natalie Batalha. “We didn’t expect it,” he said. “We kind of had the feeling, ‘Who ordered this?’ You know? We expected maybe 5 systems with multiple transiting planets. We’ve found 170.” He noticed a look of astonishment on my face. “Yeah, 170 multiple-transiting systems, many of them with double transits, but there are plenty of triple transits. One has six. One has five, and there are a few with four.”
This was really more than a nonphysicist’s mind could grasp all at once, but the second author on Matt Holman’s Science paper, a Santa Cruz postdoc named Daniel Fabrycky, came up with a rather brilliant way to illustrate it. It happened by accident, really. When the six-planet system (it was dubbed Kepler-11) was finally announced early in 2011, Fabrycky, who would co-author that system’s discovery paper as well, wanted to visualize what was otherwise just a bunch of numbers only a scientist could love.
“Someone had figured out,” he explained, “that three of the planets would sometimes cross in front of the star at the same time, and I wanted to figure out where the other three were when they did.” So he wrote a computer routine that would show where all the planets were at a given time. To advance the simulation one notch forward in time, you’d press the computer’s F key. Knowing where all the planets were allowed NASA to create a vivid and accurate artist’s rendering of what the system might look like if we could travel there and see it up close. The editors of Nature would end up putting the illustration based on Fabrycky’s simulation on the magazine’s cover.
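(The core of such a routine is simpler than it sounds, at least for idealized circular orbits. The toy sketch below is my own illustration, not Fabrycky’s actual code: each press of a key advances a clock, and each planet’s position follows directly from its orbital period. The two periods used here are the approximate Kepler-9 values mentioned above; the five-day step size is arbitrary.)

```python
import math

# A toy sketch (my own, not Fabrycky's routine) of stepping a
# multi-planet system forward in time. Circular orbits assumed:
# a planet with period P completes a full 2*pi circle every P days.
def positions(periods_days, t_days):
    """Orbital phase angle (radians) of each planet at time t."""
    return [2 * math.pi * (t_days % P) / P for P in periods_days]

# Advance the "simulation" one notch per step, as with each press
# of the F key. Periods roughly match Kepler-9's two planets.
kepler9_like = [19.0, 39.0]
for step in range(4):
    angles = positions(kepler9_like, step * 5.0)  # 5-day notches
    print(step, [round(a, 2) for a in angles])
```

Loop that over 170 systems at once, drawn as seen from above, and you have the essence of the Kepler Orrery described below.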
But Fabrycky’s son and daughter, ages six and four at the time, respectively, figured out something else. If you kept hitting the F key over and over, you’d turn the simulation into a crude movie of the planets going around and around and around. It was pretty entertaining. Fabrycky, who has since joined the faculty at the University of Chicago, also realized that his program would work equally well for all 170 or so of the multiple-planet systems as it did for Kepler-11. So he put together an animation showing all of these systems, as though seen from above, on a single screen, and set them in motion. Fabrycky titled his creation the Kepler Orrery, after the gorgeous mechanical solar systems built in the 1700s to illustrate the motions of the planets.
Then he added one perfect final touch. Since the animation was virtually buzzing with a swarm of planets, he created a sound track with Rimsky-Korsakov’s “Flight of the Bumblebee,” played on a xylophone. When he (or anyone else) flashes it on the screen during a talk, there’s a moment of stunned silence while the audience takes it in. Then the room invariably erupts into delighted laughter.
Chapter 13
BEYOND KEPLER
When Geoff Marcy and Michel Mayor began finding planets in the mid-1990s, their discoveries triggered a wave of excitement in the astronomical community that was unlike anything the field had seen, probably ever. It was also immediately clear to everyone that the ultimate goal would be to detect life beyond Earth. But it was also clear that existing telescopes weren’t nearly powerful enough to do that. As Bill Borucki had discovered back when he first started thinking about Kepler, NASA had been playing with ideas for finding planets, and even finding life, for decades. The agency had produced a number of reports and white papers on the topic, but hadn’t done much more than that.
All that planning, however, wasn’t entirely in vain. At the same meeting where Geoff Marcy announced his first two planets back in 1996, Daniel Goldin, then the NASA administrator, was able, thanks to those years of study, to step up to a microphone the day after Marcy’s talk and lay out a fully formed, step-by-step strategy for identifying and then studying a Mirror Earth. The key technology would be something called interferometry, a technique for combining the light from two widely spaced telescopes to simulate a single, gigantic scope with extremely high resolution—the ability to take super-sharp images.
The first step, called the Space Interferometry Mission, or SIM, would use that sharp resolution to do astrometry—to measure the side-to-side wobbles a planet imposes on a star rather than the forward-and-back wobbles used in radial-velocity searches. Astrometry was so hard to do that both Bill Borucki and Geoff Marcy rejected it when conceiving their own projects. Nevertheless, said Goldin, SIM would do astrometry with such precision that it would be able to detect the wobbles induced by Mirror Earths as they orbited around Sun-like stars. (Goldin was also the NASA chief who declared when he took office that henceforth, the agency would do everything “better, faster, and cheaper.” Scientists and engineers generally agreed that they could do any two of the three at one time, but not all three.)
The next grand step would be the Next Generation Space Telescope, a successor to the Hubble. The NGST, which has since been renamed the James Webb Space Telescope, should be in orbit by 2007, said Goldin (the current best estimate is 2018). While the NGST wouldn’t be designed just for planet-hunting, it might be able to take images of giant planets, as long as they were far enough away from their stars that they wouldn’t be lost in the glare. Then, by 2020 or thereabouts, he said, the crown jewel of NASA’s planet-searching program should be ready for launch. Called the Terrestrial Planet Finder, or TPF, it would be an interferometer like SIM, but with four huge space telescopes instead of two small ones. These four telescopes would have to be so widely separated that they couldn’t sit on a single structure. They would have to fly in formation, out in the general neighborhood of Jupiter, maintaining their separation to within a fraction of an inch as they sailed through interplanetary space.
If it all worked out, the Terrestrial Planet Finder would be able to do something remarkable. By adjusting the spacing of the telescopes just slightly, interferometry would cause the light to cancel out in parts of the image. In principle, you could blank out the star, making it much easier to see an Earth-size planet, and even to probe its atmosphere for the chemical signature of life. This would be incredibly difficult, technically, but hadn’t NASA landed men on the Moon, and set cameras down on Mars, and detected the faint afterglow of the Big Bang with the COBE satellite?
Despite the fanfare and the promise, however, this grand scheme didn’t play out quite as Goldin had portrayed it. “In 1999,” Marcy recalled, speaking more than a decade later, “the Space Interferometry Mission was approved, the budget was roughly $50, $60, $70 million per year. We met three, four times a year, the science team did—an enormous effort.” Eric Ford’s graduate thesis at Princeton was in support of the SIM program. NASA ultimately spent $600 million on the project without even starting to build the hardware, and then canceled it for budgetary reasons. The more ambitious Terrestrial Planet Finder hasn’t been canceled, but it’s been put on hold. If TPF launches by 2030, astronomers will be very surprised.
The problem with TPF isn’t just that it’s expensive and technically difficult, but also that astronomers can’t even agree on what the instrument should look like. The original concept involved those four space telescopes flying in formation, but in the early 2000s, designers came up with two simpler (though less powerful) alternatives. The first was to build a large, single space telescope and fit it with a coronagraph, a device that blots out the light of the central star to let a planet shine through. The second was like the first, but would involve the launch of two separate pieces of hardware: the telescope itself and a device called an occulter. The occulter would fly thousands of miles away from the telescope and position itself in just the right way to blot out a star. Each version had its proponents who argued that theirs was the right one and everyone else’s was wrong. In the end, NASA threw up its hands, put TPF on the back burner, and cut the project’s budget drastically.
Even in what amounted to suspended animation, however, the few scientists and engineers still working on TPF have continued to make progress. I was in Berkeley one evening for an observing run in the basement of Campbell Hall, the physics building, where Geoff Marcy was using the Keck II telescope to do radial-velocity follow-ups of Kepler candidates. When he and Paul Butler had first begun using the Keck, in 1996, they had to fly to Hawaii, make their way up to the telescope, in the cold, thin air at the summit of Mauna Kea, and try to stay awake as they fought jet lag and oxygen deprivation. Nowadays, with a super-broadband Internet connection, all of the adventure and romance is gone. I haven’t yet heard anyone complain.
The observing run wouldn’t start until close to midnight, so I was killing time in the hotel lobby when in walked David Spergel, the head of Princeton’s astrophysics department and one of the original members of Princeton’s TPF collaboration. He’d come out from Princeton for a dinner celebrating the career of his graduate school mentor, Leo Blitz. The dinner was over and he was headed for bed, but he had a few minutes to talk about his own work on TPF. “It’s not completely dead,” he said. “There’s something like one hundred or two hundred million dollars in technology development available.”
In fact, the Princeton TPF project, whose genesis had come during the conversations that engaged Sara Seager so deeply in the early 2000s, had turned into two parallel projects, one focused on the coronagraph idea and the other on the occulter, also known as a starshade. The first was located in the university’s school of engineering, where the Princeton TPF group’s principal investigator, Jeremy Kasdin, explained both projects during a visit by Michel Mayor in the fall of 2011. The coronagraph arm of the research, said Kasdin, was still using the “pupil” idea Spergel had come up with earlier in the decade, which didn’t blot out starlight so much as shunt it off to the side of the image, letting a planet become visible.
The pupil had evolved from a simple cat’s-eye shape to a mask with a set of elaborate, curved openings, which turn out to be even more efficient at shunting light away from parts of the image to let planets shine through. “How do you come up with these shapes?” asked Mayor. “Is it just intuition?” Not exactly, said Kasdin. “Usually, I’ll get out of the shower with a great idea, and I’ll call Bob.” Bob is Robert Vanderbei, chair of Princeton’s Operations Research and Financial Engineering Department, and an expert in “optimization.” That’s a field of mathematics and computer science that lets you adjust anything from an investment portfolio to an airplane wing for maximum performance. Vanderbei is also an extraordinarily talented amateur astronomer and astrophotographer, so it’s not surprising that he found his way into Kasdin’s group. When Kasdin calls Bob, he’s looking for help in refining a new idea for a pupil shape into something that reduces light in an optimal way.
The lab where Kasdin and his group test out new pupil shapes is located inside the university’s engineering building, while the lab where they test the second version of TPF, which will use a free-flying occulter, is about three miles away, on the university’s satellite Forrestal Campus. Not far from the occulter lab is the Princeton Plasma Physics Laboratory, where scientists and engineers have been doing experiments in controlled nuclear fusion since the late 1950s. That bears mentioning because the man who founded the fusion-energy lab, Lyman Spitzer, was also the father of the Hubble Space Telescope, which he first thought of in the late 1940s. (Spitzer didn’t get his name on the Hubble, but he’s memorialized by the infrared Spitzer Space Telescope, which Dave Charbonneau now uses to look for planetary atmospheres.) Spitzer also worked on planetary-formation theory early in his career—and to top it off, he wrote an early proposal for free-flying occulters back in 1962.
The occulter Princeton is working on, in partnership with NASA’s Jet Propulsion Lab (JPL), Kasdin explained, would be 40 meters, or about 130 feet, across; it would fly about forty thousand miles away from the telescope itself, and it would have to maintain its position to within about two feet. The occulter wouldn’t be just a round disk, but would rather (because of diffraction, and thanks to optimization analysis) look something like a flower, with twenty stubby petals coming out of a wide central hub. The edges of the petals have to be as sharp as razor blades, said Kasdin; if they were any thicker, they might reflect too much stray light from the Sun into the telescope, tens of thousands of miles away. When I ran into David Spergel in Berkeley, he’d come not directly from Princeton, but from JPL, in Pasadena, where engineers had built a full-size mockup of one of the petals. He’d witnessed a demonstration of how the petal would unfurl in space. It would have to unfurl, since there’s no rocket big enough to contain a 130-foot-wide flower unless the blossom is folded up on itself.
Spergel agreed that TPF-I, the original four-telescope interferometer, was probably too complicated and difficult to get off the ground. It was basically four James Webb Space Telescopes, and even one is turning out to be very hard and very expensive to build. “The occulter is a lot easier,” he said. “You’re flying two objects, and you have to keep them aligned, and deploy the occulter properly, so it’s still a lot of work. But the fact that I saw them deploy a mockup at JPL makes me optimistic.”
I heard the same from Jim Kasting when we met a couple of months later at an American Astronomical Society meeting. Kasting has been an exoplaneteer for nearly as long as Geoff Marcy, Bill Borucki, Michel Mayor, and Dave Latham, which is to say he’s been at it since the late 1980s. He doesn’t search for planets, and never has. Instead, he’s perhaps the world’s leading authority on habitable zones—the orbital bands around stars where water can be liquid and life therefore stands a chance of gaining a foothold. Like Marcy and the rest of the exoplanetology community, Kasting was under the impression that NASA was actually talking about funding not one, but two versions of TPF a few years ago. “There was a brief period of extreme optimism,” he said, “where we thought we were going to get two big flagship-type missions. Somebody was crazy, basically.” (Flagship is the term reserved for complex, expensive missions that cost a billion dollars or more.)
Then both of them went away. “TPF has been off the drawing boards, basically nowhere, for the last five years, and then it resurfaced as a technology-development project,” he told me one evening over dinner at an inexpensive, minimally decorated, and excellent Vietnamese restaurant a couple of blocks from the Seattle waterfront. “They actually put technology development for TPF at the top of their medium-priority list. We were all disappointed that there wasn’t a dedicated exoplanet mission. That hurt a lot of people who had been working on it for fifteen years or more. But we all see the bright side of it, where if we play our cards right, some form of TPF will be the next big flagship mission.”
The flagship space mission the Decadal Survey, a once-every-ten-year report from the astronomical community to NASA, did recommend was something called the Wide-Field Infrared Survey Telescope, or WFIRST. It would mostly do cosmology, including research on the mysterious dark energy that appears to be making the universe expand faster and faster as time goes on. “It has a small exoplanet component to do gravitational microlensing, but aside from that, most of us are not that thrilled about it,” Kasting said. “But,” he continued, “they’re going to be lucky to ever fly that thing.”