that are relatively mainstream—some tweak of the vacuum
energy, a scalar field left over from the start of the Universe—to
the speculative. The latter possibilities are maybe the most exciting, often involving a direct challenge to the foundations of
Einstein’s general relativity and its ideas about how gravity
works. (Not a game that traditionally ends well for the challenger,
but I suppose there’s always some hope that this time we’ll catch
Albert out.)
To concentrate, as a thousand articles and not a few books
have done, on the intellectual games of theorists, no matter how
diverting or elegant they are, is to miss the point.
One of these ideas will win out, but not because of a sudden
theoretical breakthrough. There will be no dropped chalk at the
end of a lecture to a stunned audience, and no one will be leaping
out of a bath shouting ‘Eureka!’ What’s needed, desperately, is
more data. If we knew, for example, that the strength of dark
energy was changing over time, that would rule out many of the
available theories and give researchers something to aim for.
What’s more, the observational route to this is clear. What we need
are more supernovae. The discovery of more distant examples
would mean that we could compare the past effect of dark energy
with present-day values, and the discovery of more nearby explosions would let us take better account of systematic effects.
This search has recently become even more important. Just as
it looked like the results from many different cosmological
probes were converging on a single solution to the parameters
that control the Universe’s evolution, an intriguing set of results suggests we might not be done yet. Measurements of Hubble’s constant, the rate of expansion of the present-day Universe, made using supernovae are giving a higher answer than those derived from methods which depend on the cosmic microwave background, and which suggest the Universe is expanding more slowly. In a well-behaved Universe, the two should agree with each other, so this is puzzling.
It could be that there’s nothing to worry about. The difference
is small enough that the ‘tension’, as it’s coyly termed, could just
be due to chance, similar to flipping a coin and getting three
heads in a row. In such circumstances, it’s probably premature to
conclude that the coin is biased towards heads. In the case of
these separate measurements, there seems to be a little more
than a 1 in 100,000 chance of the difference being a coincidence;
not enough for scientists to be sure that it’s real, but certainly
enough to worry about. Whole conferences have now been
dedicated to the problem, and both groups—those that study
supernovae and those who stare at the cosmic microwave background—are adamant that there’s no simple explanation. Either
we don’t understand the early Universe properly, or something is
seriously wrong with our cosmological models, or supernovae
are odder than we think.
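For those who like the analogy made quantitative, here is a minimal sketch of the arithmetic, assuming the one-tailed Gaussian convention usually used when quoting significance in ‘sigma’:

```python
# Putting numbers on the coin analogy and the 1-in-100,000 figure,
# using the Gaussian 'sigma' convention for quoting significance.
from scipy.stats import norm

# Three heads in a row from a fair coin: (1/2)**3, or 1 in 8.
p_coins = 0.5 ** 3
print(f"Three heads in a row: 1 in {1 / p_coins:.0f}")

# The quoted 1-in-100,000 chance of the tension being a fluke,
# converted to standard deviations (one-tailed convention).
p_tension = 1e-5
sigma = norm.isf(p_tension)  # inverse survival function
print(f"1 in 100,000 is about {sigma:.1f} sigma")
# Around 4.3 sigma: short of the 5-sigma bar particle physicists
# demand for a discovery, but far too unlikely to shrug off.
```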
All of those possibilities are exciting, and in each case we need more data. Both understanding this intriguing result and getting a critical clue to the nature of dark energy—the key problem in twenty-first-century physics—depend on our ability to find changes in the sky. We’ve already seen that finding planets—and maybe (though probably not) aliens—is essentially a problem of watching things change. So is keeping the Earth safe from killer asteroids, or detecting the relics of the earliest days of the
Solar System that lurk beyond Pluto. Our understanding of stars
(and whether they can support planets with intelligent life)
depends on understanding their variability; the Sun has sunspots, and some other stars at least have starspots. And it’s not
too much to hope that one day soon we might watch the centres
of nearby galaxies flicker as material falls into their central black holes.
Given this long to-do list, it’s not surprising that telescopes all
over the world are being converted to look for new transients.
There is hardly a modern survey that doesn’t have looking for
changes in the sky as at least one of its goals, but to do the job
properly dedicated facilities are needed. Optical systems need to
be stable to allow images taken days, months, or even years apart
to be compared, and instruments on the look-out for changes
need to have as wide a field of view as possible. In the most
extreme cases, cameras and computers might need to work fast
to trigger alerts so that other telescopes can follow up on
discoveries.
To get an idea of what a modern transient-hunting machine
might look like, you could travel to a hitherto obscure peak in
the Atacama Desert in northern Chile. The desert has long been
recognized as one of the best places on Earth for observatories,
high above the often cloudy coast but lower than the snowy
peaks of the Andes. Look at a satellite photo of the area, and the most obvious feature is a clear strip between two belts of cloud; it’s here that many of the world’s largest telescopes are placed.
On a mountain called Cerro Pachón, construction of a new eye
on the sky is underway.
This is the Large Synoptic Survey Telescope, or LSST, whose
mirror I encountered earlier in the University of Arizona’s surreal football stadium-based mirror lab. Staring at a distorted version of myself in the newly shiny surface, it was hard to imagine
that it would ever get anywhere near being ready to ship data to
the world’s astronomers, but now first light—the moment when
the first pictures of the sky are taken—is just around the corner.
The initial images with a temporary, commissioning camera are
due in 2021, and the survey proper will start, all being well, in
2023.
It’s hard for me not to be slightly scared by the prospect. The
roots of the LSST project go back almost two decades, when the
first plans for such a telescope were hatched. Even then, it was
clear that for all the clever optics, the biggest challenge would be
dealing with the data such a survey would produce. After full
operations start, LSST should produce about thirty terabytes of
images a night, more each night than the Hubble Space Telescope
produced in its first fifteen years. That’s just the static images, though; it’s the numbers of expected transients which are truly frightening.
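To get a feel for the scale, here is a back-of-the-envelope calculation; the number of usable nights per year and the survey length are my own assumptions rather than official figures.

```python
# Rough arithmetic for the data rate quoted above. The thirty
# terabytes a night comes from the text; the number of clear,
# usable nights per year is an assumption for illustration.
tb_per_night = 30
nights_per_year = 300   # assumed; weather and maintenance cost the rest
years = 10              # assumed survey length

total_pb = tb_per_night * nights_per_year * years / 1000
print(f"Roughly {total_pb:.0f} petabytes over a decade-long survey")
```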
Nothing like LSST has ever been built before, so predictions are uncertain, but subscribing to a service that sent a text message every time LSST detected a change would leave you waking up to at the very least a million text messages every clear night. Most would be routine changes—viewed with a telescope as large as LSST, a very large number of stars will vary in brightness—but hidden in the stream will be everything you can imagine. If type Ia supernovae are your thing, there will be plenty hidden in the data, if you can only find them.
One solution is to depend on machine learning. Scientists all
over the world are preparing ‘brokers’, little software helpers
which will listen to the great stream of data flowing from the
observatory and shout loudly when they spot something interesting. For the most useful transients, I suspect we’ll see competing brokers from different teams, announcing the highlights from LSST’s stream either loudly to the world
or more quietly to their creators. Like an electronic version of an
old trading hall, the advantage will accrue to those who can best
filter information or make sense of the cacophony.
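What might such a broker actually do? Here is a minimal sketch, with invented alert fields (score, magnitude, and so on are placeholders, not the schema of any real survey):

```python
# A toy broker: read a stream of transient alerts and pass on only
# those worth shouting about. The alert fields used here (score,
# magnitude, ra, dec) are invented for illustration.

def interesting(alert):
    """Keep alerts that look real and bright enough to follow up."""
    return alert["score"] > 0.9 and alert["magnitude"] < 19

def broker(alert_stream):
    """Filter the nightly firehose down to a trickle of highlights."""
    for alert in alert_stream:
        if interesting(alert):
            yield alert

# A tiny simulated night's worth of alerts.
night = [
    {"ra": 148.9, "dec": 69.7, "magnitude": 11.7, "score": 0.95},
    {"ra": 10.2, "dec": -3.4, "magnitude": 21.5, "score": 0.99},
    {"ra": 201.3, "dec": 47.2, "magnitude": 18.2, "score": 0.40},
]
for alert in broker(night):
    print("Follow up:", alert)
```

A real broker would subscribe to a network stream and apply far subtler science filters, but the shape of the job, filtering and forwarding, is the same.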
Not all those acting as brokers will be machines. Where there
is a wealth of data to be organized and sorted, the experiences
recounted earlier in this book have taught me that there might be
a place for citizen science; the Zooniverse hosted its first supernova-hunting project back in 2010. The data rate, coming from a reconditioned telescope on Palomar Mountain in California pressed into service as the Palomar Transient Factory, was a little more tractable, but the principle was the same. The telescope scanned the sky, and a computer checked each night’s images against a set of standard images, flagging anything that had changed. The few thousand candidates produced each night were uploaded to our website, where a dedicated band of a few thousand volunteers sorted through them each day.
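That checking step is, at heart, difference imaging. A stripped-down sketch of the idea follows; a real pipeline would also align the images and match their blurring before subtracting.

```python
# A stripped-down sketch of difference imaging: subtract a reference
# image from tonight's image and flag pixels that brightened sharply.
# Real pipelines also align the frames and match their point-spread
# functions first; this toy version skips both steps.
import numpy as np

rng = np.random.default_rng(42)

# An old 'standard' image of a patch of sky, and tonight's image of
# the same patch with fresh noise and one new point source in it.
reference = rng.normal(100.0, 5.0, size=(64, 64))
tonight = reference + rng.normal(0.0, 5.0, size=(64, 64))
tonight[30, 40] += 500.0  # the candidate supernova

difference = tonight - reference

# Estimate the noise robustly (so the bright spike doesn't skew it)
# and flag anything well above it.
noise = 1.4826 * np.median(np.abs(difference - np.median(difference)))
candidates = np.argwhere(difference > 5 * noise)
print("Candidate transients at pixels:", candidates)  # expect [[30 40]]
```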
They were fast, collectively analysing a night’s worth of data in
just fifteen minutes, and they were accurate. We were able to
broadcast their classifications to observers stationed around the
world, and add newly confirmed supernovae to the cosmological
harvest. As the survey progressed, though, these classified supernovae also provided new training data for use by would-be supernova-hunting robots. Eventually, an extremely bright student in Berkeley, California, produced a trained machine-learning solution that performed accurately enough to satisfy the astronomers running the survey, and as they preferred clinical algorithmic precision to messy and confounding citizen science, our project was no more.
I’ll return to this project later, as I think the experience of the
volunteers who took part has much to tell us about the future of
citizen science in general. For now, though, let’s continue to
think like transient-hunting scientists, and worry about getting
hold of as much data as possible. Proof that relying solely on
their machine was a mistake arrived at Earth on 21 January 2014,
and was first announced by an unusual team from a truly unlikely
place.
London is a terrible spot to put an observatory. If you had to
pick a site among the glitz and glare of the brightly lit metropolis, the very worst place would be in the centre of the West End. The
second worst, though, would be along one of the capital’s main
roads—alongside the A1 as it cuts through built-up North London,
for example. Yet if you drive north on the A1 and look left just at
the right time somewhere in Edgware, you’ll spot the gleaming
domes of the University of London’s Mill Hill observatory.
It’s a long way from a pristine Chilean mountain top, but that’s
OK. The observatory exists primarily as a teaching tool, giving
students on astrophysics courses at University College London
experience in carrying out astronomical observation and data
reduction. While the largest telescope is still the beautiful
Radcliffe refractor, now more than a century old, it’s the modern
telescopes clustered around it that get the most use.
Back on that fateful January night, Steve Fossey—doyen of the
observatory’s teaching labs since well before I was a PhD student
at University College London—was scheduled to give a practical
introduction to the telescopes to a bunch of undergraduates.
Light pollution isn’t the only problem with the site, though, and
clouds closed in overhead as the session was getting going. As the
students took a break with pizza, Steve slewed one of the smaller
telescopes over to one of the last clear patches, a region in Ursa
Major that contains the nearby galaxy M82.
M82 is known as the Cigar Galaxy—it is a spiral viewed almost edge-on, presenting itself as a thin needle of light on the sky. As
that night’s image appeared on the screen, Steve noticed a new,
bright star located at one end of the disc, something that definitely
wasn’t in archived images of the same galaxy. Four students who had been eating takeaway pizza—Ben Cooke, Guy Pollack, Tom Wright, and Matt Wilde—were suddenly following up on what proved to be the supernova discovery of the twenty-first century
so far. The clouds were closing in, and the students and Steve
rushed to get confirmation images using filters that exposed the
camera to different colours. Clinching evidence came when they
used a second telescope on the site to take an image of the same
galaxy, and saw that the supernova was still there. It wasn’t an
instrumental error, or something weird happening in the camera; it was real (Plate 9).
From there things moved fast. The standard procedure is to
report such a discovery to the wonderfully named Central Bureau for Astronomical Telegrams in the US, which announced the discovery to the world. Within hours, telescopes around the world had observed what the University College London team had found, confirming that supernova 2014J (as it was now known) was not only real but a type Ia. The
opportunity to study this most important type of explosion up
close—or at least at a distance of only eleven and a half million
light years—was unprecedented, and it became one of the most
observed objects of the twenty-first century.
The discovery of such an object by pizza-munching students
is a great story, and it was wonderful to see Steve’s sharp eyes get
some recognition, but the truth is that they should never have
had a chance. Automated surveys had imaged the supernova before it was observed in London, but the routines used to scan for interesting transients didn’t flag it and so didn’t sound the
alarm. This seems odd. The supernova is incredibly obvious in
images—it’s the bright star that wasn’t there in 2013—but this is
only true for human observers. My guess is that the training sets
used to send machines hunting for transients didn’t include
anything this bright, and so the computers had ‘learned’ that
anything that obvious couldn’t possibly be real. And so the
supernova remained unfound.
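This suspected failure mode, a training set that never showed the machine anything so bright, is easy to reproduce in miniature. A toy demonstration, with every number invented for illustration:

```python
# A toy reproduction of the suspected failure: train a real/bogus
# classifier only on faint genuine transients, and it 'learns' that
# very bright detections can't be real. All numbers are invented.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Training data: genuine transients were all faint (magnitude 18-21,
# where bigger numbers mean fainter), while artefacts turned up at
# every brightness.
real_mags = rng.uniform(18, 21, 500)
bogus_mags = rng.uniform(10, 21, 500)
X = np.concatenate([real_mags, bogus_mags]).reshape(-1, 1)
y = np.concatenate([np.ones(500), np.zeros(500)])  # 1 = real, 0 = bogus

clf = RandomForestClassifier(random_state=0).fit(X, y)

# Something as bright as the M82 supernova sits far outside the
# 'real' examples the machine has ever seen, so it gets rejected,
# while a routine faint transient is accepted.
print(clf.predict([[10.5]]))  # -> [0.]: flagged as bogus, alarm silent
print(clf.predict([[19.5]]))  # -> [1.]: a typical faint transient
```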
There are, to my surprise, at least ten images of the M82 super-
nova from before the discovery, including some from amateur
astrophotographers who either didn’t process their data straight
away or who didn’t know the galaxy well enough to recognize
that the star was new. Mostly it was the former; even amateur
astronomers with (advanced) backyard telescopes now leave
looking at the images to the daytime rather than viewing them as
they come in. Such images are, after the fact, still incredibly useful; it turned out that the rise to peak brightness was more rapid
for this event than normal, indicating some unexpected process
at play, a result that is still causing debate among experts.
One of the surveys that imaged M82 during the period when
the supernova was visible but not yet known was the Palomar
Transient Factory which fed data to the Zooniverse’s supernova
project. Had our supernova project still been operating I’m sure
we would have caught it, and quickly. There is, of course, no real
impediment to adapting the machine-learning routines used to
include objects like this one; if I’m right that it was the bright-
ness that made it difficult, one could simply train on as many
bright supernovae as necessary. (If there are enough examples
in both the Universe and our survey of it, that is. I’ll talk about
truly rare objects a little later.) The point, though, isn’t that one couldn’t possibly have designed a system which wouldn’t have
failed in this way, it’s that no one did. When dealing with a com-
plex problem, and real, messy, noisy data, anticipating every
possible eventuality is difficult. Ensuring that every case is
covered, every loophole closed, and every unusual object antici-
pated is impossible. Preparing a training set that reflects reality
is next to impossible.
We could continue to bet on improvement in machine learning. We might have missed this supernova, for example, but there will be others, and certainly there’s plenty of research funding going into making such systems work better. I prefer, though, to acknowledge the limits of any system we build, and to combine the best of automated scanning, which brings speed and consistency, with the quirky responses of adaptable citizen scientists, capable of going beyond their training.