To understand how this might work in practice, we’ve recently
revived supernova hunting as a sport at the Zooniverse. This
time the data comes from Pan-STARRS, a camera and telescope
which sits on top of Mauna Kea and which was built to hunt for
asteroids. It does a pretty good job of looking for supernovae
along the way, and once a week we release a week’s worth of data
to a growing community hungry for discovery. The set-up is
even simpler than before: after reviewing a few example images
we simply ask volunteers whether a new discovery looks like a
supernova.
This time, though, there’s a machine running in parallel. It was
built by Darryl Wright. Darryl’s now part of the Zooniverse team
at the University of Minnesota, but when he was a PhD student
working with the Pan-STARRS team at Queen’s University in
Belfast he was asked to review candidates by eye himself. Instead,
he took an online course in machine learning and ended up
training a neural network to classify the things instead. With the
new project, we could compare Darryl’s machine’s performance
with that of the volunteers, and work out which was best.
Once we agreed what ‘best’ was, that is. As in the penguin-
counting example, it’s a nebulous concept, and how one might
use it probably depends on what kind of science you’re trying to
do. If you want to make a detailed study of only a few supernovae
with the largest of telescopes, then who cares if you miss most of
170 From Supernovae to Zorillas
them—all you should watch for is the accuracy of those that you
do capture. An inaccurate classification will cost you valuable
observing time and earn you the wrath of other astronomers
who want the telescope for themselves. On the other hand, if
you’re trying to understand the properties of a population of
objects, then you might not care if one or two false alarms sneak
through, and would accept lower accuracy in exchange for catch-
ing more of the supernovae in your net.
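This trade-off has a standard name in machine learning: precision (the purity of your sample) against recall (its completeness). A minimal sketch, with invented counts and assuming nothing about the actual Pan-STARRS pipeline, of how the two pull against each other:

```python
def purity_and_completeness(true_pos, false_pos, false_neg):
    """Purity (precision): what fraction of claimed supernovae are real.
    Completeness (recall): what fraction of real supernovae were caught."""
    purity = true_pos / (true_pos + false_pos)
    completeness = true_pos / (true_pos + false_neg)
    return purity, completeness

# A cautious hunter, suited to rationed telescope time:
# very few false alarms, but most supernovae are missed.
cautious = purity_and_completeness(true_pos=40, false_pos=2, false_neg=60)

# A greedy hunter, suited to population studies:
# most supernovae caught, at the cost of some false alarms.
greedy = purity_and_completeness(true_pos=90, false_pos=45, false_neg=10)

print(cautious, greedy)
```

The cautious setting maximizes purity for the observer fighting for telescope time; the greedy one maximizes completeness for the census-taker.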
This is a common trade-off in this sort of classification prob-
lem, but it turned out not to matter too much. We quickly found
that for almost any realistic case, combining human and machine
classifications outperformed any result provided by each alone.
Working alongside our robot friends makes us more productive,
but input from humans also helps them get better at classifying.
The really great thing about this result is that there’s nothing
especially clever about it. The citizen science project asks a simple question of a small group of volunteers, and we’re not doing any
sophisticated data analysis, just believing that the majority of
people who answer a question get it right. On the other hand,
because we have a crowd of enthusiastic volunteers at hand, Darryl
and his colleagues are freed from trying to do anything especially
novel with machine learning. Picking the right machine for the task
is important, and so is making sure you understand what it’s doing
and how it can best be trained, but that’s a long way from needing
to explore the bleeding edge of the deep-learning revolution.
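The ‘nothing clever’ aggregation described above—trust the majority—takes only a few lines to write down. The blending step below is purely illustrative: the real projects combine human and machine classifications with more statistical care than a weighted average.

```python
from collections import Counter

def majority_vote(answers):
    """Return the most common answer ('yes'/'no') among the volunteers."""
    return Counter(answers).most_common(1)[0][0]

def combined_score(volunteer_answers, machine_probability, weight=0.5):
    """Illustrative only: blend the volunteers' 'yes' fraction with a
    machine classifier's probability that the object is a supernova."""
    human_fraction = volunteer_answers.count("yes") / len(volunteer_answers)
    return weight * human_fraction + (1 - weight) * machine_probability

votes = ["yes", "yes", "no", "yes", "no"]
print(majority_vote(votes))        # the crowd's verdict
print(combined_score(votes, 0.8))  # crowd blended with the machine
```

Even this naive combination illustrates the point: each source of classifications can catch mistakes the other makes.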
This approach works well when we’re hunting for objects
which are relatively common. Supernova hunters should expect
to be successful, at least with modern data sets where the tele-
scope and camera are understood well enough to avoid too many
false positives sneaking through. But there are plenty of prob-
lems in astronomy where a successful end to even a dedicated
hunt will be a rarity.
Planet hunting is one example, though here too some judi-
cious filtering can help. But some objects just are intrinsically
rare, and will only rarely be stumbled across. Perhaps my
favourite of these rarities are gravitational lenses, the result of
Einstein’s theory of general relativity and a cosmic coincidence.
Gravity, Einstein’s theory tells us, is nothing more or less than
a geometrical effect. In other words, we feel gravity because of
the bending of space by mass. This in turn means that anything
passing through space near a massive object will find itself
deflected because instead of travelling through flat, empty space
it will find itself on a curved trajectory. This rule applies regard-
less of the mass of the moving object, and even to light. So a key
prediction of the theory is that light rays will be bent by passage
around a massive object, a fact famously used by Eddington to
carry out one of the first serious tests of relativity by recording
the positions of stars visible near the disc of the Sun during the
total solar eclipse of 1919.
(Two points of pedantry. First, it is possible with some assump-
tions to derive a light-bending effect from Newton’s theory of
gravity, and this was done long before Einstein came along. The
magnitude of the predicted distortion is different though, and
Einstein turns out to be right. Second, there’s some modern grip-
ing about whether Eddington’s results were actually accurate
enough, given challenging weather and difficult conditions on
his eclipse expedition, to provide a sensible test of relativity. Press coverage from the time, though, shows that whatever the reality
this experiment was perceived as important and as elevating
Einstein above Newton.)
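For the record, the standard textbook expressions (not quoted in the text) for the deflection angle of a light ray passing a mass $M$ at impact parameter $b$ are:

```latex
% Deflection of light by a point mass M at impact parameter b
\alpha_{\mathrm{GR}} = \frac{4GM}{c^{2}b} \quad \text{(general relativity)},
\qquad
\alpha_{\mathrm{Newt}} = \frac{2GM}{c^{2}b} \quad \text{(Newtonian calculation)}
```

Einstein’s prediction is exactly twice the Newtonian one; for light grazing the edge of the Sun it comes to about 1.75 arcseconds, the quantity Eddington’s expedition set out to measure.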
The idea that our images of distant sources might be distorted
by gravity was little more than a curiosity until large and deep
surveys of galaxies got going. In just a few places in the Universe,
the distribution of galaxies is such that a distant system will lie
almost precisely behind another, nearer galaxy or cluster of gal-
axies as seen from Earth. When that happens, the light from the
more distant system will be bent by passing the closer system.
The effects depend on the exact geometry. If the alignment is
exact, we end up with four identical images of the distant system,
one on each side of the nearest system. This is an Einstein cross,
and a handful of these remarkable systems are known.
More commonly, the alignment isn’t quite right. The more dis-
tant object might be slightly displaced from the line of sight, or
the internal structure of the nearer object will distort the light.
What you see then is a smeared-out image of the distant system,
often magnified by the lensing effect of the process. Gravitational
lenses like this act as nature’s telescopes, allowing us to see dis-
tant galaxies which would otherwise be invisible, though as their
optics are imperfect the resulting images are distorted.
Even better, their blurry images contain information. The
degree of bending of light depends on the amount of mass
present in the lens, and on its distribution, and so we get to ‘weigh’
the objects involved through careful modelling. Sometimes
amazing things happen—take the Einstein cross known as
MACS J1149.6+2223, which has four images of a galaxy whose light has taken over nine billion years to reach us, lensed by a system some four billion light years away. A single supernova has
been observed in this galaxy not once but four times, once in
each image. In other cases, there are time delays between the
appearance of such supernovae caused by the different lengths
of the paths that the light in each image takes to reach us.
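The ‘weighing’ works because the angular scale of the lensing is set by the lens mass and the geometry. The standard result for the simplest case, a point-mass lens (again a textbook relation, not from the text), is the Einstein radius:

```latex
% Einstein radius of a point-mass lens
\theta_{E} = \sqrt{\frac{4GM}{c^{2}} \, \frac{D_{LS}}{D_{L}\,D_{S}}}
```

Here $D_{L}$, $D_{S}$, and $D_{LS}$ are the distances from observer to lens, observer to source, and lens to source. Measure $\theta_{E}$ from the image geometry and, knowing the distances, you can solve for the mass $M$; real lens models are more elaborate, but this is the principle.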
I find these results astounding. The idea that we can see some-
thing that far away, apply knowledge of the Universe and its con-
stituents that is good enough to understand why we see this
apparent repetition, and then use that knowledge to understand
more, is the kind of thing that got me hooked on astrophysics,
and on observational science. Gravitational lenses are amazing,
and yet only around a thousand of them are known after years of
searching.
It’ll be no surprise by now that astronomers want to find more
of these things, and that LSST has searching for such lenses as a
core part of its programme. It’s probably not a surprise either
that there’s a citizen science project to help, especially as with
only a small number of examples available machine learning is
going to struggle to help. SpaceWarps, the Zooniverse pro-
gramme aimed at searching for gravitational lenses, has been
hugely successful.
My favourite of its discoveries was found nearly live on TV, as
part of a collaboration with Brian Cox, Dara Ó Briain, and the
team behind their fantastically successful Stargazing Live show, which once a year takes over prime-time BBC TV for three nights
of astronomical chatter. The topics chosen are usually pretty ran-
dom, but for the last six runs of the programme we’ve persuaded
them to ask their audience to help us with a citizen science project.
The pace of these projects is always exhausting. Television is a
strange world, and live television an even stranger one. The pro-
gramme was based for many years at Jodrell Bank, still home
more than sixty years after its foundation to the third-largest
steerable radio telescope in the world. A crew of more than fifty
people is needed to transform this working observatory into a
television studio, with lights and camera needing to be rigged in
the most unlikely places before any action can be broadcast to
the outside world. Add in the vagaries of the British weather and
the logistics become nightmarish.* None of it makes for an ideal
* The most recent BBC Stargazing went to Siding Spring Observatory in rural Australia in an effort to escape Manchester weather. It got hit by the tail end of a tropical cyclone.
opportunity to get science done, and the Zooniverse crew usu-
ally end up shoved into a corner, craving a decent internet con-
nection to the outside world.
Over the years, thanks to Stargazing Live, we’ve found planets,
studied Mars, and more, but with SpaceWarps we wanted to be
still more ambitious—promoting the project on the first night of
the show in the hope (and certainly in the expectation from the
BBC crew) that we’d find something worth announcing forty-
eight hours later. As we set up for that first broadcast, I lost count of the number of people who ‘just popped in’ to ask whether we
were really going to find something.
The chaos doesn’t die down immediately after the show. In
talking to Brian and Dara I announced the project, and managed
to report quickly on the flood of classifications heading our way.
In Oxford and in Chicago our team watched as their beautiful
infrastructure stumbled under the sheer weight of wannabe sci-
entists before recovering as somewhere in West Virginia servers
sent image after image off to eager classifiers. Meanwhile, those
of us at Jodrell Bank scrambled to clear the site and head back to
the team hotel, leaving the observatory alone.
As a result, it was in the incongruous setting of a conference
hotel bar that I found myself staring at a laptop screen bearing
what looked for all the world like a neat red lens, an arc of light
curving around a nearby galaxy. As producers, presenters, and
crew waited for the adrenaline from the night’s broadcast to
wear off, or huddled in corners to discuss scripts, my Zooniverse
colleague Rob Simpson and I stared at the screen. We had some-
thing, but we weren’t sure what (Plate 10).
It was the red colour that was confusing. Red, in this game,
means distant, a sign that the light that the telescope is receiving
has been substantially stretched by the expansion of the Universe
during its journey from source to us. If this lens was real, it was
clearly a distant one, a prize catch, but the colour that made it
interesting also meant that we were suspicious of our prize.
We slept on it, but the next morning there wasn’t too much
more to say. Sipping much-needed coffee, we started the search
for previous observations of the new object. It turned up initially
in a catalogue called FIRST, a map of the sky as seen by the Very
Large Array in New Mexico. Our lens—if it was real—was emit-
ting radio waves, and this was good news. First, it made the thing
more interesting; those radio waves must have a source, which
meant extreme star formation or an actively growing black hole
at its centre, both interesting things in a source as far away as we
thought this was. Second, it meant that we could easily design an
observation to measure the redshift and hence the distance of
the lensed galaxy.
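Redshift here is just the fractional stretching of the light’s wavelength by the expansion of the Universe; measuring it is what turns a spectrum into a distance. The standard definition:

```latex
% Redshift: observed versus emitted wavelength
1 + z = \frac{\lambda_{\mathrm{obs}}}{\lambda_{\mathrm{emit}}}
```

The larger the redshift $z$, the longer the light has travelled, with the exact conversion to distance depending on an assumed cosmological model.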
‘What we need’, I said to Rob, ‘is a radio telescope.’ He didn’t
reply, but turned slowly to look out of the window. Staring back
at us was the giant dish of the Lovell Telescope. Normally we’d
scramble to apply for time, but the Lovell was standing unused
thanks to the small matter of a live broadcast happening in front
of it. Negotiations followed; Tim O’Brien, the observatory’s dir-
ector, was keen to help, and we eventually persuaded the BBC
that they didn’t mind if we ruined their carefully planned shot by
pointing the telescope away from the studio and towards our tar-
get. A few hours later, Rob and I danced in the pouring rain as the
floodlit telescope turned slowly on its bearings (repurposed
from First World War battleships) to point at a source that had
been found less than twenty-four hours earlier.
As ever, observing is only the start of the work, and I will
always remain grateful to the Jodrell astronomers who stayed up
all night, working on the tricky problem of removing the distinct
signature of a live broadcast from the data they received from
their radio telescope. It turned out our lens was a broken ring,
viewed with light that had taken more than ten billion years to
reach us. The red radio ring ended up being the target of observations with telescopes from Hawaii to Mexico. It’s magnified ten times by the lens, and seems indeed to have an
actively growing black hole as well as being a dramatically power-
ful factory of stars. It is a glimpse of a time when the Universe
was at its most active, a time when most stars were being born.
It’s also, and to me just as importantly, another example of the
ability of citizen scientists to go beyond what they’ve been
taught; despite the fact that all the examples given were blue, the
volunteers were able to recognize this red streak as something
worth marking. As lenses are rare things, even in the era of large
surveys like LSST, we’re unlikely to assemble a large training set
with which to train a lens-hunting machine; there’s progress to
be made, perhaps, using training sets of artificial lenses, but for
the foreseeable future this will remain a fertile hunting ground
for citizen scientists.
What we can do is improve the odds of finding such things.
Because the appearance of a lens is shaped by basic laws of grav-
ity, predicting what a lens around a given nearby galaxy will look
like is a fairly simple matter. (Well, you’ll need a decent computer, but the principles are simple.) That meant that the SpaceWarps
team were able to create artificial galaxies to insert into their project. I was a bit worried about this, unsure about how our volun-
teers would react to being asked to classify ‘artificial’ data (we
were careful not to call them ‘fake’ lenses).
We needn’t have worried. The fact that for these galaxies we
knew what the ‘right’ answer was meant that we could give volun-
teers feedback, which they craved. While anyone taking part in
the project had overcome at least some of the barriers to think-
ing of themselves as scientists, the odd pop-up confirming that
they had the right idea turned out to be extremely welcome.
After all, even the most confident of us need reassurance that
we’re carrying out a task well every so often.
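The feedback mechanism itself is simple to sketch. Everything below—the names, the messages, the table of known answers—is invented for illustration, and is not the actual SpaceWarps code:

```python
# Hypothetical gold-standard table: image id -> whether it contains a simulated lens.
GOLD_STANDARD = {"sim_001": True, "sim_002": False}

def feedback(image_id, volunteer_said_lens):
    """Return a pop-up message for artificial images, or None for real
    ones, where nobody yet knows the right answer."""
    if image_id not in GOLD_STANDARD:
        return None  # a real image: no feedback is possible
    truth = GOLD_STANDARD[image_id]
    if volunteer_said_lens == truth:
        return "Nice work - that was exactly right."
    return "Not quite - take another look at the highlighted region."

print(feedback("sim_001", True))     # correct call on a simulated lens
print(feedback("unknown_42", True))  # real image: no pop-up
```

Because the project knows the truth only for the artificial images, feedback arrives just often enough to reassure without giving the game away on the real data.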
The real innovation, though, was that we could measure how
people were doing. The SpaceWarps team can measure the skill
of their volunteers, which they define as the average quantity of
information provided by a volunteer presented with a random
image from those available to be classified. I’m deviating slightly,
deliberately, from the language the project team themselves use
The Crowd and the Cosmos: Adventures in the Zooniverse