
  It’s not just social media content that is easier to predict after some time has passed. In 2018, Burcu Yucesoy and her colleagues at Northeastern University analysed the popularity of books on the New York Times bestseller list. Although it’s very hard to predict whether a given book will take off in the first place, books that do become popular tend to follow a consistent pattern afterwards. The team found that sales of most books on the bestseller list grew rapidly at first, peaking within about ten weeks of publication, then declined to a very low level. On average, only 5 per cent of sales occurred after the first year.[58]

  Despite progress in understanding online outbreaks, most analysis still relies on having good historical data. In general, it’s difficult to predict the duration of a new trend ahead of time, because we don’t know the underlying rules that govern transmission. However, occasionally an online cascade does follow known rules. And it was one such cascade that first sparked my interest in contagion on social media.

  Dressed in an ‘i love haters’ baseball cap, the woman plucked the goldfish out of its bag and dropped it into a cup full of alcohol. Then she downed the drink, fish and all. A trainee lawyer, she was travelling around Australia and had performed the stunt after being nominated by a friend. The whole thing had been filmed. Before long, the video was posted on her Facebook page, along with an accompanying nomination for someone else.[59]

  It was early 2014, and the woman was the latest participant in the online game of ‘neknomination’. The rules were simple: players filmed themselves downing a drink, posted it on social media, then nominated others to do the same within 24 hours. The game had swept through Australia, with drinks becoming more ambitious – and alcoholic – as the nominations spread. People downed booze while skateboarding, quad biking and skydiving. Drinks varied from neat spirits to cocktails that included blended insects and even battery acid.[60]

  Coverage of neknomination spread alongside the game itself. The goldfish video was widely shared, with newspapers picking up ever-more-extreme stories. When the game reached the UK, it triggered a media panic. Why was everyone doing this? How bad would it get? Should the game be banned?[61]

  When neknomination hit the UK, I agreed to examine the game for a BBC radio feature.[62] I’d noticed that during games like neknomination, participants transmitted the idea to a handful of specific people, who then passed it along to others. This created a clear chain of propagated transmission, much like a disease outbreak.

  If we want to predict the shape of an outbreak, there are two things we really need to know: how many additional infections each case generates on average (i.e. the reproduction number), and the lag between one round of infection and the next (i.e. the generation time). During new disease outbreaks, we rarely know these values, so we have to try and estimate them. For neknomination, though, the information was laid out as part of the game. Each person nominated 2–3 others, and these people had to do the challenge – and make their nominations – within 24 hours. When I forecast the neknomination game in 2014, I didn’t have to estimate anything; I could plug the numbers straight into a simple disease model.[63]
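  To make that concrete, here is a minimal sketch in Python of the kind of simple model I mean (not the exact code I used; the population size and number of starting players are invented for illustration, while the 2–3 nominations per person and the one-day deadline come from the game’s rules):

    import random

    def simulate_nomination_game(population=100_000, starting_players=10,
                                 nominations=(2, 3), days=30, seed=1):
        # Each participant nominates 2-3 others, who take part one day
        # (one 'generation') later, provided they have not already played.
        random.seed(seed)
        susceptible = population - starting_players
        new_cases, daily_cases = starting_players, [starting_players]
        for _ in range(days):
            attempts = sum(random.randint(*nominations) for _ in range(new_cases))
            # A nominee only takes part if still 'susceptible'; that chance
            # shrinks as the pool empties, which is what produces the peak.
            next_cases = min(susceptible,
                             sum(1 for _ in range(attempts)
                                 if random.random() < susceptible / population))
            susceptible -= next_cases
            daily_cases.append(next_cases)
            new_cases = next_cases
            if new_cases == 0:
                break
        return daily_cases

    print(simulate_nomination_game())  # grows fast, peaks within about two weeks, then fades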

  My outbreak simulations suggested that the neknomination trend wouldn’t last long. After a week or two, herd immunity would kick in, causing the outbreak to peak and begin to decline. If anything, these simple forecasts were likely to overestimate transmission. Friends tend to cluster together in real life; if multiple people nominate the same person during the game, it will reduce the reproduction number and lead to a smaller outbreak. Interest in neknomination indeed faded quickly. Despite the UK media frenzy in early February 2014, it was all but gone by the end of the month. Subsequent social media games followed a similar structure, from ‘no makeup selfie’ photos to the widely publicised ‘ice bucket challenge’. Based on the rules of the games, my model predicted all of them would peak within a few weeks, just as they did in reality.[64]

  Although nomination-based games have tended to fade away after a few weeks, social media outbreaks don’t always disappear after their initial peak in popularity. Looking at popular image-based memes on Facebook, Justin Cheng and his collaborators have found that almost 60 per cent recurred at some point. On average, there was just over a month between the first and second peaks in popularity. If there were only two peaks, the second cascade of sharing was generally briefer and smaller; if there were multiple peaks, they were often a similar size.[65]

  What makes a meme become popular again? The team found that a big initial peak in interest made it less likely that the meme would appear again. ‘It is not the most popular cascades that recur the most,’ they noted, ‘but those that are only moderately popular’. This is because a small first cascade leaves more people who haven’t seen the meme yet. With a large initial outbreak, there aren’t enough susceptible people left to sustain transmission. For a cascade to recur, it also helps if there are several copies of the meme circulating. This is consistent with what we’ve already seen for stuttering outbreaks: having multiple sparks can make infections spread further.
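  The logic can be written out with the standard epidemic threshold: a second cascade can only take off if the effective reproduction number, R0 multiplied by the share of the audience still susceptible, stays above one. A toy calculation (the audience size and figures below are invented for illustration):

    def effective_r(r0, audience, already_reached):
        # Effective reproduction number = R0 x (share of the audience still susceptible)
        return r0 * (audience - already_reached) / audience

    # Moderately popular first cascade: plenty of susceptible people remain.
    print(effective_r(r0=2.0, audience=1_000_000, already_reached=200_000))  # 1.6, can recur
    # Very popular first cascade: too few susceptible people for a second wave.
    print(effective_r(r0=2.0, audience=1_000_000, already_reached=800_000))  # 0.4, fizzles out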

  Cheng looked at popular images, but what about other types of content? Back in 2016, I gave a public talk at London’s Royal Institution. Over the next couple of years, a video of the talk somehow racked up over a million views on YouTube. Around the same time in 2016, I’d given a talk on a similar topic at Google, which had also been posted on YouTube, on a channel with a similar number of subscribers. During the same period, this one was viewed around 10,000 times. (Ideally, this popularity would have been the other way round: it turns out that if you give two related talks, but screw up a live demonstration in one of them, that’s the talk that will become popular online.)

  I hadn’t expected the Royal Institution talk to get so much attention, but what really came as a surprise was how the views had accumulated. For its first year online, the video had gained relatively little interest, getting a hundred or so views per day. Then suddenly, in the space of a few days, it picked up more attention than it had in an entire year.

  [Figure: number of YouTube views per day for my 2016 Royal Institution talk. Data: Royal Institution]

  Perhaps people had started sharing it online, making it go viral? Looking at the data, the real explanation was much simpler: the video had been featured on the YouTube homepage. As the views spiked, the YouTube algorithm added it to the ‘suggested video’ lists that appear alongside popular videos. Almost 90 per cent of people who viewed the talk found it on the homepage or one of these lists. It was a classic broadcast event, with one source generating almost all of the views. And once the video was popular, its popularity created a feedback effect, attracting even more interest. It shows how much the video benefitted from online amplification, first by the Royal Institution to get those initial few thousand views, then by the YouTube algorithm to deliver a much bigger audience.

  There are three main types of popularity on YouTube. The first is where videos get a consistent, low-level number of views. This number randomly fluctuates from day to day, without noticeably increasing or decreasing. Around 90 per cent of YouTube videos follow this pattern. The second type of popularity is when a video suddenly gets featured on the website, perhaps in response to a news event. In this situation, almost all of the activity comes after the initial peak. The third type of popularity occurs when a video is being shared elsewhere online, gradually accumulating views before peaking and declining again. It’s also possible to observe a mixture of these shapes; a shared video may get a boost by being featured then settle back down to a low level, like mine did.[66]

  Video is a particularly persistent form of media, with interest tending to last much longer than for news articles. A typical social media news cycle is around two days; in the first twenty-four hours, most content comes in the form of articles, with shares and comments following afterwards.[67] However, not all news is the same. Researchers at MIT have found that false news tends to spread further and faster than true news. Maybe this is because high-profile people with lots of followers are more likely to spread falsehoods? The researchers actually found the opposite: it was generally people with fewer followers who spread the false news. If we think of contagion in terms of the four DOTS, this suggests false information spreads because the transmission probability is high, rather than there being more opportunities for spread. The reason for the high transmission probability? Novelty might have something to do with it: people like to share information that’s new, and false news is generally more novel than true news.[68]

  It’s not just about novelty, though. To understand how things spread online, we also need to think about social reinforcement. And that means taking another look at the concept of complex contagion: sometimes we need to be exposed to an idea multiple times before we adopt it online. For example, there’s evidence that we’ll share memes online without much prompting, but won’t share political content until we see several other people doing so. When Facebook users changed their profile picture to a ‘=’ symbol in support of marriage equality in early 2013, on average they only did so once eight of their friends had. Complex contagion also influenced the initial adoption of many online platforms, including Facebook, Twitter and Skype.[69]

  A quirk of complex contagion is that it spreads best in tight-knit communities. If people share lots of friends, it creates the multiple exposures needed for an idea to catch on. However, such ideas may then struggle to break out and spread more widely.[70] According to Damon Centola, the structure of online networks can therefore act as a barrier to complex contagion.[71] Many of our contacts online will be acquaintances rather than part of a closely linked friendship group. Whereas we might adopt a political stance if lots of our friends do, we’re less likely to pick it up from a single source.
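  Centola’s argument is easy to reproduce in a toy threshold model. In the Python sketch below (the network sizes, degrees and two-contact threshold are arbitrary choices for illustration, not parameters from his studies), the same small seed group is planted in a clustered ring lattice and in a random network with the same average degree; because adoption requires two adopting contacts, the idea sweeps through the clustered network but usually stalls in the random one:

    import random

    def ring_lattice(n, k):
        # Clustered network: each node links to its k nearest neighbours on either side.
        return {i: {(i + d) % n for d in range(-k, k + 1) if d != 0} for i in range(n)}

    def random_network(n, k, seed=0):
        # Random links with roughly the same average degree, but little clustering.
        rng = random.Random(seed)
        nbrs = {i: set() for i in range(n)}
        while sum(len(s) for s in nbrs.values()) < 2 * n * k:
            a, b = rng.randrange(n), rng.randrange(n)
            if a != b:
                nbrs[a].add(b)
                nbrs[b].add(a)
        return nbrs

    def spread(neighbours, seeds, threshold=2):
        # Complex contagion: a node adopts once at least `threshold` contacts have.
        adopted, changed = set(seeds), True
        while changed:
            changed = False
            for node, nbrs in neighbours.items():
                if node not in adopted and len(nbrs & adopted) >= threshold:
                    adopted.add(node)
                    changed = True
        return len(adopted)

    n, k, seeds = 200, 3, {0, 1, 2, 3}
    print(spread(ring_lattice(n, k), seeds))    # reaches everyone (200)
    print(spread(random_network(n, k), seeds))  # usually stays close to the seed group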

  This means that complex contagion – such as nuanced political views – can have a major disadvantage on the internet. Rather than encouraging users to develop challenging, socially complex ideas, the structure of online social interactions instead favours simple, easy-to-digest content. So perhaps it’s not surprising that this is what people are choosing to produce.

  With the rising availability of data in the early twenty-first century, some suggested that researchers would no longer need to pursue explanations for human behaviour. One of them was Chris Anderson, then Wired editor, who in 2008 famously penned an article proclaiming the ‘end of theory’. ‘Who knows why people do what they do?’ he wrote. ‘The point is they do it, and we can track and measure it with unprecedented fidelity.’[72]

  We now have vast quantities of data on human activity; it’s been estimated that the amount of digital information in the world is doubling every couple of years, with much of it generated online.[73] Even so, there are a lot of things we still struggle to measure. Take those studies of obesity or smoking contagion, which show just how difficult it can be to pick apart transmission processes. Our inability to measure behaviour isn’t the only problem. In a world of clicks and shares, it turns out we’re not always measuring what we think we’re measuring.

  At first glance, clicks seem like a reasonable way to quantify interest in a story. More clicks mean more people are opening the article and potentially reading it. Surely writers who get more clicks should therefore be rewarded accordingly? Not necessarily. ‘When a measure becomes a target, it ceases to be a good measure,’ as economist Charles Goodhart reportedly once said.[74] Rewarding success based on a simple performance metric creates a feedback loop: people start chasing the metric rather than the underlying quality it is trying to assess.

  It’s a problem that can occur in any field. In the run-up to the 2008 financial crisis, banks paid bonuses to traders and salesmen based on their recent profits. This encouraged trading strategies that would reap benefits in the short term, with little regard for the future. Metrics have even shaped literature. When Alexandre Dumas first wrote The Three Musketeers in serialised form, his publisher paid him by the line. Dumas therefore added the servant character Grimaud, who spoke in short sentences, to stretch out the text (then killed him off when the publisher said that short lines didn’t count).[75]

  Relying on measurements like clicks or likes can give a misleading impression of how people are truly behaving. During 2007–8, over 1.1 million people joined the ‘Save Darfur’ cause on Facebook, which aimed to raise money and attention in response to the conflict in Sudan. A few of the new members donated and recruited others, but most did nothing. Of the people who joined, only 28 per cent recruited someone else, and a mere 0.2 per cent donated.[76]

  Despite these measurement issues, there has been a growing focus on making stories clickable and shareable. Such packaging can be highly effective. When researchers at Columbia University and the French National Institute looked at mainstream news articles mentioned by Twitter users, they found that almost 60 per cent of the links were never clicked on.[77] But this didn’t stop some of the stories spreading: users shared thousands of posts featuring one of these never-clicked-on links. Evidently, many of us are happier to share something than to read it.

  Perhaps it’s not that surprising, given that certain types of behaviour require more effort than others. Dean Eckles, a former data scientist at Facebook, points out that it doesn’t take much to get people to interact with social media in simple ways. ‘That’s a behaviour that’s relatively easy to produce,’ he said.[78] ‘The behaviour we’re talking about is whether your friends like or comment on the post.’ Because people don’t have to put much effort into performing such actions, it’s much easier to get them to act. ‘It’s a light touch nudge for an easy to accomplish, low-cost behaviour.’

  This creates a challenge for marketers. An advertising campaign might generate a lot of likes and clicks, but this isn’t quite the behaviour they’re interested in. They don’t just want people to interact with their content; they eventually want people to buy their product or believe in their message. Just as people with more followers won’t necessarily generate larger cascades, content that’s more clickable or shareable won’t automatically generate more revenue or advocacy.

  When we’re faced with a new disease outbreak, there are generally two things we want to know. What are the main routes of transmission? And which of these routes should we target to control the infection? Marketers face a similar task when designing a campaign. First, they need to know the ways someone can be exposed to a message; then they need to decide which of these routes to target. The difference, of course, is that whereas health agencies spend money to block the crucial paths of transmission, advertising agencies put money into expanding them.

  Ultimately, it’s a question of cost-effectiveness. Whether we’re dealing with a disease outbreak or marketing campaign, we want to find the best way to allocate a limited budget. The problem is that historically it’s not always been clear which path leads to which outcome. ‘Half the money I spend on advertising is wasted; the trouble is I don’t know which half,’ as marketing pioneer John Wanamaker supposedly once said.[79]

  Modern marketing has tried to tackle this problem by linking the ads people see to the actions they take afterwards. In recent years, most major websites have employed ad tracking; if companies advertise on them, they know if we saw the ads as well as whether we browsed or bought anything afterwards. Likewise, if we take an interest in their product, a company can follow us around the internet, showing us more ads.[80]

  When we click on a website link, we often become the subject of a high-speed bidding war. Within about 0.03 seconds, the website’s server will gather all the information it has about us and send it to its ad provider. The provider then shows this information to a group of automated traders acting on behalf of advertisers. After another 0.07 seconds, the traders will have bid for the right to show us an advert. The ad provider selects the winning bid and sends the advert to our browser, which slots the advert into the webpage as it loads on the screen.[81]
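  A stripped-down sketch of that auction step might look like the following (the bidder names, bidding rules and the second-price payment are illustrative assumptions, not the workings of any particular ad exchange):

    import random

    def run_auction(user_profile, bidders):
        # Each automated trader turns what it knows about the user into a bid.
        bids = {name: strategy(user_profile) for name, strategy in bidders.items()}
        winner = max(bids, key=bids.get)
        # Second-price rule (a common convention): the winner pays the runner-up's bid.
        price = sorted(bids.values())[-2] if len(bids) > 1 else bids[winner]
        return winner, price

    user = {'interests': ['travel', 'fitness'], 'bought_recently': False}
    bidders = {
        'airline_campaign': lambda u: 2.50 if 'travel' in u['interests'] else 0.10,
        'gym_campaign':     lambda u: 1.80 if 'fitness' in u['interests'] else 0.10,
        'generic_retail':   lambda u: round(random.uniform(0.20, 0.80), 2),
    }
    print(run_auction(user, bidders))  # e.g. ('airline_campaign', 1.8)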

  People don’t always realise that websites work in this way. In March 2013, the UK Labour party tweeted a link to a new press release criticising then Education Secretary Michael Gove. One Conservative MP responded by tweeting about the choice of advert on Labour’s website. ‘I know Labour are short of cash but having an invitation to “Date Arab girls” at top of your press release?’ he wrote. Unfortunately for the MP, other users pointed out that the Labour page featured targeted advertising: the offer on display was likely to depend on a user’s specific online activity.[82]

  Some of the most advanced tracking has cropped up in places we might least expect it. To investigate the extent of online targeting, journalism researcher Jonathan Albright spent early 2017 visiting over a hundred extreme propaganda websites, the sort of places that are full of conspiracy theories, pseudoscience, and far-right political views. Most of the websites looked incredibly amateurish, the sort of thing a beginner would put together. But digging behind the scenes, Albright found that they concealed extremely sophisticated tracking tools. The websites were collecting detailed data on personal identity, browsing behaviour, even mouse movements. That allowed them to follow susceptible users, feeding them even more extreme content. It wasn’t what users could see that made these websites so influential; it was the data harvesting that they couldn’t.[83]

  How much is our online data actually worth? Researchers have estimated that users who opt out of sharing their browsing data are worth about 60 per cent less to advertisers on Facebook. Based on Facebook’s revenue in 2019, this implies that data on the behaviour of the average American user is worth at least $48 per year. Meanwhile, Google reportedly paid Apple $12bn to be the default iPhone search engine for 2019. With an estimated one billion iPhones in use, this would suggest Google value our search activity at about $12 per device.[84]
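  The arithmetic behind those figures is simple to write out. The first calculation below just divides the reported payment by the estimated number of devices; the second backs out what annual ad revenue per American user would be consistent with the ‘worth $48, 60 per cent less without data’ claim, and is a rough inference of my own rather than a figure from those estimates:

    # Google's reported payment to Apple, spread across estimated iPhones in use.
    google_payment, iphones_in_use = 12e9, 1e9
    print(google_payment / iphones_in_use)   # roughly $12 per device per year

    # If opting out of data sharing makes a user worth about 60 per cent less,
    # and that lost value is about $48 per year, the implied total ad value of
    # an average American user is:
    data_share, data_value = 0.60, 48
    print(data_value / data_share)           # roughly $80 per user per year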

 
