The Rules of Contagion


by Adam Kucharski


  The debate around influencers shows we need to think about how we are exposed to information online. Why do we adopt some ideas but not others? One reason is competition: opinions, news, and products are all fighting for our attention. A similar effect occurs with biological contagion. The pathogens behind diseases like flu and malaria are actually made up of multiple strains, which continuously compete for susceptible humans. Why doesn’t one strain end up dominating everywhere? Our social behaviour probably has something to do with it. If people gather into distinct tight-knit cliques, it can allow a wider range of strains to linger in a population. In essence, each strain can find its own home territory, without having to constantly compete with others.[11] Such social interactions would also explain the huge diversity in ideas and opinions online. From political stances to conspiracy theories, social media communities frequently cluster around similar worldviews.[12] This creates the potential for ‘echo chambers’, in which people rarely hear views that contradict their own.

  One of the most vocal online communities is the anti-vaccination movement. Members often congregate around the popular, but baseless, claim that the measles-mumps-rubella (MMR) vaccine causes autism. The rumours started in 1998 with a scientific paper – since discredited and retracted – led by Andrew Wakefield, who was later struck off the UK medical register. Unfortunately, the British media picked up on Wakefield’s claims and amplified them.[13] This led to a decline in MMR vaccination, followed by several large outbreaks of measles years later, when unvaccinated children began entering the bustling environments of schools and universities.

  MMR rumours were widespread in the UK during the early 2000s, but media reports were very different on the other side of the Channel. While MMR was getting bad press in the UK, the French media were speculating about an unproven link between the hepatitis B vaccine and multiple sclerosis. More recently, there has been negative coverage of the HPV vaccine in the Japanese media, while a twenty-year-old rumour about tetanus vaccines resurfaced in Kenya.[14]

  Scepticism of medicine isn’t new. People have been questioning disease prevention methods for centuries. Before Edward Jenner introduced a vaccine against smallpox in 1796, some would use a technique called ‘variolation’ to reduce their risk of disease. Developed in sixteenth-century China, variolation exposed healthy people to the dried scabs or pus of smallpox patients. The idea was to induce a mild form of infection, which would provide immunity to the virus. The procedure still carried a risk – around 2 per cent of variolations resulted in death – but it was much smaller than the 30 per cent chance of death that smallpox usually came with.[15]

  Variolation became popular in eighteenth-century England, but was the risk worth it? French writer Voltaire observed that other Europeans thought that the English were fools and madmen to use the method. ‘Fools, because they give their children the smallpox to prevent their catching it; and madmen, because they wantonly communicate a certain and dreadful distemper to their children, merely to prevent an uncertain evil.’ He noted that the criticism went the other way too. ‘The English, on the other side, call the rest of the Europeans cowardly and unnatural. Cowardly, because they are afraid of putting their children to a little pain; unnatural, because they expose them to die one time or other of the small-pox.’[16] (Voltaire, himself a survivor of smallpox, supported the English approach.)

  In 1759, mathematician Daniel Bernoulli decided to try and settle the debate. To work out whether the risk of smallpox infection outweighed the risk from variolation, he developed the first-ever outbreak model. Based on patterns of smallpox transmission, he estimated that variolation would increase life expectancy so long as the risk of death from the procedure was below 10 per cent, which it was.[17]
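
  To get a feel for the kind of calculation involved, here is a minimal sketch in the spirit of Bernoulli’s question, not a reconstruction of his actual 1759 model. Every number in it (annual infection risk, fatality rates, background mortality, time horizon) is an illustrative assumption.

```python
# A toy three-state calculation: susceptible, immune (survived smallpox or
# variolation), dead. All parameters are illustrative assumptions, not
# Bernoulli's actual figures.

def expected_years(variolate, years=60, p_infect=0.05,
                   cfr_smallpox=0.30, cfr_variolation=0.02,
                   p_death_other=0.02):
    """Expected years lived over a fixed horizon, with or without variolation."""
    if variolate:
        # the procedure carries an up-front risk, but survivors are immune
        susceptible, immune = 0.0, 1 - cfr_variolation
    else:
        susceptible, immune = 1.0, 0.0
    total = 0.0
    for _ in range(years):
        total += susceptible + immune        # years lived this year
        caught = susceptible * p_infect      # chance of catching smallpox
        died = caught * cfr_smallpox         # ...and dying of it
        susceptible -= caught
        immune += caught - died
        # background mortality applies to everyone still alive
        susceptible *= 1 - p_death_other
        immune *= 1 - p_death_other
    return total

print(expected_years(variolate=False))   # expected years without variolation
print(expected_years(variolate=True))    # expected years with variolation
```

  Comparing the two printed numbers poses the same kind of question Bernoulli asked: whether the small up-front risk of the procedure buys more expected years of life than the ongoing risk of natural infection.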

  For modern vaccines, the balancing act is generally far clearer. On one side, we have overwhelmingly safe, effective vaccines like MMR; on the other, we have potentially deadly infections like measles. Widespread refusal of vaccination therefore tends to be a luxury, a side effect of living in places that – thanks to vaccination – have seen little of such infections in recent decades.[18] One 2019 survey found that European countries tended to have much lower levels of trust in vaccines compared to those in Africa and Asia.[19]

  Although rumours about vaccines have traditionally been country-specific, our increasing digital connectedness is changing that. Information can now spread quickly online, with automated translations helping myths about vaccination cross language barriers.[20] The resulting decline in vaccine confidence could have dire consequences for children’s health. Because measles is so contagious, at least 95 per cent of a population needs to be vaccinated to have a hope of preventing outbreaks.[21] In places where anti-vaccination beliefs have spread successfully, disease outbreaks are now following. In recent years, dozens of people have died of measles in Europe, deaths that could easily have been prevented with better vaccination coverage.[22]
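
  One common way to arrive at a figure like 95 per cent is a standard piece of outbreak arithmetic: an epidemic fades once each case infects fewer than one other person, which happens when the immune fraction exceeds 1 - 1/R0. The sketch below uses commonly quoted, but here assumed, values for measles.

```python
# Back-of-the-envelope herd immunity threshold for measles.
# R0 and vaccine effectiveness are assumed illustrative values.
R0 = 15                   # measles R0 is often quoted in the range 12-18
threshold = 1 - 1 / R0    # immune fraction at which each case infects < 1 other
efficacy = 0.97           # assumed effectiveness of two vaccine doses
coverage = threshold / efficacy   # vaccination coverage needed to reach it
print(f"immune fraction needed: {threshold:.0%}")   # roughly 93%
print(f"coverage needed:        {coverage:.0%}")    # roughly 96%
```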

  The emergence of such movements has drawn attention to the possibility of echo chambers online. But how much have social media algorithms actually changed our interaction with information? After all, we share beliefs with people we know in real life as well as online. Perhaps the spread of information online is just a reflection of an echo chamber that was already there?

  On social media, three main factors influence what we read: whether one of our contacts shares an article; whether that content appears in our feed; and whether we click on it. According to data from Facebook, all three factors can affect our consumption of information. When the company’s data science team examined political opinions among US users during 2014–2015, they found that people tended to be exposed to views that were similar to theirs, much more so than they would have been if they had picked their friends at random. Of the content that these friends posted, the Facebook algorithm – which decides what appears on users’ News Feeds – filtered out another 5–8 per cent of opposing political views. And of the content people saw, they were less likely to click on articles that went against their political stance. Users were also far more likely to click on posts that appeared at the top of their feed, showing how intensely content has to compete for attention. This suggests that if echo chambers exist on Facebook, they start with our friendship choices but can then be exaggerated by the News Feed algorithm.[23]

  What about the information we get from other sources? Is this similarly polarised? In 2016, researchers at Oxford University, Stanford University and Microsoft Research looked at the web browsing patterns of 50,000 Americans. They found that the articles people saw on social media and search engines were generally more polarised than the ones they came across on their favourite news websites.[24] However, social media and search engines also exposed people to a wider range of views. The stories might have had stronger ideological content, but people got to see more of the opposing side as well.

  This might seem like a contradiction: if social media exposes us to a broader range of information than traditional news sources, why doesn’t it help dampen the echoes? Our reaction to online information might have something to do with it. When sociologists at Duke University got US volunteers to follow Twitter accounts with opposing views, they found that people tended to retreat further back into their own political territory afterwards.[25] On average, Republicans became more conservative and Democrats more liberal. This isn’t quite the same as the ‘backfire effect’ we saw in Chapter 3, because people weren’t having specific beliefs challenged, but it does imply that reducing political polarisation isn’t as simple as creating new online connections. As in real life, we may resent being exposed to views we disagree with.[26] Although having meaningful face-to-face conversations can help change attitudes – as they have with prejudice and violence – viewing opinions in an online feed won’t necessarily have the same effect.

  It’s not just online content itself that can create conflict; it’s also the context surrounding it. Online, we come across many ideas and communities we may not encounter much in real life. This can lead to disagreements if people post something with one audience in mind, only to have it read by another. Social media researcher danah boyd (she styles her name as lower case) calls it ‘context collapse’. In real life, a chat with a close friend may have a very different tone to a conversation with a co-worker or stranger: the fact that our friends know us well means there’s less potential for misinterpretation. Boyd points to events like weddings as another potential source of face-to-face context collapse. A speech that’s aimed at friends could leave family uncomfortable; most of us have sat through a best man’s anecdote that misfired in this way. But while weddings are (usually) carefully planned, online interactions may inadvertently include friends, family, co-workers, and strangers all in the same conversation. Comments can easily be taken out of context, with arguments emerging from the confusion.[27]

  According to boyd, underlying contexts can also change over time, particularly as people are growing up. ‘While teens’ content might be public, most of it is not meant to be read by all people across all time and all space,’ she wrote back in 2008. As a generation raised on social media grows older, this issue will come up more often. Viewed out of context, many historical posts – which can linger online for decades – will seem inappropriate or ill-judged.

  In some cases, people have decided to exploit the context collapse that occurs online. Although ‘trolling’ has become a broad term for online abuse, in early internet culture a troll was mischievous rather than hateful.[28] The aim was to provoke a sincere reaction to an implausible situation. Many of Jonah Peretti’s pre-BuzzFeed experiments used this approach, running a series of online pranks to attract attention.

  Trolling has since become an effective tactic in social media debates. Unlike real life, the interactions we have online are in effect on a stage. If a troll can engineer a seemingly overblown response from their opponent, it can play well with random onlookers, who may not know the full context. The opponent, who may well have a justified point, ends up looking absurd. ‘O Lord make my enemies ridiculous,’ as Voltaire once said.[29]

  Many trolls – of both the prankster and abuser kinds – wouldn’t behave this way in real life. Psychologists refer to it as the ‘online disinhibition effect’: shielded from face-to-face responses and real-life identities, people’s personalities may adopt a very different form.[30] But it isn’t simply a matter of a few people being trolls-in-waiting. Analysis of antisocial behaviour online has found that a whole range of people can become trolls, given the right circumstances. In particular, we are more likely to act like trolls when we are in a bad mood, or when others in the conversation are already trolling.[31]

  As well as creating new types of interactions, the internet is also creating new ways to study how things spread. In the field of infectious diseases, it’s generally not feasible to deliberately infect people to see how something spreads, as Ronald Ross tried to do with malaria in the 1890s. If modern researchers do run infection studies, they are usually small, expensive, and subject to careful ethical scrutiny. For the most part, we have to rely on observed data, using mathematical models to ask ‘what if?’ questions about outbreaks. The difference online is that it can be relatively cheap and easy to spark contagion deliberately, especially if you happen to run a social media company.

  If they had been paying close attention, thousands of Facebook users might have noticed that on 11 January 2012, their friends were slightly happier than usual. At the same time, thousands of others may have spotted that their friends were sadder than expected. But even if they did notice a change in what their friends were posting online, it wasn’t a genuine change in their friends’ behaviour. It was an experiment.

  Researchers at Facebook and Cornell University had wanted to explore how emotions spread online, so they’d altered people’s News Feeds for a week and tracked what happened. The team published the results in early 2014. By tweaking what people were exposed to, they found that emotion was contagious: people who saw fewer positive posts had on average posted less positive content themselves, and vice versa. In hindsight, this result might seem unsurprising, but at the time it ran counter to a popular notion. Before the experiment, many people believed that seeing cheerful content on Facebook could make us feel inadequate, and hence less happy.[32]

  The research itself soon sparked a lot of negative emotions, with several scientists and journalists questioning how ethical it was to run such a study. ‘Facebook manipulated users’ moods in secret experiment,’ read one headline in the Independent. One prominent argument was that the team should have obtained consent, asking whether users were happy to participate in the study.[33]

  Looking at how design influences people’s behaviour is not necessarily unethical. Indeed, medical organisations regularly run randomised experiments to work out how to encourage healthy behaviour. For example, they might send one type of reminder about cancer screening to some people and a different one to others, and then see which gets the best response.[34] Without these kinds of experiments, it would be difficult to work out how much a particular approach actually shifted people’s behaviour.

  If an experiment could have a detrimental effect on users, though, researchers need to consider alternatives. In the Facebook study, the team could have waited for a ‘natural experiment’ – like rainy weather – to change people’s emotional state, or they could have tried to answer the same research question with fewer users. Even so, it may still not have been feasible to ask for consent beforehand. In his book Bit by Bit, sociologist Matthew Salganik points out that psychological experiments can produce dubious results if people know what’s being studied. Participants in the Facebook study might have behaved differently if they had known from the outset that the research was about emotions. If psychology researchers do deceive participants in order to get a natural reaction, however, Salganik notes that they will often debrief them afterwards.

  As well as debating the ethics of the experiment, the wider research community also raised concerns about the extent of emotional contagion in the Facebook study. Not because it was big, but because it was so small. The experiment had shown that when a user saw fewer positive posts in their feed, the number of positive words in their status updates fell by an average of 0.1 per cent. Likewise, when there were fewer negative posts, negative words decreased by 0.07 per cent.

  One of the quirks of huge studies is that they can flag up very small effects, which wouldn’t be detectable in smaller studies. Because the Facebook study involved so many users, it was possible to identify incredibly small changes in behaviour. The study team argued that such differences were still relevant, given the size of the social network: ‘In early 2013, this would have corresponded to hundreds of thousands of emotion expressions in status updates per day.’ But some people remained unconvinced. ‘Even if you were to accept this argument,’ Salganik wrote, ‘it is still not clear if an effect of this size is important regarding the more general scientific question about the spread of emotions.’
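
  The statistics behind this point can be sketched directly: the smallest difference a study can reliably distinguish from noise shrinks roughly in proportion to one over the square root of the sample size. The baseline rate below is made up, and this is not the Facebook team’s actual analysis, just an illustration of how sheer scale turns tiny effects into ‘statistically significant’ ones.

```python
# Minimum detectable difference between two groups, as a function of sample
# size. The baseline rate is an assumption; this is not the study's own analysis.
from math import sqrt

p = 0.05   # assumed baseline rate of 'positive' words
for n in (1_000, 100_000, 10_000_000, 1_000_000_000):
    se = sqrt(2 * p * (1 - p) / n)   # standard error of a difference in rates
    detectable = 1.96 * se           # roughly the smallest difference reaching p < 0.05
    print(f"n = {n:>13,}: smallest detectable difference ~ {detectable:.6f}")
```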

  In studies of contagion, social media companies have a major advantage because they can monitor much more of the transmission process. In the Facebook emotion experiment, the researchers knew who had posted what, who had seen it, and what the effect was. External marketing companies don’t have this same level of access, so instead they have to rely on alternative measurements to estimate the popularity of an idea. For example, they might track how many people click on or share a post, or how many likes and comments it receives.

  What sort of ideas become popular online? In 2011, University of Pennsylvania researchers Jonah Berger and Katherine Milkman looked at which New York Times stories people e-mailed to others. They gathered three months of data – almost 7,000 articles in total – and recorded the features of each story, as well as whether it made the ‘most e-mailed’ list.[35] It turned out that articles that triggered an intense emotional response were more likely to be shared. This was the case both for positive emotions, such as awe, and negative ones like anger. In contrast, articles that evoked so-called ‘deactivating’ emotions like sadness were shared less often. Other researchers have found a similar emotional effect; people are more willing to spread stories that evoke feelings of disgust, for example.[36]

  Yet emotions aren’t the only reason we remember stories. By accounting for the emotional content of the New York Times articles, Berger and Milkman could explain about 7 per cent of the variation in how widely stories were shared. In other words, 93 per cent of the variation was down to something else. This is because popularity doesn’t depend only on emotional content. Berger and Milkman’s analysis found that having an element of surprise or practical value could also influence an article’s shareability. As could the appearance of the story: an article’s popularity depended on when it was posted, what section of the website it was on, and who the author was. When the pair accounted for these additional characteristics, they could explain much more of the variation in popularity.
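
  As a rough illustration of what ‘explaining 7 per cent of the variation’ means, here is a toy regression on synthetic data. The feature names echo Berger and Milkman’s categories, but the numbers are invented and nothing here reflects their actual articles or model; the point is simply that R-squared measures how much of the spread in popularity a set of features accounts for, and that adding features raises it.

```python
# Variance explained (R squared) on synthetic article data - purely
# illustrative; the coefficients and noise level are made up.
import numpy as np

rng = np.random.default_rng(0)
n = 7000
emotion = rng.normal(size=n)       # intensity of emotional response
surprise = rng.normal(size=n)      # element of surprise
practical = rng.normal(size=n)     # practical value
shares = 0.3 * emotion + 0.4 * surprise + 0.4 * practical + rng.normal(size=n)

def r_squared(features, y):
    X = np.column_stack([np.ones(len(y))] + list(features))  # add an intercept
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ coef
    return 1 - resid.var() / y.var()

print(r_squared([emotion], shares))                        # one feature: small R^2
print(r_squared([emotion, surprise, practical], shares))   # all three: noticeably larger
```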

  It’s tempting to think we could – in theory, at least – sift through successful and unsuccessful content to identify what makes a highly contagious tweet or article. However, even if we manage to identify features that explain why some things are more popular, these conclusions may not hold for long. Technology researcher Zeynep Tufekci has pointed to the apparent shift in people’s interests as they use online platforms. On YouTube, for example, she suspected that the video recommendation algorithm might have been feeding unhealthy viewing appetites, pulling people further and further down the online rabbit hole. ‘Its algorithm seems to have concluded that people are drawn to content that is more extreme than what they started with – or to incendiary content in general,’ she wrote in 2018.[37] These shifting interests mean that unless new content evolves – becoming more dramatic, more evocative, more surprising – it will probably get less attention than its predecessors. Here, evolution isn’t about getting an advantage; it’s about survival.

 
