The Rules of Contagion

by Adam Kucharski


  Finally, we have the most dangerous form of fake news: disinformation. A common view of disinformation is that it’s there to make you believe something false. However, the reality is subtler than this. When the KGB trained their foreign agents during the Cold War, they taught them how to create contradictions in public opinion and undermine confidence in accurate news.[109] This is what disinformation means. It’s not there to persuade you that false stories are true, but to make you doubt the very notion of truth. The aim is to shift facts around, making the reality difficult to pin down. And the KGB wasn’t just good at seeding disinformation; they knew how to get it amplified. ‘In the quaint old days when KGB spies deployed the tactic, the goal was pickup by a major media property,’ as DiResta put it, ‘because that provided legitimization and took care of distribution.’[110]

  In the past decade or so, a handful of online communities have been particularly successful at getting their messages picked up. One early example emerged in September 2008, when a user posted on the Oprah Winfrey Show’s online message board. The user claimed to represent a massive paedophile network, with over 9,000 members. But the post wasn’t quite what it seemed: the phrase ‘over 9,000’ – a reference to a fighter shouting about their opponent’s power level in the cartoon Dragon Ball Z – was actually a meme from 4chan, an anonymous online message board popular with trolls. To the delight of 4chan users, Winfrey took the paedophilia claim seriously and read out the phrase on air.[111]

  Online forums like 4chan – and others such as Reddit and Gab – in effect act as incubators for contagious memes. When users post images and slogans, it can spark large numbers of new variants. These newly mutated memes spread and compete on the forums, with the most contagious ones surviving and the weaker ones disappearing. It’s a case of ‘survival of the fittest’, the same sort of process that occurs in biological evolution.[112] Although it isn’t anything like the millennia-long timescales that pathogens have had, this crowd-sourced evolution can still give online content a major advantage.

  One of the most successful evolutionary tricks honed by trolls has been to make memes absurd or extreme, so it’s unclear whether they are serious or not. This veneer of irony can help unpleasant views spread further than they would otherwise. If users take offence, the creator of the meme can claim it was a joke; if users assume it was a joke, the meme goes uncriticised. White supremacist groups have also adopted this tactic. A leaked style guide for the Daily Stormer website advised its writers to keep things light to avoid putting off readers: ‘generally, when using racial slurs, it should come across as half-joking.’[113]

  As memes rise in prominence, they can become an effective resource for media-savvy politicians. In October 2018, Donald Trump adopted the slogan ‘Jobs Not Mobs’, claiming that Republicans favoured the economy over immigration. When journalists traced the idea to its source, they found that the meme had probably originated on Twitter. It had then spent time evolving on Reddit forums, becoming catchier in the process, before spreading more widely.[114]

  It’s not just politicians who can pick up on fringe content. Online rumours and misinformation have spurred attacks on minority groups in Sri Lanka and Myanmar, as well as outbreaks of violence in Mexico and India. At the same time, disinformation campaigns have worked to stir up both sides of a dispute. During 2016 and 2017, Russian troll groups reportedly created multiple Facebook events, with the aim of getting opposing crowds to organise far-right protests and counter-protests.[115] Disinformation around specific topics like vaccination can also feed into wider social unrest; mistrust of science tends to be associated with mistrust in government and the justice system.[116]

  The spread of harmful information is not a new problem. Even the term ‘fake news’ has emerged before, briefly becoming popular in the late 1930s.[117] But the structure of online networks has made the issue faster, larger and less intuitive. Like certain infectious diseases, information can also evolve to spread more efficiently. So what can we do about it?

  The Great East Japan Earthquake was the largest in the country’s history. It was powerful enough to shift the Earth on its axis by several inches, with forty-metre-high tsunami waves following soon after. Then the rumours started. Three hours after the earthquake hit on 11 March 2011, a Twitter user claimed that poisonous rain might fall because a gas tank had exploded. The explosion had been real, but the dangerous rain wasn’t. Still, it didn’t stop the rumours. Within a day, thousands of people had seen and shared the false warning.[118]

  In response to the rumour, the government in the nearby city of Urayasu tweeted a correction. Despite the false information having a head start, the correction soon caught up. By the following evening, more users had retweeted the correction than the original rumour. According to a group of Tokyo-based researchers, a quicker response could have been even more successful. Using mathematical models, they estimated that if the correction had been issued just two hours earlier, the rumour outbreak would have been 25 per cent smaller.
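  To see how such an estimate can be made in principle, here is a minimal sketch of a rumour-versus-correction calculation in Python. It is not the Tokyo group’s model, and every number in it is invented for illustration; the point is simply that the earlier the correction is seeded, the fewer people are left for the rumour to reach.

    # Toy model: a rumour and an official correction compete for the same audience.
    # All parameter values are invented for illustration.
    def rumour_total(correction_hour, hours=48, step=2, population=100_000,
                     rumour_r=2.0, correction_r=2.0, rumour_seed=10,
                     correction_reach=5_000):
        susceptible = float(population)          # people who have seen neither message
        rumour, correction = float(rumour_seed), 0.0
        total_rumour = rumour
        for t in range(0, hours, step):
            if t == correction_hour:
                # An official account reaches thousands of followers at once,
                # making them 'immune' to the rumour before it arrives.
                correction = float(correction_reach)
                susceptible = max(susceptible - correction_reach, 0.0)
            frac = susceptible / population      # chance a contact hasn't seen either message
            new_rumour = rumour * rumour_r * frac
            new_correction = correction * correction_r * frac
            susceptible = max(susceptible - new_rumour - new_correction, 0.0)
            rumour, correction = new_rumour, new_correction
            total_rumour += new_rumour
        return total_rumour

    late, early = rumour_total(correction_hour=12), rumour_total(correction_hour=10)
    print(f"Correcting two hours earlier shrinks the rumour outbreak by about "
          f"{100 * (1 - early / late):.0f} per cent in this toy model.")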

  Prompt corrections might not stop an outbreak, but they can slow it down. Researchers at Facebook have found that if users are quick to point out that their friend has shared a hoax – such as a get-rich-quick scheme – there’s an up to 20 per cent chance the friend will delete the post.[119] In some cases, companies have deliberately slowed down transmission by altering the structure of their app. After a series of attacks in India linked to false rumours, WhatsApp made it harder for users to forward content. Rather than being able to share messages with over a hundred people, users in India would be limited to just five.[120]

  Notice how these counter-measures work by targeting different aspects of the reproduction number. WhatsApp reduced the opportunities for transmission. Facebook users persuaded their friends to remove a post, which reduced the duration of infectiousness. Urayasu City Hall reduced susceptibility, by exposing thousands of people to the correct information before they saw the rumour. As with diseases, some parts of the reproduction number may be easier to target than others. In 2019, having struggled to remove anti-vaccination content completely (which would have curbed the duration of infectiousness), Pinterest announced they’d blocked it from appearing in searches (i.e. removing opportunities for transmission).[121]
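  One rough way to see why each of these levers matters is to treat the reproduction number as the product of the four ingredients mentioned here and in the next paragraph: the duration of infectiousness, the opportunities for transmission, the probability of transmission per opportunity, and the susceptibility of the audience. The sketch below uses made-up numbers purely to show that shrinking any one factor shrinks R by the same proportion.

    # Illustrative only: R as a product of duration, opportunities, transmission
    # probability and susceptibility. All values below are made up.
    def reproduction_number(duration_days, opportunities_per_day,
                            transmission_prob, susceptibility):
        return duration_days * opportunities_per_day * transmission_prob * susceptibility

    baseline = reproduction_number(2.0, 10, 0.10, 1.0)            # R = 2.0
    fewer_forwards = reproduction_number(2.0, 2, 0.10, 1.0)       # crude stand-in for a forwarding cap
    quicker_deletion = reproduction_number(1.0, 10, 0.10, 1.0)    # hoax posts taken down sooner
    pre_corrected = reproduction_number(2.0, 10, 0.10, 0.5)       # half the audience saw the correction first

    print(baseline, fewer_forwards, quicker_deletion, pre_corrected)
    # 2.0 0.4 1.0 1.0 -- each measure targets a different part of the product.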

  Then there’s the final aspect of the reproduction number: the inherent transmissibility of an idea. Recall how there are media guidelines for reporting events like suicides, to limit the potential for contagion effects. Researchers like Whitney Phillips have suggested we treat manipulative information in the same way, avoiding coverage that spreads the problem further. ‘As soon as you’re reporting on a particular hoax or some other media manipulation effort, you’re legitimising it,’ she said, ‘and you’re essentially providing a blueprint for what somebody down the road knows is going to work.’[122]

  Recent events have shown that some media outlets still have a long way to go. In the aftermath of the 2019 mosque shootings in Christchurch, New Zealand, several outlets ignored well-established guidelines for reporting on terrorist attacks. Many published the shooter’s name, detailed his ideology, or even displayed his video and linked to his manifesto. Worryingly, this information caught on: the stories that were widely shared on Facebook were far more likely to have broken reporting guidelines.[123]

  This shows we need to rethink how we interact with malicious ideas, and ask who really benefits when we give them our attention. A common argument for featuring extreme views is that they would spread anyway, even without media amplification. But studies of online contagion have found the opposite: content rarely goes far without broadcast events to amplify it. If an idea becomes popular, it’s generally because well-known personalities and media outlets have helped it spread, whether deliberately or inadvertently.

  Unfortunately, the changing nature of journalism has made it harder to resist media manipulators. An increasing desire for online shares and clicks has left many outlets open to exploitation by people who can deliver contagious ideas, and the attention that comes with them. That attracts trolls and manipulators, who have a much better understanding of online contagion than most. From a technological point of view, most manipulators aren’t abusing the system. They’re following its incentives. ‘What’s insidious about it is that they use social media in precisely the ways it was designed to be used,’ Phillips said. In her research, she has interviewed dozens of journalists, many of whom felt uneasy knowing they are profiting from stories about extreme views. ‘It’s really good for me, but really bad for the country,’ one reporter told her. To reduce the potential for contagion, Phillips argues that the manipulation process needs to be discussed alongside the story. ‘Making clear in the reporting that the story itself is part of an amplification chain, that the journalist is part of an amplification chain, that the reader is part of an amplification chain – these things need to be really foregrounded in coverage.’

  Although journalists can play a large role in outbreaks of information, there are other links in the transmission chain too, most notably social media platforms. But studying contagion on these platforms is not as straightforward as reconstructing a sequence of disease cases or gun incidents. The online ecosystem has a massive number of dimensions, with trillions of social interactions and a huge array of potential transmission routes. Despite this complexity, though, proposed solutions to harmful information are often one-dimensional, with suggestions that we need to do more of something or less of something.

  As with any complex social question, there’s unlikely to be a simple, definitive answer. ‘I think the shift we’re going through is akin to what happened in the United States on the war on drugs,’ said Brendan Nyhan.[124] ‘We’re moving from “this is a problem that we have to solve” to “this is a chronic condition we have to manage”. The psychological vulnerabilities that make humans prone to misperceptions aren’t going to go away. The online tools that help it circulate aren’t going to go away.’

  What we can do, though, is try and make media outlets, political organisations, and social media platforms – not to mention ourselves – more resistant to manipulation. To start with, that means having a much better understanding of the transmission process. It’s not enough to concentrate on a few groups, or countries, or platforms. Like disease outbreaks, information rarely respects boundaries. Just as the 1918 ‘Spanish flu’ was blamed on Spain because it was the only country reporting cases, our picture of online contagion can be skewed by where we see outbreaks. In recent years, researchers have published almost five times more studies looking at contagion on Twitter than on Facebook, despite the latter having seven times more users.[125] This is because, historically, it’s been much easier for researchers to access public Twitter data than to see what’s spreading on closed apps like Facebook or WhatsApp.

  There’s hope the situation could change – in 2019, Facebook announced it was partnering with twelve teams of academics to study the platform’s effect on democracy – but we still have a long way to go to understand the wider information ecosystem.[126] One of the reasons online contagion is so hard to investigate is that it’s been difficult for most of us to see what other people are actually exposed to. A couple of decades ago, if we wanted to see what campaigns were out there, we could pick up a newspaper or turn on our televisions. The messages themselves were visible, even if their impact was unclear. In outbreak terms, everyone could see the sources of infection, but nobody really understood how much transmission was happening, or which infection came from which source. Contrast this with the rise of social media, and manipulation campaigns that follow specific users around the internet. When it comes to spreading ideas, groups seeding information in recent years have had a much better idea about the paths of transmission, but the sources of infection have been invisible to everyone else.[127]

  Uncovering and measuring the spread of misinformation and disinformation will be crucial if we want to design effective counter-measures. Without a good understanding of contagion, there’s a risk of either blaming the wrong source, ‘bad air’-style, or proposing simplistic strategies like abstinence, which – as with STI prevention – might work in theory but not in practice. By accounting for the transmission process, we’ll have a better chance of avoiding epidemiological errors like these.

  We’ll also be able to take advantage of knock-on benefits. When something is contagious, a control measure will have both a direct and indirect effect. Think about vaccination. Vaccinating someone has a direct effect because they now won’t get infected; it also has an indirect effect because they won’t pass an infection on to others. When we vaccinate a population, we therefore benefit from both the direct and indirect effects.

  The same is true of online contagion. Tackling harmful content will have a direct effect – preventing a person from seeing it – as well as an indirect effect, preventing them spreading it to others. This means well-designed measures may prove disproportionately effective. A small drop in the reproduction number can lead to a big reduction in the size of an outbreak.
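  A standard textbook relation for simple epidemic models makes this point concrete (it applies to an idealised, well-mixed population rather than to any particular platform): the fraction of people eventually affected, z, satisfies z = 1 - exp(-R * z). Near the threshold of R = 1, even a modest drop in R shrinks the eventual outbreak considerably.

    # Final-size relation for a simple epidemic model: z = 1 - exp(-R * z).
    # Illustrates the 'small drop in R, big drop in outbreak size' point.
    import math

    def final_fraction_affected(R, iterations=500):
        z = 0.9                                  # start high and iterate to the fixed point
        for _ in range(iterations):
            z = 1 - math.exp(-R * z)
        return z

    for R in (1.5, 1.2):
        before = final_fraction_affected(R)
        after = final_fraction_affected(0.9 * R)   # a 10 per cent drop in R
        print(f"R = {R}: about {before:.0%} affected; with R 10% lower, about {after:.0%}")
    # R = 1.5: about 58% affected; with R 10% lower, about 47%
    # R = 1.2: about 31% affected; with R 10% lower, about 14%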

  ‘Is spending time on social media bad for us?’ asked two Facebook researchers in late 2017. David Ginsburg and Moira Burke had weighed up the evidence about how social media use affects wellbeing. The results, published by Facebook, suggested that not all interactions were beneficial. For example, Burke’s research had previously found that receiving genuine messages from close friends seemed to improve users’ wellbeing, but receiving casual feedback – such as likes – did not. ‘Just like in person, interacting with people you care about can be beneficial,’ Ginsburg and Burke suggested, ‘while simply watching others from the sidelines may make you feel worse.’[128]

  The ability to test common theories about human behaviour is a big advantage of online studies. In the past decade or so, researchers have used massive datasets to question long-standing ideas about the spread of information. This research has already challenged misconceptions about online influence, popularity, and success. It’s even overturned the very concept of something ‘going viral’. Online methods are also finding their way back into disease analysis; by adapting techniques used to study online memes, malaria researchers have found new ways to track the spread of disease in Central America.[129]

  Social media might be the most prominent way our interactions have changed, but it’s not the only network that’s been growing in our lives. As we shall see in the next chapter, technological connections are expanding in other ways, with new links permeating through our daily routines. Such technology can be hugely beneficial, but it can also create new risks. In the world of outbreaks, every new connection is a potential new route of contagion.

  6

  How to own the internet

  When a major cyber-attack took down websites including Netflix, Amazon, and Twitter, the attackers included kettles, fridges, and toasters. During 2016, a piece of software called ‘Mirai’ had infected thousands of smart household devices worldwide. These items increasingly allow users to control things like temperature via online apps, creating connections that are vulnerable to infection. Once infected with Mirai, the devices had formed a vast network of bots, creating a powerful online weapon.[1]

  On 21 October that year, the world discovered that the weapon had been fired. The hackers behind the botnet had chosen to target Dyn, a popular provider of domain name system (DNS) services. These services are crucial for navigating the web. They convert familiar web addresses – like Amazon.com – into a numeric IP address that tells your computer where to find the site on the web. Think of it like a phonebook for websites. The Mirai bots attacked Dyn by flooding it with unnecessary requests, bringing the system to a halt. Because Dyn provides details for several high-profile websites, it meant people’s computers no longer knew how to access them.
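  For a concrete sense of the lookup step that services like Dyn handle, here is a small Python sketch. It simply asks whichever resolver your own machine is configured to use; the domain names are just examples drawn from the attack’s victims.

    # Resolve familiar names into the numeric addresses a computer actually connects to.
    import socket

    for host in ("amazon.com", "twitter.com"):
        results = socket.getaddrinfo(host, 443, proto=socket.IPPROTO_TCP)
        addresses = sorted({result[4][0] for result in results})
        print(host, "->", addresses)
    # If the resolver behind this lookup is knocked offline, the names stop
    # resolving and the sites become unreachable, even though their servers are fine.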

  Systems like Dyn handle a lot of requests every day without problems, so it takes a massive effort to overwhelm them. That effort came from the sheer scale of the Mirai network. Mirai was able to pull off its attack – one of the largest in history – because the software wasn’t infecting the usual culprits. Traditionally, botnets have consisted of computers or internet routers, but Mirai had spread through the ‘internet of things’; as well as kitchenware, it had infected devices like smart TVs and baby monitors. These items have a clear advantage when it comes to organising mass cyber-attacks: people turn off their computers at night, but often leave other electronics on. ‘Mirai was an insane amount of firepower,’ one FBI agent later told Wired magazine.[2]

  The scale of the Mirai attack showed just how easily artificial infections can spread. Another high-profile example would emerge a few months later, on 12 May 2017, when a piece of software called ‘WannaCry’ started holding thousands of computers to ransom. First it locked users out of their files, then displayed a message telling users they had three days to transfer $300 worth of Bitcoin to an anonymous account. If people refused to pay up, their files would be permanently locked. WannaCry would end up causing widespread disruption. When it hit the computers of the UK National Health Service, it resulted in the cancellation of 19,000 appointments. In a matter of days, over a hundred countries would be affected, leading to over $1bn worth of damage.[3]

  Unlike outbreaks of social contagion or biological infections, which may take days or weeks to grow, artificial infections can operate on much faster timescales. Outbreaks of malicious software – or ‘malware’ for short – can spread widely within a matter of hours. In their early stages, the Mirai and WannaCry outbreaks were both doubling in size every 80 minutes. Other malware can spread even faster, with some outbreaks doubling in a matter of seconds.[4] However, computational contagion hasn’t always been so rapid.
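  A quick back-of-the-envelope calculation shows why an 80-minute doubling time is so alarming. The sketch below ignores saturation – real outbreaks slow down as vulnerable machines run out – but the compounding speaks for itself.

    # Unchecked exponential growth from a fixed doubling time (illustration only).
    def growth_factor(minutes_elapsed, doubling_minutes=80):
        return 2 ** (minutes_elapsed / doubling_minutes)

    print(f"after 8 hours:  x{growth_factor(8 * 60):,.0f}")    # 2**6  = 64
    print(f"after 24 hours: x{growth_factor(24 * 60):,.0f}")   # 2**18 = 262,144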

 
