The Rules of Contagion

by Adam Kucharski


  Rosenberg had contrasted gun research to the progress made in reducing car-related deaths, an analogy later used by Barack Obama during his presidency. ‘With more research, we could further improve gun safety just as with more research we’ve reduced traffic fatalities enormously over the last 30 years,’ Obama said in 2016. ‘We do research when cars, food, medicine, even toys harm people so that we make them safer. And you know what, research, science, those are good things. They work.’[43]

  Cars have become much safer, but the industry was initially reluctant to accept suggestions that their vehicles needed improvements. When Ralph Nader published his 1965 book Unsafe at Any Speed, which presented evidence of dangerous design flaws, car companies attempted to smear him. They got private detectives to track his movements and hired a prostitute to try and seduce him.[44] Even the book’s publisher, Richard Grossman, was sceptical about the message. He thought it would be hard to market and probably wouldn’t sell very well. ‘Even if every word in it is true and everything about it is as outrageous as he says,’ Grossman later recalled, ‘do people want to read about that?’[45]

  It turned out that they did. Unsafe at Any Speed became a bestseller and calls to improve road safety grew, leading to seat belts and eventually features like airbags and antilock brakes. Even so, the evidence had taken a while to accumulate before Nader’s book appeared. In the 1930s, many experts thought it was safer to be thrown from a car during an accident, rather than be stuck inside.[46] For decades, manufacturers and politicians weren’t that interested in car safety research. After the publication of Unsafe at Any Speed, that changed. In 1965, there were around five deaths for every 100 million miles driven in the US; by 2014, this had fallen to around one.

  Before he died in 2017, Jay Dickey indicated that his views on gun research had shifted. He believed the CDC needed to look at gun violence. ‘We need to turn this over to science and take it away from politics,’ he told the Washington Post in 2015.[47] In the years following their 1996 clash, Dickey and Mark Rosenberg had become friends, taking time to listen and find common ground on the need for gun research. ‘We won’t know the cause of gun violence until we look for it,’ they would later write in a joint opinion piece.

  Despite constraints on funding, some evidence about gun violence is available. In the early 1990s, before the Dickey Amendment, CDC-funded studies found that having a gun in the home increased the risk of homicide and suicide. The latter finding was particularly notable, given that around two-thirds of gun deaths in the US are from suicide. Opponents of this research have argued that such suicides might have occurred anyway, even if guns hadn’t been present.[48] But easy access to deadly methods can make a difference for what are often impulse decisions. In 1998, the UK switched from selling paracetamol in bottles to blister packs containing up to thirty-two tablets. The extra effort involved with blister packs seemed to deter people; in the decade after the packs were introduced, there was about a 40 per cent reduction in deaths from paracetamol overdoses.[49]

  Unless we understand where the risk lies, it’s very difficult to do anything about it. This is why research into violence is needed. Seemingly obvious interventions may turn out to have little effect in reality. Likewise, there may be policies – like Cure Violence – that challenge existing approaches, but have the potential to reduce gun-related deaths. ‘Like motor vehicle injuries, violence exists in a cause-and-effect world; things happen for predictable reasons,’ wrote Dickey and Rosenberg in 2012.[50] ‘By studying the causes of a tragic – but not senseless – event, we can help prevent another.’

  It’s not just gun violence that we need to understand. So far, we’ve looked at frequently occurring events like shootings and domestic violence, which means there is – in theory, at least – a lot of data to study. But sometimes crime and violence happen as a one-off event, spreading rapidly through a population with devastating consequences.

  On the evening of Saturday 6 August 2011, London descended into what would become the first of five nights of looting, arson and violence. Two days earlier, police had shot and killed a suspected gang member in Tottenham, North London, sparking protests that evolved into riots and spread across the city. There would also be rioting in other UK cities, from Birmingham to Manchester.

  Crime researcher Toby Davies was living in the London district of Brixton at the time.[51] Although Brixton avoided the violence on the first night of the riots, it would end up being one of the worst affected areas. In the months following the riots, Davies and his colleagues at University College London decided to pick apart how such disorder could develop.[52] Rather than trying to explain how or why a riot starts, the team instead focused on what happens once it gets underway. In their analysis, they divided rioting into three basic decisions. The first was whether a person would participate in the riot or not. The researchers assumed this depended on what was happening nearby – much like a disease epidemic – as well as local socioeconomic factors. Once someone decided to participate, the second decision involved where to riot. Because a lot of the rioting and looting was concentrated in retail areas, the researchers adapted an existing model for how shoppers flow into such locations (several media outlets described the London riots as ‘violent shopping’[53]). Finally, their model included the possibility of arrest once a person arrived at the rioting site. This depended on the relative number of rioters and police, a metric Davies referred to as ‘outnumberedness’.
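
  To see how those three pieces fit together, here is a minimal sketch in Python. Everything in it – the number of people, the two hypothetical retail sites, the probabilities – is an illustrative assumption chosen for the example, not the UCL team’s actual model or data.

    import random

    random.seed(1)

    # Toy version of the three decisions: participate, pick a site, risk arrest.
    # All numbers and functional forms are illustrative assumptions.
    N_PEOPLE = 1000
    SITES = {"site_A": {"attractiveness": 5.0, "police": 40},
             "site_B": {"attractiveness": 2.0, "police": 15}}
    deprivation = [random.random() for _ in range(N_PEOPLE)]  # local socioeconomic factor
    state = ["home"] * N_PEOPLE       # each person is at home, rioting or arrested
    location = [None] * N_PEOPLE

    for step in range(12):
        active = state.count("rioting")
        for i in range(N_PEOPLE):
            if state[i] != "home":
                continue
            # Decision 1: participate? More likely if others are already rioting
            # (the contagion-like part) and if local deprivation is high.
            p_join = min(1.0, 0.5 * active / N_PEOPLE + 0.02 * deprivation[i])
            if random.random() < p_join:
                # Decision 2: where? Weighted by retail attractiveness, echoing
                # the shopping-flow model the researchers adapted.
                weights = [SITES[s]["attractiveness"] for s in SITES]
                state[i] = "rioting"
                location[i] = random.choices(list(SITES), weights=weights)[0]
        # Decision 3: arrest, depending on 'outnumberedness' at each site.
        for s in SITES:
            there = [i for i in range(N_PEOPLE)
                     if state[i] == "rioting" and location[i] == s]
            outnumberedness = len(there) / SITES[s]["police"]
            p_arrest = 0.1 / (1.0 + outnumberedness)   # a big crowd is harder to arrest
            for i in there:
                if random.random() < p_arrest:
                    state[i] = "arrested"
        counts = {s: sum(1 for i in range(N_PEOPLE)
                         if state[i] == "rioting" and location[i] == s)
                  for s in SITES}
        print(step, counts)

  In the real analysis the participation and arrest terms were informed by data from the 2011 riots; here they are simply made up to show how the three stages slot together.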

  The model could reproduce some of the broad patterns seen during the 2011 riots – such as the focus on Brixton – but it also showed the complexity of these types of events. Davies points out that the model was only a first step; there’s a lot more that needs to be done in this area of research. One big challenge is the availability of data. In their analysis, the UCL team only had information on the number of arrests for riot-related offences. ‘As you can imagine, it’s a very small and very biased subsample,’ Davies said. ‘It doesn’t capture who could potentially engage in rioting.’ In 2011, the rioters were also more diverse than might be expected, with groups transcending long-standing local rivalries. Still, one of the benefits of a model is that it can explore unusual situations and potential responses. For frequent crimes like burglary, police can introduce control measures, see what happens, then refine their strategy. However, this approach isn’t possible for rare events, which might only spark now and again. ‘Police don’t have riots to practise on every day,’ Davies said.

  For a riot to start, there need to be at least some people willing to join. ‘You cannot riot on your own,’ as crime researcher John Pitts put it. ‘A one-man riot is a tantrum.’[54] So how does a riot grow from a single person? In 1978, Mark Granovetter published a now classic study looking at how trouble might take off. He suggested that people might have different thresholds for rioting: a radical person might riot regardless of what others were doing, whereas a conservative individual might only riot if many others were. As an example, Granovetter suggested we imagine 100 people hanging around in a square. One person has a threshold of 0, meaning they’ll riot (or tantrum) even if nobody else does; the next person has a threshold of 1, so they will only riot if at least one other person does; the next person has a threshold of 2, and so on, increasing by one each time. Granovetter pointed out that this situation would lead to an inevitable domino effect: the person with a 0 threshold would start rioting, triggering the person with a threshold of 1, which would trigger the person with a threshold of 2. This would continue until the entire crowd was rioting.

  But what if the situation were slightly different? Say the person with a threshold of 1 had a threshold of 2. This time, the first person would start rioting, but there would be nobody else with a low enough threshold to be triggered. Although the crowds in each situation are near identical, the behaviour of one person could be the difference between a riot and a tantrum. Granovetter suggested personal thresholds could apply to other forms of collective behaviour too, from going on strike to leaving a social event.[55]
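
  Granovetter’s thought experiment is simple enough to run directly. The short Python sketch below (purely illustrative) follows the crowd of a hundred people he described, then repeats the exercise with the single threshold changed from 1 to 2.

    def cascade(thresholds):
        """Keep adding people whose threshold has been met until nobody new joins."""
        rioting = 0
        while True:
            new_total = sum(1 for t in thresholds if t <= rioting)
            if new_total == rioting:
                return rioting
            rioting = new_total

    uniform = list(range(100))              # thresholds 0, 1, 2, ..., 99
    tweaked = [0, 2] + list(range(2, 100))  # the same crowd, but nobody has threshold 1

    print(cascade(uniform))   # 100: the domino effect sweeps up the whole crowd
    print(cascade(tweaked))   # 1: a one-person 'tantrum'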

  The emergence of collective behaviour can also be relevant to counter-terrorism. Are potential terrorists recruited into an existing hierarchy, or do they form groups organically? In 2016, physicist Neil Johnson led an analysis looking at how support for the so-called Islamic State grew online. Combing through discussions on social networks, his team found that supporters aggregated in progressively larger groups, before breaking apart into smaller ones when the authorities shut them down. Johnson has compared the process to a school of fish splitting and reforming around predators. Despite gathering into distinct groups, Islamic State supporters didn’t seem to have a consistent hierarchy.[56] In their studies of global insurgency, Johnson and his collaborators have argued that these collective dynamics in terrorist groups could explain why large attacks are so much less frequent than smaller ones.[57]
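
  The kind of dynamics Johnson described can be mimicked with a generic coalescence–fragmentation simulation: groups repeatedly merge, and occasionally a group is shut down and scatters back into individuals. The Python sketch below is in that spirit only; the population size, number of steps and shutdown probability are illustrative assumptions, not values fitted in Johnson’s study.

    import random

    random.seed(2)

    groups = [1] * 1000        # everyone starts on their own
    P_SHUTDOWN = 0.05          # chance the selected group gets shut down

    for _ in range(20_000):
        # pick a group with probability proportional to its size
        a = random.choices(range(len(groups)), weights=groups)[0]
        if random.random() < P_SHUTDOWN:
            # shut down: the group shatters back into individuals
            groups.extend([1] * (groups[a] - 1))
            groups[a] = 1
        else:
            # otherwise it absorbs another group, picked the same way
            b = random.choices(range(len(groups)), weights=groups)[0]
            if a != b:
                groups[a] += groups[b]
                groups.pop(b)

    print(sorted(groups, reverse=True)[:10])   # the largest aggregates at the end of the run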

  Although Johnson’s study of Islamic State activity aimed to understand the ecosystem of extremism – how groups form, grow, and dissipate – the media preferred to focus on whether it could accurately predict attacks. Unfortunately, predictions are probably still beyond the reach of such methods. But at least it was possible to see what the underlying methods were. According to J.M. Berger, a fellow at George Washington University who researches extremism, it’s rare to see such transparent analysis of terrorism. ‘There are a lot of companies that claim to be able to do what this study is claiming,’ he told the New York Times after the study was published, ‘and a lot of those companies seem to me to be selling snake oil.’[58]

  Prediction is a difficult business. It’s not just a matter of anticipating the timing of a terrorist attack; governments also have to consider the method that may be used, and the potential impact that method will have. In the weeks following the 9/11 attacks in 2001, several people in the US media and Congress received letters containing toxic anthrax bacteria. It led to five deaths, raising concerns that other bioterrorist attacks may follow.[59] One of the top threats was thought to be smallpox. Despite having been eradicated in the wild, samples of the virus were still stored in two government labs, one in the US and one in Russia. What if other, unreported, smallpox viruses were out there and fell into the wrong hands?

  Using mathematical models, several research groups tried to estimate what might happen if terrorists released the virus into a human population. Most concluded that an outbreak would grow quickly unless pre-emptive control measures were in place. Soon after, the US Government decided to offer half a million healthcare workers vaccination against the virus. There was limited enthusiasm for the plan: by the end of 2003, fewer than 40,000 workers had opted for the vaccine.

  In 2006, Ben Cooper, then a mathematical modeller at the UK Health Protection Agency, wrote a high-profile paper critiquing the approaches used to assess the smallpox risk. He titled it ‘Poxy Models and Rash Decisions’. According to Cooper, several models included questionable assumptions, with one particularly prominent example. ‘Collective eyebrows were raised when the Centers for Disease Control’s model completely neglected contact tracing and forecast 77 trillion cases if the epidemic went unchecked,’ he noted. Yes, you read that correctly. Despite there being fewer than 7 billion people in the world at the time, the model had assumed that there were an infinite number of susceptible people that could become infected, which meant transmission would continue indefinitely. Although the CDC researchers acknowledged it was a major simplification, it was bizarre to see an outbreak study make an assumption that was so dramatically detached from reality.[60]

  Still, one of the advantages of a simple model is that it’s usually easy to spot when – and why – it’s wrong. It’s also easier to debate the usefulness of that model. Even if someone has limited experience with mathematics, they can see how the assumptions influence the results. You don’t need to know any calculus to notice that if researchers assume a high level of smallpox transmission and an unlimited number of susceptible people, it can lead to an unrealistically large epidemic.
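
  The effect of that particular assumption is easy to demonstrate. The sketch below compares two toy outbreaks with the same made-up reproduction number; the only difference is whether the pool of susceptible people is finite. The numbers are chosen purely for illustration and are not those used in the CDC’s model.

    POPULATION = 7_000_000_000   # roughly the world population at the time
    R = 5                        # assumed new infections caused by each case

    def unlimited_susceptibles(generations):
        cases, total = 1, 1
        for _ in range(generations):
            cases *= R           # every case always finds R new people to infect
            total += cases
        return total

    def finite_susceptibles(generations):
        cases, total = 1.0, 1.0
        for _ in range(generations):
            susceptible_fraction = max(0.0, 1 - total / POPULATION)
            cases *= R * susceptible_fraction   # transmission slows as susceptibles run out
            total = min(POPULATION, total + cases)
        return int(total)

    print(unlimited_susceptibles(20))   # roughly 10**14 'cases' - over a hundred trillion
    print(finite_susceptibles(20))      # can never exceed the number of people actually alive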

  As models become more complicated, with lots of different features and assumptions, it gets harder to identify their flaws. This creates a problem, because even the most sophisticated mathematical models are a simplification of a messy, complex reality. It’s analogous to building a child’s model train set. No matter how many features are added – miniature signals, numbers on the carriages, timetables full of delays – it is still just a model. We can use it to understand aspects of the real thing, but there will always be some ways in which the model will differ from the true situation. What’s more, additional features may not make a model better at representing what we need it to. When it comes to building models, there is always a risk of confusing detail with accuracy. Suppose that in our train set all the trains are driven by intricately carved and painted zoo animals. It might be a very detailed model, but it’s not a realistic one.[61]

  In his critique, Cooper noted that other, more detailed smallpox models had come to similarly pessimistic conclusions about the potential for a large outbreak. Despite the additional detail, though, the models still contained an unrealistic feature: they had assumed that most transmission occurred before people developed the distinctive smallpox rash. Real-life data suggested otherwise, with the majority of transmission happening after the rash appeared. This would make it much easier to spot who was infectious, and hence control the disease through quarantine rather than requiring widespread vaccination.

  From disease epidemics to terrorism and crime, forecasts can help agencies plan and allocate resources. They can also help draw attention to a problem, persuading people that there is a need to allocate resources in the first place. A prominent example of such analysis was published in September 2014. In the midst of the Ebola epidemic that was sweeping across several parts of West Africa, the CDC announced that there could be 1.4 million cases by the following January if nothing changed.[62] Viewed in terms of Nightingale-style advocacy, the message was highly effective: the analysis caught the world’s attention, attracting widespread media coverage. Like several other studies around that time, it suggested that a rapid response was needed to control the epidemic in West Africa. But the CDC estimate soon attracted criticism from the wider disease research community.

  One issue was the analysis itself. The CDC group behind the number was the same one that had come up with those smallpox estimates. They’d used a similar model, with an unlimited number of susceptible people. If their Ebola model had run until April 2015, rather than January, it would have estimated over 30 million future cases, far more than the combined populations of the countries affected.[63] Many researchers questioned the appropriateness of using a very simple model to estimate how Ebola might be spreading five months later. I was one of them. ‘Models can provide useful information about how Ebola might spread in the next month or so,’ I told journalists at the time, ‘but it is near impossible to make accurate longer-term forecasts’.[64]
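
  The arithmetic behind that extrapolation is straightforward. If the epidemic really kept doubling on a fixed timescale – the twenty days below is an illustrative assumption, not the CDC’s published parameter – then the same reasoning that gives 1.4 million cases by late January gives tens of millions a few months later.

    JANUARY_SCENARIO = 1_400_000   # the CDC's 'if nothing changes' figure
    DOUBLING_TIME_DAYS = 20        # illustrative assumption
    EXTRA_DAYS = 90                # roughly late January to late April

    april_scenario = JANUARY_SCENARIO * 2 ** (EXTRA_DAYS / DOUBLING_TIME_DAYS)
    print(f"{april_scenario:,.0f}")   # just over 30 million projected cases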

  To be clear, there are some very good researchers within the wider CDC, and the Ebola model was just one output from a large research community there. But it does illustrate the challenges of producing and communicating high-profile outbreak analysis. One problem with flawed predictions is that they reinforce the idea that models aren’t particularly useful. If models produce incorrect forecasts, the argument goes, why should people pay attention to them?

  We face a paradox when it comes to forecasting outbreaks. Although pessimistic weather forecasts won’t affect the size of a storm, outbreak predictions can influence the final number of cases. If a model suggests the outbreak is a genuine threat, it may trigger a major response from health agencies. And if this brings the outbreak under control, it means the original forecast will be wrong. It’s therefore easy to confuse a useless forecast (i.e. one predicting something that would never have happened anyway) with a useful one, which predicted something that would have happened had agencies not intervened. Similar situations can occur in other fields. In the run-up to the year 2000, governments and companies spent hundreds of billions of dollars globally to counter the ‘Millennium bug’. Originally a feature to save storage in early computers by abbreviating dates, the bug had propagated through modern systems. Because of the efforts to fix the problem, the damage was limited in reality, which led many media outlets to complain that the risk had been overhyped.[65]

  Strictly speaking, the CDC Ebola estimate avoided this problem because it wasn’t actually a forecast; it was one of several scenarios. Whereas a forecast describes what we think will happen in the future, a scenario shows what could happen under a specific set of assumptions. The estimate of 1.4 million cases assumed the epidemic would continue to grow at the exact same rate. If disease control measures were included in the model, it predicted far fewer cases. But once numbers are picked up, they can stick in the memory, fuelling scepticism about the kinds of models that created them. ‘Remember the 1 million Ebola cases predicted by CDC in fall 2014,’ tweeted Joanne Liu, International President of Médecins Sans Frontières (MSF), in response to a 2018 article about forecasting.[66] ‘Modeling has also limits.’

  Even if the 1.4 million estimate was just a scenario, it still implied a baseline: if nothing had changed, that is what would have happened. During the 2013–2016 epidemic, almost 30,000 cases of Ebola were reported across Liberia, Sierra Leone and Guinea. Did the introduction of control measures by Western health agencies really prevent over 1.3 million cases?[67]

  In the field of public health, people often refer to disease control measures as ‘removing the pump handle’. It’s a nod to John Snow’s work on cholera, and the removal of the handle on the Broad Street pump. There’s just one problem with this phrase: when the pump handle came off on 8 September 1854, London’s cholera outbreak was already well in decline. Most of the people at risk had either caught the infection already, or fled the area. If we’re being accurate, ‘removing the pump handle’ should really refer to a control measure that’s useful in theory, but delivered too late.

 
